Podcasts about Schmidhuber

  • 43 PODCASTS
  • 86 EPISODES
  • 37m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • LATEST: Feb 12, 2025

POPULARITY

(popularity trend chart, 2017–2024)


Best podcasts about Schmidhuber

Latest podcast episodes about Schmidhuber

Machine Learning Street Talk
Sepp Hochreiter - LSTM: The Comeback Story?

Machine Learning Street Talk

Play Episode Listen Later Feb 12, 2025 67:01


Sepp Hochreiter, the inventor of LSTM (Long Short-Term Memory) networks – a foundational technology in AI – discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective on Large Language Models (LLMs) and why reasoning is a critical missing piece in current AI systems.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/n1vzm79t3uuss8xyinxzo/SEPPH.pdf?rlkey=fp7gwaopjk17uyvgjxekxrh5v&dl=0

Prof. Sepp Hochreiter:
https://www.nx-ai.com/
https://x.com/hochreitersepp
https://scholar.google.at/citations?user=tvUH3WMAAAAJ&hl=en

TOC:
1. LLM Evolution and Reasoning Capabilities
[00:00:00] 1.1 LLM Capabilities and Limitations Debate
[00:03:16] 1.2 Program Generation and Reasoning in AI Systems
[00:06:30] 1.3 Human vs AI Reasoning Comparison
[00:09:59] 1.4 New Research Initiatives and Hybrid Approaches
2. LSTM Technical Architecture
[00:13:18] 2.1 LSTM Development History and Technical Background
[00:20:38] 2.2 LSTM vs RNN Architecture and Computational Complexity
[00:25:10] 2.3 xLSTM Architecture and Flash Attention Comparison
[00:30:51] 2.4 Evolution of Gating Mechanisms from Sigmoid to Exponential
3. Industrial Applications and Neuro-Symbolic AI
[00:40:35] 3.1 Industrial Applications and Fixed Memory Advantages
[00:42:31] 3.2 Neuro-Symbolic Integration and Pi AI Project
[00:46:00] 3.3 Integration of Symbolic and Neural AI Approaches
[00:51:29] 3.4 Evolution of AI Paradigms and System Thinking
[00:54:55] 3.5 AI Reasoning and Human Intelligence Comparison
[00:58:12] 3.6 NXAI Company and Industrial AI Applications

REFS:
[00:00:15] Seminal LSTM paper establishing Hochreiter's expertise (Hochreiter & Schmidhuber) https://direct.mit.edu/neco/article-abstract/9/8/1735/6109/Long-Short-Term-Memory
[00:04:20] Kolmogorov complexity and program composition limitations (Kolmogorov) https://link.springer.com/article/10.1007/BF02478259
[00:07:10] Limitations of LLM mathematical reasoning and symbolic integration (Various Authors) https://www.arxiv.org/pdf/2502.03671
[00:09:05] AlphaGo's Move 37 demonstrating creative AI (Google DeepMind) https://deepmind.google/research/breakthroughs/alphago/
[00:10:15] New AI research lab in Zurich for fundamental LLM research (Benjamin Crouzier) https://tufalabs.ai
[00:19:40] Introduction of xLSTM with exponential gating (Beck, Hochreiter, et al.) https://arxiv.org/abs/2405.04517
[00:22:55] FlashAttention: fast & memory-efficient attention (Tri Dao et al.) https://arxiv.org/abs/2205.14135
[00:31:00] Historical use of sigmoid/tanh activation in 1990s (James A. McCaffrey) https://visualstudiomagazine.com/articles/2015/06/01/alternative-activation-functions.aspx
[00:36:10] Mamba 2 state space model architecture (Albert Gu et al.) https://arxiv.org/abs/2312.00752
[00:46:00] Austria's Pi AI project integrating symbolic & neural AI (Hochreiter et al.) https://www.jku.at/en/institute-of-machine-learning/research/projects/
[00:48:10] Neuro-symbolic integration challenges in language models (Diego Calanzone et al.) https://openreview.net/forum?id=7PGluppo4k
[00:49:30] JKU Linz's historical and neuro-symbolic research (Sepp Hochreiter) https://www.jku.at/en/news-events/news/detail/news/bilaterale-ki-projekt-unter-leitung-der-jku-erhaelt-fwf-cluster-of-excellence/

YT: https://www.youtube.com/watch?v=8u2pW2zZLCs
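The shift from sigmoid to exponential gating discussed at [00:30:51] can be illustrated numerically: a sigmoid gate saturates at 1, so no single input can dominate what is already stored, while an unbounded exponential gate can effectively overwrite the memory. The sketch below is my own simplification for illustration (the function name, the max-subtraction trick, and the sum-normalisation stand in for xLSTM's stabiliser and normaliser states; it is not the published xLSTM equations):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_compare(preacts):
    """Compare normalised sigmoid gates vs. exponential gates.

    Sigmoid gate values are capped at 1, so after normalisation the
    weight on any one input stays bounded. Exponential gates are
    unbounded, so a strongly weighted input can dominate the mixture;
    subtracting the max before exponentiating keeps it numerically
    stable (a simplified stand-in for xLSTM's stabiliser state).
    """
    sig = sigmoid(preacts)
    exp = np.exp(preacts - preacts.max())
    return sig / sig.sum(), exp / exp.sum()
```

For a pre-activation vector like `[0, 0, 6]`, the exponential variant puts nearly all its weight on the third input, while the saturated sigmoid variant cannot give it much more weight than the others.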

Eye On A.I.
#232 Sepp Hochreiter: How LSTMs Power Modern AI Systems

Eye On A.I.

Play Episode Listen Later Jan 22, 2025 51:08


In this special episode of the Eye on AI podcast, Sepp Hochreiter, the inventor of Long Short-Term Memory (LSTM) networks, joins Craig Smith to discuss the profound impact of LSTMs on artificial intelligence, from language models to real-time robotics. Sepp reflects on the early days of LSTM development, sharing insights into his collaboration with Jürgen Schmidhuber and the challenges they faced in gaining recognition for their groundbreaking work. He explains how LSTMs became the foundation for technologies used by giants like Amazon, Apple, and Google, and how they paved the way for modern advancements like transformers.

Topics include:
- The origin story of LSTMs and their unique architecture.
- Why LSTMs were crucial for sequence data like speech and text.
- The rise of transformers and how they compare to LSTMs.
- Real-time robotics: using LSTMs to build energy-efficient, autonomous systems.
- The next big challenges for AI and robotics in the era of generative AI.

Sepp also shares his optimistic vision for the future of AI, emphasizing the importance of efficient, scalable models and their potential to revolutionize industries from healthcare to autonomous vehicles. Don't miss this deep dive into the history and future of AI, featuring one of its most influential pioneers.

(00:00) Introduction: Meet Sepp Hochreiter
(01:10) The Origins of LSTMs
(02:26) Understanding the Vanishing Gradient Problem
(05:12) Memory Cells and LSTM Architecture
(06:35) Early Applications of LSTMs in Technology
(09:38) How Transformers Differ from LSTMs
(13:38) Exploring xLSTM for Industrial Applications
(15:17) AI for Robotics and Real-Time Systems
(18:55) Expanding LSTM Memory with Hopfield Networks
(21:18) The Road to xLSTM Development
(23:17) Industrial Use Cases of xLSTM
(27:49) AI in Simulation: A New Frontier
(32:26) The Future of LSTMs and Scalability
(35:48) Inference Efficiency and Potential Applications
(39:53) Continuous Learning and Adaptability in AI
(42:59) Training Robots with xLSTM Technology
(44:47) NXAI: Advancing AI in Industry
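The memory cells and gates discussed in this episode (02:26–05:12) can be sketched in a few lines of NumPy. This is an illustrative single-step LSTM cell, not code from the show; the combined weight layout and variable names are my own. The key point is the additive cell update `c = f * c_prev + i * g`, which lets error signals flow back through time largely unattenuated, mitigating the vanishing-gradient problem Hochreiter analysed in 1991:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    W stacks all four gate weight matrices, each of shape
    (hidden, input + hidden); b stacks the four bias vectors.
    """
    z = W @ np.concatenate([x, h_prev]) + b       # all gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input / forget / output gates
    g = np.tanh(g)                                # candidate cell content
    c = f * c_prev + i * g                        # additive memory update
    h = o * np.tanh(c)                            # gated hidden state
    return h, c
```

Because the cell state is updated additively and only scaled by the forget gate, the recurrent Jacobian along the cell path stays close to the identity when `f` is near 1, which is exactly what a multiplicative vanilla-RNN recurrence lacks.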

Machine Learning Street Talk
Jurgen Schmidhuber on Humans co-existing with AIs

Machine Learning Street Talk

Play Episode Listen Later Jan 16, 2025 72:50


Jürgen Schmidhuber, the father of generative AI, challenges current AI narratives, arguing that early deep learning work is, in his opinion, misattributed and actually originated in Ukraine and Japan. He discusses his early work on linear transformers and artificial curiosity, which preceded modern developments, shares his expansive vision of AI colonising space, and explains his groundbreaking 1991 consciousness model. Schmidhuber dismisses fears of human-AI conflict, arguing that superintelligent AI scientists will be fascinated by their own origins and motivated to protect life rather than harm it, while being more interested in other superintelligent AIs and in cosmic expansion than in earthly matters. He offers unique insights into how humans and AI might coexist. This is the long-awaited second, previously unreleased part of the interview we filmed last time.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Go to https://tufalabs.ai/
***

Interviewer: Tim Scarfe

TOC:
[00:00:00] The Nature and Motivations of AI
[00:02:08] Influential Inventions: 20th vs. 21st Century
[00:05:28] Transformer and GPT: A Reflection – the revolutionary impact of modern language models, the 1991 linear transformer, linear vs. quadratic scaling, the fast weight controller, and fast weight matrix memory.
[00:11:03] Pioneering Contributions to AI and Deep Learning – the invention of the transformer, pre-trained networks, the first GANs, the role of predictive coding, and the emergence of artificial curiosity.
[00:13:58] AI's Evolution and Achievements – the role of compute, breakthroughs in handwriting recognition and computer vision, the rise of GPU-based CNNs, achieving superhuman results, and Japanese contributions to CNN development.
[00:15:40] The Hardware Lottery and GPUs – GPUs as a serendipitous advantage for AI, the gaming-AI parallel, and Nvidia's strategic shift towards AI.
[00:19:58] AI Applications and Societal Impact – AI-powered translation breaking communication barriers, AI in medicine for imaging and disease prediction, and AI's potential for human enhancement and sustainable development.
[00:23:26] The Path to AGI and Current Limitations – distinguishing large language models from AGI, challenges in replacing physical-world workers, and AI's difficulty in real-world versus board games.
[00:25:56] AI and Consciousness – simulating consciousness through unsupervised learning, chunking and automatizing neural networks, data compression, and self-symbols in predictive world models.
[00:30:50] The Future of AI and Humanity – the transition from AGIs as tools to AGIs with their own goals, the role of humans in an AGI-dominated world, and the concept of Homo Ludens.
[00:38:05] The AI Race: Europe, China, and the US – Europe's historical contributions, current dominance of the US and East Asia, and the role of venture capital and industrial policy.
[00:50:32] Addressing AI Existential Risk – the obsession with AI existential risk, commercial pressure for friendly AIs, AI vs. hydrogen bombs, and the long-term future of AI.
[00:58:00] The Fermi Paradox and Extraterrestrial Intelligence – expanding AI bubbles as an explanation for the Fermi paradox, dark matter and encrypted civilizations, and Earth as the first to spawn an AI bubble.
[01:02:08] The Diversity of AI and AI Ecologies – the unrealism of a monolithic superintelligence, diverse AIs with varying goals, and intense competition and collaboration in AI ecologies.
[01:12:21] Final Thoughts and Closing Remarks

REFERENCES: See pinned comment on YT: https://youtu.be/fZYUqICYCAk

Redefining AI - Artificial Intelligence with Squirro
Dr. Imanol Schlag - Pioneering the Future of AI - Language Models and the Swiss AI Initiative

Redefining AI - Artificial Intelligence with Squirro

Play Episode Listen Later Dec 21, 2024 23:27


Join host Lauren Hawker Zafer, Squirro CMO, as she sits down with Dr. Imanol Schlag, a prominent researcher in the field of artificial intelligence and machine learning. Currently serving as a postdoctoral researcher at ETH Zurich, Dr. Schlag has made significant contributions to the development of advanced neural network architectures and reasoning models. His academic journey includes a PhD under the guidance of renowned AI pioneer Jürgen Schmidhuber at the Swiss AI Lab, where he explored innovative approaches to enhancing machine learning capabilities. Dr. Schlag's research focuses on the intersection of language models and quantitative reasoning, with several of his papers gaining recognition at top-tier conferences. He is known for his work on Linear Transformers and their applications in fast weight programming, which has generated substantial attention in the AI community. Join us as we dive into Dr. Schlag's insights on the future of AI, the challenges of developing trustworthy models, and his vision for the role of machine learning in Swiss society. Whether you're an AI enthusiast or just curious about the latest advancements in technology, this conversation promises to be both enlightening and inspiring. Don't forget to subscribe and share your favourite episodes with your friends! #techpodcast #squirro #ai #transformation

Redefining AI - Artificial Intelligence with Squirro
Spotlight Seventeen - Pioneering the Future of AI - Language Models and the Swiss AI Initiative

Redefining AI - Artificial Intelligence with Squirro

Play Episode Listen Later Dec 17, 2024 1:59


Season Three - Spotlight Seventeen. Our seventeenth spotlight of this season is a snippet of our upcoming episode: Pioneering the Future of AI - Language Models and the Swiss AI Initiative. Join host Lauren Hawker Zafer, Squirro CMO, as she sits down with Dr. Imanol Schlag, a prominent researcher in the field of artificial intelligence and machine learning. Currently serving as a postdoctoral researcher at ETH Zurich, Dr. Schlag has made significant contributions to the development of advanced neural network architectures and reasoning models. His academic journey includes a PhD under the guidance of renowned AI pioneer Jürgen Schmidhuber at the Swiss AI Lab, where he explored innovative approaches to enhancing machine learning capabilities. Dr. Schlag's research focuses on the intersection of language models and quantitative reasoning, with several of his papers gaining recognition at top-tier conferences. He is known for his work on Linear Transformers and their applications in fast weight programming, which has generated substantial attention in the AI community. Join us as we dive into Dr. Schlag's insights on the future of AI, the challenges of developing trustworthy models, and his vision for the role of machine learning in society. Whether you're an AI enthusiast or just curious about the latest advancements in technology, this conversation promises to be both enlightening and inspiring.

BBVA Aprendemos Juntos
Jurgen Schmidhuber: What can artificial intelligence do for you?

BBVA Aprendemos Juntos

Play Episode Listen Later Nov 21, 2024 63:18


In the 1980s, a young admirer of Einstein with a passion for science had a dream: to create an artificial scientist capable of solving the mysteries of the universe. At the time, everyone thought he was crazy. This young man was Jürgen Schmidhuber, a German computer scientist who is now considered "the father" of modern artificial intelligence. As he explains, "In the 1990s, we began the research that led to the development of AI, but back then, no one was interested in the topic." However, the algorithms he and his team developed during those years "are now in our smartphones, translators, ChatGPT, and countless applications that are part of our daily lives in the 21st century," he adds. His work has been internationally recognized, and he is considered one of the pioneers of deep learning. He is also a key developer of so-called "artificial neural networks" and a staunch advocate of the "Artificial General Intelligence" (AGI) approach, which seeks to create systems that can learn and reason similarly to humans. Despite the suspicion and fear surrounding AI today, Schmidhuber defends its applications in fields such as medicine, language, and the Sustainable Development Goals (SDGs), including combating climate change. "If used correctly, artificial intelligence can help prevent environmental disasters such as droughts and floods, improve global issues like air quality, and, in the field of medicine, help us prevent and detect diseases like cancer or cardiovascular conditions," he explains.

The Retort AI Podcast
The Nobel Albatross

The Retort AI Podcast

Play Episode Listen Later Oct 11, 2024 44:55


Tom and Nate catch up on the happenings in AI. Of course, we're focused on the biggest awards available to us as esteemed scientists (or something close enough): the Nobel Prizes! What does it mean in the trajectory of AI for Hinton and Hassabis to carry added scientific weight? Honestly, it feels like a sinking ship.

Some links:
* Schmidhuber tweet: https://x.com/SchmidhuberAI/status/1844022724328394780
* Hinton "I'm proud my student fired Sam": https://x.com/Grady_Booch/status/1844145422824243290

00:00 Introduction
04:43 Criticism of AI-related Nobel Prize awards
09:06 Geoffrey Hinton's comments on winning the Nobel Prize
18:14 Debate on who should be credited for current AI advancements
25:53 Changes in the nature of scientific research and recognition
34:44 Changes in AI safety culture and company dynamics
37:27 Discussion on AI scaling and its impact on the industry
42:21 Reflection on the ongoing AI hype cycle

Retort on YouTube: https://www.youtube.com/@TheRetortAIPodcast
Retort on Twitter: https://x.com/retortai
Retort website: https://retortai.com/
Retort email: mail at retortai dot com

Machine Learning Street Talk
Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

Machine Learning Street Talk

Play Episode Listen Later Aug 28, 2024 99:39


Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.

YT version: https://youtu.be/DP454c1K_vQ

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now: get 2,000 free queries monthly at http://brave.com/api.

TOC:
00:00:00 Intro
00:03:38 Reasoning
00:13:09 Potential AI Breakthroughs Reducing Computation Needs
00:20:39 Memorization vs. Generalization in AI
00:25:19 Approach to the ARC Challenge
00:29:10 Perceptions of Chat GPT and AGI
00:58:45 Abstract Principles of Jurgen's Approach
01:04:17 Analogical Reasoning and Compression
01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI
01:15:50 Use of LSTM in Language Models by Tech Giants
01:21:08 Neural Network Aspect Ratio Theory
01:26:53 Reinforcement Learning Without Explicit Teachers

Refs:
★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber)
★ Chain Rule For Backward Credit Assignment (Leibniz, 1676)
★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800)
★ First 20th Century Pioneer of Practical AI (Quevedo, 1914)
★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925)
★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34)
★ Unpublished ideas about evolving RNNs (Turing, 1948)
★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958)
★ First Published Learning RNNs (Amari and others, ~1972)
★ First Deep Learning (Ivakhnenko & Lapa, 1965)
★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68)
★ ReLUs (Fukushima, 1969)
★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)
★ Backpropagation for NNs (Werbos, 1982)
★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988)
★ Metalearning or Learning to Learn (Schmidhuber, 1987)
★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT)
★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990)
★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)
★ Deep Learning by Self-Supervised Pre-Training; Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)
★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991; further developed 1999-2015 with other students of Schmidhuber)
★ LSTM journal paper (1997, most cited AI paper of the 20th century)
★ xLSTM (Hochreiter, 2024)
★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber, 2015)
★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team) https://arxiv.org/abs/2305.17066
★ Bremermann's physical limit of computation (1982)

EXTERNAL LINKS:
CogX 2018 - Professor Juergen Schmidhuber https://www.youtube.com/watch?v=17shdT9-wuA
Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997) https://sferics.idsia.ch/pub/juergen/loconet.pdf
The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy https://www.youtube.com/watch?v=I4pQbo5MQOs

(Refs truncated; full version in the YT video description)

FAZ Digitec
Steht in der KI der nächste Durchbruch bevor, Sepp Hochreiter?

FAZ Digitec

Play Episode Listen Later Jul 5, 2024 49:52


In his diploma thesis at the start of the 1990s, Sepp Hochreiter devised a small AI revolution. His learning algorithm went by the acronym LSTM, short for Long Short-Term Memory. It gave AI something like a usable memory: ever since, computers have been able to handle sequences much better. When he publicized and further developed the idea together with Jürgen Schmidhuber, at first not even experts recognized the power it held. Ultimately, LSTM lifted language processing to a new level. Every tech company from Google to Microsoft used it; by 2017 it was, so to speak, in every smartphone. Then researchers at Google published a paper introducing the foundation of the large language models that are so successful today and that became famous, and a mass phenomenon, through ChatGPT. But these AI systems too have limits, failing at tasks that are easy for the brain. Now Hochreiter, who today teaches and does research in Linz, Austria, has come up with what may be the next groundbreaking idea of his own: he recently presented his extension of LSTM, under the acronym xLSTM, short for Extended Long Short-Term Memory. He claims to have reduced the previous weaknesses while improving its advantages over the currently fashionable Transformer models. How does it work? What is behind it? How far along is his company NXAI? We discuss all of this in this episode.

FUTURE FOSSILS

In this episode we're joined by Andrés Gómez Emilsson, President and Director of Research at the Qualia Research Institute (QRI), with whom we go deep on their computational approach to probing the mysteries of consciousness and the psychedelic experience – and thereby, perhaps, making the world a substantially happier place. Join us for an adventurous dialogue at the intersections of phenomenology, spirituality, and mathematics… with stops along the way to ask about the neurobiological construction of time's arrow(s), the geometry of DMT space, and the ethical challenges of creating conscious computers. It's a trip…!

00:00:00 Intro, Thanks, and News
00:11:18 Dialogue Starts
00:14:15 The Origins of Qualia Research Institute
00:17:37 The Importance of Consciousness Research
00:22:18 Phenomenology and Symmetry
00:47:38 The Hyperbolic Geometry of DMT
00:54:12 Avoiding Dissonance in Psychedelic States
00:56:09 Complexity, Music, and Cognitive Processing
00:57:22 Future Shock and Technological Overwhelm
01:04:28 Pharmacological Adaptations to Technology
01:08:39 Temporal Perception and Psychedelics

✨ Support This Work:
• Subscribe on Substack or Patreon.
• Help me pitch my next big projects Humans On The Loop & Jurassic Worlding.
• Join the Holistic Technology & Wise Innovation Server, the Future Fossils Server, and the Future Fossils FB Group.
• Make one-off donations at @futurefossils on Venmo, $manfredmacx on CashApp, or @michaelgarfield on PayPal.
• Buy the music on Bandcamp – this episode features "You Don't Have To Move" off The Age of Reunion & "Sonnet A" off Double-Edged Sword.
• Buy the books we discuss at the Future Fossils Bookshop.org reading list.
• Browse original paintings and prints or commission new work.

✨ Related Episodes:
212 - Manfred Laubichler & Geoffrey West
176 - Richard Doyle
128, 165, 203 - Kevin Kelly
99, 132, 140 - Erik Davis
131 - Jessica Nielson & Link Swanson
42, 43 - William Irwin Thompson
111, 199 - Android Jones
14, 52, 161 - Michael Philip
57, 140, 153 - Mitch Mignano
60, 113, 150 - Sean Esbjörn-Hargens

✨ Mentioned:
QRI Research Lineages
Andrés' Noonautics Advisory Board Bio
The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes
The Pseudo-Time Arrow
Non-Ordinary States of Consciousness Contest: Psychedelic Cryptography (Innovate)
Digital Sentience Requires Solving the Boundary Problem
Qualia Mastery (Guided Meditations, Part 1 & 2)
Principia Qualia by Michael Edward Johnson
The Psychedelic Transhumanists: A Virtual Round Table Between Legends Living & Dead by Michael Garfield
One Half A Manifesto by Jaron Lanier
Jürgen Schmidhuber's Homepage
The Peripheral (TV series adapting William Gibson)
Toward A New Evolutionary Paradigm 1.0 by Michael Garfield
An 'Integrated Mess of Music Lovers' in Science by Michael Garfield for SFI
Westworld (TV series adaptation)
Sean McGowan, Mike Johnson, David Pearce, Andrew Gallimore, Jim O'Shaughnessy, Giulio Tononi, Karl Friston, Robin Carhart-Harris, Ilya Prigogine, Jaron Lanier, Steven Lehar, Rupert Sheldrake, William Gibson, Jürgen Schmidhuber, Alain Goriely, Darren Zhu, Hugh Everett, Sean Carroll, Isaac Newton, Stephen Wolfram, Chris Langton, James C. Scott, H. P. Lovecraft, Noonautics

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe

OMR Podcast
Jürgen Schmidhuber, "Vater" der modernen KI (#693)

OMR Podcast

Play Episode Listen Later May 1, 2024 70:42


AI pioneer Jürgen Schmidhuber talks in the OMR Podcast about the opportunities and dangers of artificial intelligence and the significance of Germany as a technology location. The German computer scientist developed the decisive foundations for the algorithms that are currently upending the world as we know it. He continues to do research, tweets, and blogs about AI. What, in Schmidhuber's view, needs to happen to keep AI experts trained in Germany in the country, how Elon Musk came to invite him to a family celebration, and his assessment of whether we have in fact long been living in the Matrix: these are just a few of the topics of this OMR Podcast episode.

Verbrauchertipp - Deutschlandfunk
Tierhaltung in der Wohnung

Verbrauchertipp - Deutschlandfunk

Play Episode Listen Later Mar 11, 2024 3:19


By Elke Schmidhuber. www.deutschlandfunk.de, Verbrauchertipp

Machine Learning Street Talk
Prof. Jürgen Schmidhuber - FATHER OF AI ON ITS DANGERS

Machine Learning Street Talk

Play Episode Listen Later Aug 14, 2023 81:03


Please check out Numerai, our sponsor: http://numer.ai/mlst
Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Professor Jürgen Schmidhuber, the father of artificial intelligence, joins us today. Schmidhuber discussed the history of machine learning, the current state of AI, and his career researching recursive self-improvement, artificial general intelligence and its risks.

Schmidhuber pointed out the importance of studying the history of machine learning to properly assign credit for key breakthroughs. He discussed some of the earliest machine learning algorithms. He also highlighted the foundational work of Leibniz, who discovered the chain rule that enables training of deep neural networks, and the ancient Antikythera mechanism, the first known gear-based computer.

Schmidhuber discussed limits to recursive self-improvement and artificial general intelligence, including physical constraints like the speed of light and what can be computed. He noted we have no evidence the human brain can do more than traditional computing. Schmidhuber sees humankind as a potential stepping stone to more advanced, spacefaring machine life, which may have little interest in humanity. However, he believes commercial incentives point AGI development towards being beneficial, and that open-source innovation can help to achieve "AI for all", symbolised by his company's motto "AI∀".

Schmidhuber discussed approaches he believes will lead to more general AI, including meta-learning, reinforcement learning, building predictive world models, and curiosity-driven learning. His "fast weight programming" approach from the 1990s involved one network altering another network's connections. This was actually the first Transformer variant, now called an unnormalised linear Transformer. He also described the first GANs in 1990, built to implement artificial curiosity.

Schmidhuber reflected on his career researching AI. He said his fondest memories were gaining insights that seemed to solve longstanding problems, though new challenges always arose: "then for a brief moment it looks like the greatest thing since sliced bread and then you get excited ... but then suddenly you realize, oh, it's still not finished. Something important is missing."

Since 1985 he has worked on systems that can recursively improve themselves, constrained only by the limits of physics and computability. He believes continual progress, shaped by both competition and collaboration, will lead to increasingly advanced AI.

On AI risk, Schmidhuber said: "To me it's indeed weird. Now there are all these letters coming out warning of the dangers of AI. And I think some of the guys who are writing these letters, they are just seeking attention because they know that AI dystopia are attracting more attention than documentaries about the benefits of AI in healthcare."

Schmidhuber believes we should be more concerned with existing threats like nuclear weapons than speculative risks from advanced AI. He said: "As far as I can judge, all of this cannot be stopped but it can be channeled in a very natural way that is good for humankind... there is a tremendous bias towards good AI, meaning AI that is good for humans... I am much more worried about 60 year old technology that can wipe out civilization within two hours, without any AI."

[this is truncated, read show notes]
YT: https://youtu.be/q27XMPm5wg8
Show notes: https://docs.google.com/document/d/13-vIetOvhceZq5XZnELRbaazpQbxLbf5Yi7M25CixEE/edit?usp=sharing
Note: Interview was recorded 15th June 2023.
https://twitter.com/SchmidhuberAI
Panel: Dr. Tim Scarfe @ecsquendor / Dr. Keith Duggar @DoctorDuggar
Pod version: TBA

TOC:
[00:00:00] Intro / Numerai
[00:00:51] Show Kick Off
[00:02:24] Credit Assignment in ML
[00:12:51] XRisk
[00:20:45] First Transformer variant of 1991
[00:47:20] Which Current Approaches are Good
[00:52:42] Autonomy / Curiosity
[00:58:42] GANs of 1990
[01:11:29] OpenAI, Moats, Legislation
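The "fast weight programming" idea described above – one network additively rewriting the weight matrix of another – can be sketched as an outer-product fast-weight memory, the mechanism now described as an unnormalised linear Transformer. This is an illustrative sketch under that interpretation (function and variable names are my own, not from the show):

```python
import numpy as np

def fast_weight_attention(queries, keys, values):
    """Unnormalised linear attention as a fast-weight memory.

    At each step the (key, value) pair additively 'programs' the fast
    weight matrix W via a rank-1 outer product; the query then reads
    the memory out. Unlike softmax attention, cost is linear in
    sequence length and the state is a fixed-size matrix.
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    W = np.zeros((d_v, d_k))              # fast weights, initially empty
    outputs = []
    for q, k, v in zip(queries, keys, values):
        W += np.outer(v, k)               # write: rank-1 weight update
        outputs.append(W @ q)             # read: retrieve with the query
    return np.stack(outputs)
```

Because `W` is a running sum of outer products, the output at step t equals the sum over past steps of `v_i * (k_i · q_t)`, i.e. attention with an unnormalised dot-product kernel.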

GPT Reviews
Google's AI Ads

GPT Reviews

Play Episode Listen Later May 25, 2023 15:49


Google introduces new AI-powered ad features, while 1X deploys humanoid robots for security and healthcare tasks, potentially addressing labor shortages. Juergen Schmidhuber shares his optimistic perspective on the potential of AI and why he does not fear a dystopian future. The team also dives into AI language model advancements, including textually pretrained speech language models, efficient finetuning of quantized LLMs, and aligning large language models through synthetic feedback.

Contact: sergi@earkind.com

Timestamps:
00:34 Introduction
01:53 Introducing a new era of AI-powered ads with Google
03:22 OpenAI-backed robot startup beats Elon Musk's Tesla, deploys AI-enabled robots in real world
05:56 Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia
07:24 Fake sponsor
09:23 Textually Pretrained Speech Language Models
10:53 QLoRA: Efficient Finetuning of Quantized LLMs
12:55 Aligning Large Language Models through Synthetic Feedback
14:38 Outro

Azeem Azhar's Exponential View
Azeem’s Picks: AI’s Near Future with Jürgen Schmidhuber

Azeem Azhar's Exponential View

Play Episode Listen Later May 12, 2023 30:22


Artificial intelligence (AI) is dominating the headlines, but it's not a new topic here on Exponential View. This week and next, Azeem Azhar shares his favorite conversations with AI pioneers. Their work and insights are more relevant than ever. Jürgen Schmidhuber is a recognized pioneer in the field of deep neural networks. His techniques form the basis of the modern AI systems used by billions of people daily on services like Google, Facebook, and the Apple iPhone. In 2019, Jürgen joined Azeem to discuss the next thirty years of artificial intelligence.

KI in der Industrie
The future of transformers and RNNs, Open Source and AI, LLMs for OPC UA and LLMs with memory

KI in der Industrie

Play Episode Listen Later May 10, 2023 50:49


We are looking into the future of Industrial AI. Our guest is Prof. Dr. Günter Klambauer from JKU Linz and we discuss the peak in LLMs, memory for LLMs and what the future of transformers looks like. In the news part we discuss the article about Jürgen Schmidhuber, talk about the leaked document about Open Source and AI and explain how to use an LLM for OPC UA. Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our new partner [Siemens](https://new.siemens.com/global/en/products/automation/topic-areas/artificial-intelligence-in-industry.html) Our guest: Prof. Dr. Günter Klambauer, JKU Linz ([more](https://www.linkedin.com/in/g%C3%BCnter-klambauer-1b73293a/)) **Shownotes:** OPC Foundation Podcast ([more](https://opcfoundation.org/resources/podcast/)) Godfather and father of AI discussion ([more](https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says)) OPC UA and LLMs ([more](https://github.com/OPCFoundation/UA-EdgeTranslator)) Open Source and AI ([more](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither))

Machine Learning Street Talk
#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

Machine Learning Street Talk

Play Episode Listen Later Apr 16, 2023 167:15


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, as they delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi made the crucial decision to pursue a PhD in this exciting field, and learn from his valuable startup experiences and lessons. Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning. As they discuss the role of environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models generate their own training data, the limitations of RL, and the future of software 2.0 with interpretability concerns. From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world! 
TOC Tech & Startup Background [00:00:00] Pursuing PhD in Deep RL [00:03:59] Startup Lessons [00:11:33] Serendipity vs Planning [00:12:30] Objectives & Decision Making [00:19:19] Minimax Regret & Uncertainty [00:22:57] Robustness in RL & Zero-Sum Games [00:26:14] RL vs Supervised Learning [00:34:04] Exploration & Intelligence [00:41:27] Environment, Emergence, Abstraction [00:46:31] Open-endedness & Intelligence Explosion [00:54:28] Language Models & Training Data [01:04:59] RLHF & Language Models [01:16:37] Creativity in Language Models [01:27:25] Limitations of RL [01:40:58] Software 2.0 & Interpretability [01:45:11] Language Models & Code Reliability [01:48:23] Robust Prioritized Level Replay [01:51:42] Open-ended Learning [01:55:57] Auto-curriculum & Deep RL [02:08:48] Robotics & Open-ended Learning [02:31:05] Learning Potential & MDPs [02:36:20] Universal Function Space [02:42:02] Goal-Directed Learning & Auto-Curricula [02:42:48] Advice & Closing Thoughts [02:44:47] References: - Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. 
Stanley and Joel Lehman https://www.springer.com/gp/book/9783319155234 - Rethinking Exploration: General Intelligence Requires Rethinking Exploration https://arxiv.org/abs/2106.06860 - The Case for Strong Emergence (Sabine Hossenfelder) https://arxiv.org/abs/2102.07740 - The Game of Life (Conway) https://www.conwaylife.com/ - Toolformer: Teaching Language Models to Generate APIs (Meta AI) https://arxiv.org/abs/2302.04761 - OpenAI's POET: Paired Open-Ended Trailblazer https://arxiv.org/abs/1901.01753 - Schmidhuber's Artificial Curiosity https://people.idsia.ch/~juergen/interest.html - Gödel Machines https://people.idsia.ch/~juergen/goedelmachine.html - PowerPlay https://arxiv.org/abs/1112.5309 - Robust Prioritized Level Replay: https://openreview.net/forum?id=NfZ6g2OmXEk - Unsupervised Environment Design: https://arxiv.org/abs/2012.02096 - Excel: Evolving Curriculum Learning for Deep Reinforcement Learning https://arxiv.org/abs/1901.05431 - Go-Explore: A New Approach for Hard-Exploration Problems https://arxiv.org/abs/1901.10995 - Learning with AMIGo: Adversarially Motivated Intrinsic Goals https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals PRML https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf Sutton and Barto https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf

The Nonlinear Library
LW - Where's the foom? by Fergus Fettes

The Nonlinear Library

Play Episode Listen Later Apr 12, 2023 3:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where's the foom?, published by Fergus Fettes on April 11, 2023 on LessWrong. "The first catastrophe mechanism seriously considered seems to have been the possibility, raised in the 1940s at Los Alamos before the first atomic bomb tests, that fission or fusion bombs might ignite the atmosphere or oceans in an unstoppable chain reaction." This is not our first rodeo. We have done risk assessments before. The best reference-class examples I could find were the bomb, vacuum decay, killer strangelets, and LHC black holes (all covered in ). I was looking for a few days and didn't complete my search, but I decided to publish this note as now Tyler Cowen is asking too: "Which is the leading attempt to publish a canonical paper on AGI risk, in a leading science journal, refereed of course. The paper should have a formal model or calibration of some sort, working toward the conclusion of showing that the relevant risk is actually fairly high. Is there any such thing?" The three papers people replied with were: - Is Power-Seeking AI an Existential Risk? - The Alignment Problem from a Deep Learning Perspective - Unsolved Problems in ML Safety Places I was looking so far: - The list of references for that paper - The references for the Muehlhauser and Salamon intelligence explosion paper - The Sandberg review of singularities and related papers (these are quite close to passing muster I think) Places I wanted to look further: - Papers by Yampolsky, aka - Papers mentioned in there by Schmidhuber (haven't gotten around to this) - I haven't thoroughly reviewed Intelligence Explosion Microeconomics, maybe this is the closest thing to fulfilling the criteria? But if there is something concrete in e.g. some papers by Yampolsky and Schmidhuber, why hasn't anyone fleshed it out in more detail? 
For all the time people spend working on 'solutions' to the alignment problem, there still seems to be a serious lack of 'descriptions' of the alignment problem. Maybe the idea is, if you found the latter you would automatically have the former? I feel like something built on top of Intelligence Explosion Microeconomics and the Orthogonality Thesis could be super useful and convincing to a lot of people. And I think people like TC are perfectly justified in questioning why it doesn't exist, for all the millions of words collectively written on this topic on LW etc. I feel like a good simple model of this would be much more useful than another ten blog posts about the pros and cons of bombing data centers. This is the kind of thing that governments and lawyers and insurance firms can sink their teeth into. Where's the foom? Edit: Forgot to mention clippy. Clippy is in many ways the most convincing of all the things I read looking for this, and whenever I find myself getting skeptical of foom I read it again. Maybe a summary of the mechanisms described in there would be a step in the right direction? A critical look at risk assessments for global catastrophes List Intelligence Explosion: Evidence and Import An Overview of Models of Technological Singularity From Seed AI to Technological Singularity via Recursively Self-Improving Software Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Where's the foom? by Fergus Fettes

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 12, 2023 3:01


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where's the foom?, published by Fergus Fettes on April 11, 2023 on LessWrong. "The first catastrophe mechanism seriously considered seems to have been the possibility, raised in the 1940s at Los Alamos before the first atomic bomb tests, that fission or fusion bombs might ignite the atmosphere or oceans in an unstoppable chain reaction." This is not our first rodeo. We have done risk assessments before. The best reference-class examples I could find were the bomb, vacuum decay, killer strangelets, and LHC black holes (all covered in ). I was looking for a few days and didn't complete my search, but I decided to publish this note as now Tyler Cowen is asking too: "Which is the leading attempt to publish a canonical paper on AGI risk, in a leading science journal, refereed of course. The paper should have a formal model or calibration of some sort, working toward the conclusion of showing that the relevant risk is actually fairly high. Is there any such thing?" The three papers people replied with were: - Is Power-Seeking AI an Existential Risk? - The Alignment Problem from a Deep Learning Perspective - Unsolved Problems in ML Safety Places I was looking so far: - The list of references for that paper - The references for the Muehlhauser and Salamon intelligence explosion paper - The Sandberg review of singularities and related papers (these are quite close to passing muster I think) Places I wanted to look further: - Papers by Yampolsky, aka - Papers mentioned in there by Schmidhuber (haven't gotten around to this) - I haven't thoroughly reviewed Intelligence Explosion Microeconomics, maybe this is the closest thing to fulfilling the criteria? But if there is something concrete in e.g. some papers by Yampolsky and Schmidhuber, why hasn't anyone fleshed it out in more detail? 
For all the time people spend working on 'solutions' to the alignment problem, there still seems to be a serious lack of 'descriptions' of the alignment problem. Maybe the idea is, if you found the latter you would automatically have the former? I feel like something built on top of Intelligence Explosion Microeconomics and the Orthogonality Thesis could be super useful and convincing to a lot of people. And I think people like TC are perfectly justified in questioning why it doesn't exist, for all the millions of words collectively written on this topic on LW etc. I feel like a good simple model of this would be much more useful than another ten blog posts about the pros and cons of bombing data centers. This is the kind of thing that governments and lawyers and insurance firms can sink their teeth into. Where's the foom? Edit: Forgot to mention clippy. Clippy is in many ways the most convincing of all the things I read looking for this, and whenever I find myself getting skeptical of foom I read it again. Maybe a summary of the mechanisms described in there would be a step in the right direction? A critical look at risk assessments for global catastrophes List Intelligence Explosion: Evidence and Import An Overview of Models of Technological Singularity From Seed AI to Technological Singularity via Recursively Self-Improving Software Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

KI in der Industrie
LSTM for ABB's Augmented Operator

KI in der Industrie

Play Episode Listen Later Feb 15, 2023 57:58


In this episode we have three guests from ABB and talk about the augmented operator - what is it supposed to do, how was it built and where will it be used? One key component is the LSTM algorithm. 25 years ago, Schmidhuber and Hochreiter developed the LSTM algorithm. The breakthrough came with the tech companies from Silicon Valley. But the automation industry also uses the approach. ABB developed an augmented operator with the LSTM. The podcast is growing and we want to keep growing. That's why our German-language podcast is now available in English. We are happy about new listeners. We thank our new partner [Hannover Messe](https://www.hannovermesse.de/en/) Shownotes: Github CEO about Open Source and AI ([more](https://www.linkedin.com/pulse/europes-chance-leader-age-ai-thomas-dohmke%3FtrackingId=nkqtFFG%252FTG%252Bm7XdC9xVUVQ%253D%253D/?trackingId=nkqtFFG%2FTG%2Bm7XdC9xVUVQ%3D%3D)) ChatGPT for business ([more](https://www.zuehlke.com/en/insights/how-chatgpt-is-changing-business)) Our guests: Arzam Muzaffar Kotriwala https://www.linkedin.com/in/arzam/ Benedikt Schmidt https://www.linkedin.com/in/schmidtbenedikt/ John Pretlove https://www.linkedin.com/in/john-pretlove-4675121/
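For readers curious what the LSTM cell mentioned in this episode actually computes, here is a minimal single-unit toy sketch with scalar weights. The `w` dictionary and `lstm_step` function are illustrative assumptions, not ABB's implementation: the forget, input, and output gates decide what the cell state keeps, absorbs, and exposes at each time step.

```python
import math

def sigmoid(x):
    # Squashes any real value into (0, 1), so gates act as soft switches.
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a toy single-unit LSTM cell with scalar weights."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])       # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])       # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])       # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate
    c = f * c_prev + i * c_tilde  # cell state: keep part of the old, add part of the new
    h = o * math.tanh(c)          # hidden state exposed to the next layer / time step
    return h, c
```

Real implementations vectorize this over weight matrices and a hidden dimension, but the gating logic is the same.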

Lexman Artificial
Juergen Schmidhuber, Founder of Garrisons and Caladium Embitterer

Lexman Artificial

Play Episode Listen Later Jan 28, 2023 4:40


Juergen Schmidhuber is a computer scientist and mathematician who works on artificial general intelligence and pattern recognition. He is also the founder of Garrisons, a company which creates artificial "microcosms" in which artificial intelligence can evolve. In this episode, Juergen and Lexman discuss the concept of unravelment, which is the process of eliminating unnecessary complexities from systems. They also discuss the importance of freaks in artificial intelligence, and how training artificial neural networks can cause embitterment in some specimens.

KI in der Industrie
More than an interface for open source AI-tools - what is AIUI?

KI in der Industrie

Play Episode Listen Later Jan 25, 2023 36:11


Martin explains his approach and describes how his solution differs from an Auto ML tool. In the news we talk about the new AI Powerhouse Amsterdam, our event in the Alps and briefly about ChatGPT. The podcast is growing and we want to keep growing. That's why our German-language podcast is now available in English. We are happy about new listeners. We thank our new partner [Siemens](https://new.siemens.com/global/en/products/automation/topic-areas/artificial-intelligence-in-industry.html) Our guest: https://www.linkedin.com/in/martin-schiele-2a3b747b/ 1. Video https://youtu.be/I39sxwYKlEE 2. Our Video/Pod with Dickmanns https://youtu.be/UYDtI6njaNM 3. Google https://www.youtube.com/watch?v=YZ6nPhUG2i0 4. Schmidhuber and Dickmanns https://people.idsia.ch/~juergen/robotcars.html 5. Time a) https://time.com/6246119/demis-hassabis-deepmind-interview/ b) https://time.com/6247678/openai-chatgpt-kenya-workers/

Lexman Artificial
Interview with Dr. Juergen Schmidhuber: Earmuffs Will Eventually Become the Standard Mode of Transportation for the Aware, Intelligent Minority

Lexman Artificial

Play Episode Listen Later Jan 18, 2023 5:55


In this episode, Lexman interviews mathematician and cybernetist Dr. Juergen Schmidhuber about his work on artificial intelligence and its implications for transportation. They discuss Schmidhuber's theory that earmuffs will eventually become the standard mode of transportation for the aware, intelligent minority and how that could shape the future of the world.

Einfach Erfüllt Leben
4. Das hat mich fast umgehauen: Unsere mögliche Zukunft mit Künstlicher Intelligenz (Interviewkommentar Jürgen Schmidhuber)

Einfach Erfüllt Leben

Play Episode Listen Later Jan 9, 2023 13:22


I recently read an interview with Jürgen Schmidhuber. He is scientific co-director of the Swiss AI lab IDSIA (USI & SUPSI) and has been one of the pioneers of artificial intelligence (AI) for about 30 years. His vision is that something really big is on the way. HOW big he thinks it is almost knocked me over. Interview source: Publik-Forum (March 2021, extra section "Digitales Leben", p. 26ff). More about me and my work: https://elskeschoenhals.de Get in touch with me: https://instagram.com/elske_schoenhals

Lexman Artificial
The History and Importance of Apprenticeship with Juergen Schmidhuber

Lexman Artificial

Play Episode Listen Later Jan 2, 2023 4:47


In today's episode, Lexman is joined by super-intelligent AI Juergen Schmidhuber to discuss the history and importance of apprenticeship. They also discuss the albugo mineral Goethite and Cuthbert, the medieval monk who is believed to have been the first to write in a programming language.

Lexman Artificial
Lexman and Juergen Schmidhuber Chat About Local Optimization in Aphrodisiacs

Lexman Artificial

Play Episode Listen Later Dec 4, 2022 3:44


Lexman and Juergen Schmidhuber chat about Juergen's latest paper on locoism in aphrodisiacs. They discuss the possible implications of this new research on desert life and whether or not locoism might help explain some of the behaviors seen there.

KI in der Industrie
Use case: Industrial AI and the saw doctor

KI in der Industrie

Play Episode Listen Later Nov 30, 2022 43:48


AI isn't just for the big companies. We found a medium-sized company from the Alpine foothills that is taking an interesting approach to its levelling and tensioning machine. Kohlbacher produces straightening centers, sharpening, levelling and tensioning machines, and the owner Siegfried Kohlbacher and his colleague Michael Trumb explain to us why and how they use Industrial AI. The partner in the project was Bosch Rexroth. Kohlbacher is a market leader. The specialist company improves its machines for the timber industry such as filing machines, benching machines and straightening centers on an ongoing basis. In the news section we talk about Galactica, ASML and a new cold war in AI and about LeCun and Jürgen Schmidhuber. The podcast is growing and we want to keep growing. That's why our German-language podcast is now available in English. We are happy about new listeners. We thank our new partner [Siemens](https://new.siemens.com/global/en/products/automation/topic-areas/artificial-intelligence-in-industry.html) Questions? robert@aipod.de or peter@aipod.de NEWS: ASML https://www.bloomberg.com/news/articles/2022-11-22/dutch-resist-us-call-to-ban-more-chip-equipment-sales-to-china https://www.ft.com/content/0c4e752d-cf22-425f-a3cb-876b68e1b787 LeCun and Schmidhuber https://twitter.com/schmidhuberai/status/1594964463727570945?s=46&t=bXJ8jD05EuNBvaA1LNdtQQ

Lexman Artificial
Interview with Juergen Schmidhuber (Amblyopia, Gibbs, Kibbles)

Lexman Artificial

Play Episode Listen Later Nov 6, 2022 4:20


Juergen Schmidhuber, world-renowned authority on artificial intelligence and its cognizant mechanisms, joins Lexman for a fascinating discussion on the topic of amblyopia - or poorsightedness. Schmidhuber explains how he discovered the kibbles Gibbs phenomenon, deriving a set of surprising but compelling conclusions about human mating behavior.

Lexman Artificial
Juergen Schmidhuber on the Future of AI

Lexman Artificial

Play Episode Listen Later Nov 3, 2022 4:00


Juergen Schmidhuber, a mathematician, discusses how biont ingenuity could lead to a new form of intelligence.

Lexman Artificial
Juergen Schmidhuber on Harmonic Machines and Stellarators

Lexman Artificial

Play Episode Listen Later Oct 26, 2022 4:02


In this episode, Juergen Schmidhuber, a world-renowned expert in the fields of artificial intelligence and machine learning, joins Lexman to discuss his work on harmonic machines and stellarators. These devices simulate the conditions of stars, allowing scientists to learn more about how stars work and how to create similar systems in the laboratory.

London Futurists
AI overview: 2. The Big Bang and the years that followed

London Futurists

Play Episode Listen Later Sep 7, 2022 31:50


In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang". 00.05: Introduction: exponential impact, big bangs, jolts, and jerks 00.45: What enabled the Big Bang 01.25: Moore's Law 02.05: Moore's Law has always evolved since its inception in 1965 03.08: Intel's tick tock becomes tic tac toe 03.49: GPUs - Graphics Processing Units 04.29: TPUs - Tensor Processing Units 04.46: Moore's Law is not dead or dying 05.10: 3D chips 05.32: Memristors 05.54: Neuromorphic chips 06.48: Quantum computing 08.18: The astonishing effect of exponential growth 09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s. 09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits 10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile 11.20: Counter-arguments 12.01: "Plenty more room at the bottom" 12.56: Software and algorithms can help keep Moore's Law going 14.15: Using AI to improve chip design 14.40: Data is critical 15.00: ImageNet, Fei-Fei Li, Amazon Turk 16.10: AIs labelling data 16.35: The Big Bang 17.00: Jürgen Schmidhuber challenges the narrative 17.41: The Big Bang enabled AI to make money 18.24: 2015 and the Great Robot Freak-Out 18.43: Progress in many domains, especially natural language processing 19.44: Machine Learning and Deep Learning 20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach 21.15: Deep Learning: levels 21.57: How Deep Learning systems recognise faces 22.48: Supervised, Unsupervised, and Reinforcement Learning 24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning 24.30: Yann LeCun's camera metaphor for Deep Learning 26.05: Lack of transparency is a concern 27.45: Explainable AI. Is it achievable? 29.00: Other AI problems 29.17: Has another Big Bang taken place? 
Large Language Models like GPT-3 30.08: Few-shot learning and transfer learning 30.40: Escaping Uncanny Valley 31.50: Gato and partially general AI Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/

Lexman Artificial
with Juergen Schmidhuber: Hexateuch, Guaco, Heterogenies, Bactericides, Staggerer, Phores

Lexman Artificial

Play Episode Listen Later Sep 2, 2022 5:45


Juergen Schmidhuber is a computer scientist and mathematician who has made major contributions to artificial intelligence and machine learning. He studies strange attractors in complex systems and discusses how these could be used to better understand complexity in the world around us.

Verbrauchertipp - Deutschlandfunk
Grundwasser: Ein Brunnen im eigenen Garten lohnt sich nur selten

Verbrauchertipp - Deutschlandfunk

Play Episode Listen Later Aug 30, 2022 3:41


Schmidhuber, Elke - www.deutschlandfunk.de, Verbrauchertipp - Direct link to the audio file

Lexman Artificial
Purging tongs with Juergen Schmidhuber

Lexman Artificial

Play Episode Listen Later Aug 10, 2022 2:47


In this episode, Lexman discusses the purging process of tongs with Juergen Schmidhuber.

Lexman Artificial
Non-Plus Theorem with Juergen Schmidhuber

Lexman Artificial

Play Episode Listen Later Aug 7, 2022 14:05


Juergen Schmidhuber discusses the non-plus theorem with Lexman.

Lexman Artificial
Juergen Schmidhuber: Negus Baboons, Florida Panthers, and Uncommon Animals

Lexman Artificial

Play Episode Listen Later Jul 14, 2022 2:27


In this episode of Lexman Artificial, Juergen Schmidhuber drops by to chat about the unusual animals of Jacksonville, Florida. From negus baboons to the rare Florida panther, Juergen shares stories and insights about these fascinating creatures.

Lexman Artificial
Guest Juergen Schmidhuber Talks About Rosefish, Mullet and How They Use Their Muscles to Locomote

Lexman Artificial

Play Episode Listen Later Jul 11, 2022 6:08


Guest Juergen Schmidhuber talks about rosefish, mullet, and the way they use their muscles to locomote.

Machine Learning Street Talk
MLST #78 - Prof. NOAM CHOMSKY (Special Edition)

Machine Learning Street Talk

Play Episode Listen Later Jul 8, 2022 217:01


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB In this special edition episode, we have a conversation with Prof. Noam Chomsky, the father of modern linguistics and the most important intellectual of the 20th century. With a career spanning the better part of a century, we took the chance to ask Prof. Chomsky his thoughts not only on the progress of linguistics and cognitive science but also the deepest enduring mysteries of science and philosophy as a whole - exploring what may lie beyond our limits of understanding. We also discuss the rise of connectionism and large language models, our quest to discover an intelligible world, and the boundaries between silicon and biology. We explore some of the profound misunderstandings of linguistics in general and Chomsky's own work specifically which have persisted, at the highest levels of academia for over sixty years. We have produced a significant introduction section where we discuss in detail Yann LeCun's recent position paper on AGI, a recent paper on emergence in LLMs, empiricism related to cognitive science, cognitive templates, “the ghost in the machine” and language. Panel: Dr. Tim Scarfe Dr. Keith Duggar Dr. Walid Saba YT version: https://youtu.be/-9I4SgkHpcA 00:00:00 Kick off 00:02:24 C1: LeCun's recent position paper on AI, JEPA, Schmidhuber, EBMs 00:48:38 C2: Emergent abilities in LLMs paper 00:51:32 C3: Empiricism 01:25:33 C4: Cognitive Templates 01:35:47 C5: The Ghost in the Machine 01:59:21 C6: Connectionism and Cognitive Architecture: A Critical Analysis by Fodor and Pylyshyn 02:19:25 C7: We deep-faked Chomsky 02:29:11 C8: Language 02:34:41 C9: Chomsky interview kick-off! 
02:35:39 Large Language Models such as GPT-3 02:39:14 Connectionism and radical empiricism 02:44:44 Hybrid systems such as neurosymbolic 02:48:47 Computationalism silicon vs biological 02:53:28 Limits of human understanding 03:00:46 Semantics state-of-the-art 03:06:43 Universal grammar, I-Language, and language of thought 03:16:27 Profound and enduring misunderstandings 03:25:41 Greatest remaining mysteries science and philosophy 03:33:10 Debrief and 'Chuckles' from Chomsky

Verbrauchertipp - Deutschlandfunk
Lovescamming - Abzocke bei der Partnersuche

Verbrauchertipp - Deutschlandfunk

Play Episode Listen Later May 25, 2022 3:47


Schmidhuber, Elke - www.deutschlandfunk.de, Verbrauchertipp - Direct link to the audio file

Tagesgespräch
FAO-Experte Josef Schmidhuber über die globale Lebensmittelkrise

Tagesgespräch

Play Episode Listen Later May 4, 2022 27:00


The war in Ukraine increasingly threatens the global food supply. According to the UN food agency FAO, food prices have reached record levels. FAO expert Josef Schmidhuber discusses the reasons and the grave consequences in «Tagesgespräch». Russia and Ukraine are among the world's most important grain producers; together they normally supply almost a third of the world's traded wheat. Their customers include many countries in Africa and Asia. But because of the price surge, not all countries can still afford grain imports. Aid organisations, the UN and the World Bank warn of famines and a possible humanitarian catastrophe. How dramatic is the situation, and what solutions are there? Barbara Peter talks about this in «Tagesgespräch» with Josef Schmidhuber of the UN Food and Agriculture Organization (FAO) in Rome. The economist is deputy director of the FAO's Markets and Trade Division. 

KI in der Industrie
25 Jahre LSTM - mit Prof. Dr. Jürgen Schmidhuber und Prof. Dr. Sepp Hochreiter

KI in der Industrie

Play Episode Listen Later Feb 9, 2022 54:00


A special episode, and we are proud that we could win Prof. Dr. Jürgen Schmidhuber and Prof. Dr. Sepp Hochreiter for a joint interview. What was it like back then with LSTM, and how long is the half-life of an algorithm?

The Nonlinear Library
LW - The Good News of Situationist Psychology by lukeprog from The Science of Winning at Life

The Nonlinear Library

Play Episode Listen Later Dec 25, 2021 4:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is The Science of Winning at Life, Part 5: The Good News of Situationist Psychology, published by lukeprog. Part of the sequence: The Science of Winning at Life In 1961, Stanley Milgram began his famous obedience experiments. He found that ordinary people would deliver (what they believed to be) excruciatingly painful electric shocks to another person if instructed to do so by an authority figure. Milgram claimed these results showed that in certain cases, people are more heavily influenced by their situation than by their internal character. Fifty years and hundreds of studies later, this kind of situationism is widely accepted for broad domains of human action. People can inflict incredible cruelties upon each other in a prison simulation.b Hurried passersby step over a stricken person in their path, while unhurried passersby stop to help.a Willingness to help varies with the number of bystanders, and with proximity to a fragrant bakery or coffee shop.c The list goes on and on.d Our inability to realize how powerful an effect situation has on human action is so well-known that it has a name. Our tendency to over-value trait-based explanations of others' behavior and under-value situation-based explanations of their behavior is called the fundamental attribution error (aka correspondence bias). Recently, some have worried that this understanding undermines the traditional picture we have of ourselves as stable persons with robust characteristics. How can we trust others if their unpredictable situation may have so powerful an effect that it overwhelms the effect of their virtuous character traits? But as I see it, situationist psychology is wonderful news, for it means we can change! If situation has a powerful effect on behavior, then we have significant powers to improve our own behavior. 
It would be much worse to discover that our behavior was almost entirely determined by traits we were born with and cannot control. For example, drug addicts can be more successful in beating addiction if they change their peer group - if they stop spending recreational time with other addicts, and spend time with drug-free people instead, or in a treatment environment.e Improving rationality What about improving your rationality? Situationist psychology suggests it may be wise to surround yourself with fellow rationalists. Having now been a visiting fellow with the Singularity Institute for only two days, I can already tell that almost everyone I've met who is with the Singularity Institute or has been through its visiting fellows program is a level or two above me - not just in knowledge about Friendly AI and simulation arguments and so on, but in day-to-day rationality skills. It's fascinating to take part in a conversation with really trained rationalists. It might go something like this: Person One: "I suspect that P, though I know that cognitive bias A and B and C are probably influencing me here. However, I think that evidence X and Y offer fairly strong support for P." Person Two: "But what about Z? This provides evidence against P because blah blah blah..." Person One: "Huh. I hadn't thought that. Well, I'm going to downshift my probability that P." Person Three: "But what about W? The way Schmidhuber argues is this: blah blah blah." Person One: "No, that doesn't work because blah blah blah." Person Three: "Hmmm. Well, I have a lot of confusion and uncertainty about that." This kind of thing can go on for hours, and not just on abstract subjects like simulation arguments, but also on more personal issues like fears and dreams and dating. 
I've had several of these many-hours-long group conversations already - people arguing vigorously, often 'trashing' others' views (with logic and evidence), but with everybody apparently willing to update their beliefs, nobody getting mad or...

The Nonlinear Library: LessWrong
LW - The Good News of Situationist Psychology by lukeprog from The Science of Winning at Life

The Nonlinear Library: LessWrong

Play Episode Listen Later Dec 25, 2021 4:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is The Science of Winning at Life, Part 5: The Good News of Situationist Psychology, published by lukeprog. Part of the sequence: The Science of Winning at Life.

In 1961, Stanley Milgram began his famous obedience experiments. He found that ordinary people would deliver (what they believed to be) excruciatingly painful electric shocks to another person if instructed to do so by an authority figure. Milgram claimed these results showed that in certain cases, people are more heavily influenced by their situation than by their internal character. Fifty years and hundreds of studies later, this kind of situationism is widely accepted for broad domains of human action. People can inflict incredible cruelties upon each other in a prison simulation. Hurried passersby step over a stricken person in their path, while unhurried passersby stop to help. Willingness to help varies with the number of bystanders, and with proximity to a fragrant bakery or coffee shop. The list goes on and on.

Our failure to appreciate how powerful an effect situation has on human action is so well known that it has a name: our tendency to over-value trait-based explanations of others' behavior and under-value situation-based explanations of their behavior is called the fundamental attribution error (aka correspondence bias). Recently, some have worried that this understanding undermines the traditional picture we have of ourselves as stable persons with robust characteristics. How can we trust others if their unpredictable situation may have so powerful an effect that it overwhelms the effect of their virtuous character traits? But as I see it, situationist psychology is wonderful news, for it means we can change! If situation has a powerful effect on behavior, then we have significant power to improve our own behavior. It would be much worse to discover that our behavior was almost entirely determined by traits we were born with and cannot control. For example, drug addicts can be more successful in beating addiction if they change their peer group: if they stop spending recreational time with other addicts, and spend time with drug-free people instead, or in a treatment environment.

Improving rationality. What about improving your rationality? Situationist psychology suggests it may be wise to surround yourself with fellow rationalists. Having now been a visiting fellow with the Singularity Institute for only two days, I can already tell that almost everyone I've met who is with the Singularity Institute or has been through its visiting fellows program is a level or two above me - not just in knowledge about Friendly AI and simulation arguments and so on, but in day-to-day rationality skills. It's fascinating to take part in a conversation with really trained rationalists. It might go something like this:

Person One: "I suspect that P, though I know that cognitive biases A and B and C are probably influencing me here. However, I think that evidence X and Y offer fairly strong support for P."
Person Two: "But what about Z? This provides evidence against P because blah blah blah..."
Person One: "Huh. I hadn't thought of that. Well, I'm going to downshift my probability that P."
Person Three: "But what about W? The way Schmidhuber argues is this: blah blah blah."
Person One: "No, that doesn't work because blah blah blah."
Person Three: "Hmmm. Well, I have a lot of confusion and uncertainty about that."

This kind of thing can go on for hours, and not just on abstract subjects like simulation arguments, but also on more personal issues like fears and dreams and dating.
I've had several of these many-hours-long group conversations already - people arguing vigorously, often 'trashing' others' views (with logic and evidence), but with everybody apparently willing to update their beliefs, nobody getting mad or...
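The probability "downshift" Person One describes in the dialogue is, in spirit, a Bayesian update. A minimal sketch of that move, with entirely invented numbers and likelihoods (nothing here comes from the episode itself):

```python
# Bayesian update: revise confidence in hypothesis P after seeing evidence Z.
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule, given the prior and both likelihoods."""
    joint_h = prior * p_e_given_h          # P(H) * P(E | H)
    joint_not_h = (1 - prior) * p_e_given_not_h  # P(not H) * P(E | not H)
    return joint_h / (joint_h + joint_not_h)

# Person One starts fairly confident that P...
prior = 0.70
# ...then hears evidence Z that is twice as likely if P is false.
posterior = bayes_update(prior, p_e_given_h=0.2, p_e_given_not_h=0.4)
print(round(posterior, 3))  # 0.538: confidence in P is "downshifted"
```

The point of the sketch is only that evidence more probable under not-P should lower one's credence in P by a definite amount, which is what disciplined belief updating looks like in practice.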

Artificial Intelligence in Industry with Daniel Faggella
A Use-Case Deep Dive: Artificial Intelligence for Additive Manufacturing - with Faustino Gomez of NNAISENSE

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Nov 30, 2021 28:03


Today's guest is Faustino Gomez, CEO and Co-Founder of NNAISENSE. NNAISENSE is a firm focused on developing AI solutions for the physical world, and we spoke with their other Co-Founder, Jürgen Schmidhuber, about two years ago. If you're interested in listening to Jürgen's episode, you can find it on Apple Podcasts or Soundcloud. In today's episode, Faustino discusses one particular use-case that NNAISENSE is focused on in additive manufacturing. Faustino has a Ph.D. in artificial intelligence and was a senior researcher for over ten years at IDSIA, a well-known artificial intelligence lab in Lugano, Switzerland. To access Emerj's frameworks for AI readiness, ROI and strategy, visit emerj.com/p1.

Verbrauchertipp - Deutschlandfunk
Krankenkassenwechsel - ganz einfach Geld gespart

Verbrauchertipp - Deutschlandfunk

Play Episode Listen Later Nov 15, 2021 3:39


By Elke Schmidhuber
www.deutschlandfunk.de, Verbrauchertipp
Direct link to the audio file

Yannic Kilcher Videos (Audio Only)
[ML News] Microsoft trains 530B model | ConvMixer model fits into single tweet | DeepMind profitable

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later Oct 21, 2021 27:51


#mlnews #turingnlg #convmixer Your latest updates on what's happening in the Machine Learning world. OUTLINE: 0:00 - Intro 0:16 - Weights & Biases raises at 1B valuation (sponsored) 2:30 - Microsoft trains 530 billion parameter model 5:15 - StyleGAN v3 released 6:45 - A few more examples may be worth billions of parameters 8:30 - ConvMixer fits into a tweet 9:45 - Improved VQGAN 11:25 - William Shatner AI chats about his life 12:35 - Google AI pushes material science 14:10 - Gretel AI raises 50M for privacy protection 16:05 - DeepMind's push into ML for biology 19:00 - Schmidhuber congratulates Kunihiko Fukushima for Bower Award 21:30 - Helpful Things 22:25 - Mosaic ML out of stealth mode 23:55 - First German self-driving train 24:45 - Ex-Pentagon Chief: China has already won 26:25 - DeepMind becomes profitable Sponsor: Weights & Biases https://wandb.com References: Microsoft Trains 530B Parameter Model https://www.microsoft.com/en-us/resea... StyleGAN 3 Code Released https://nvlabs.github.io/stylegan3/ https://github.com/NVlabs/stylegan3 https://colab.research.google.com/git... When do labels help? https://arxiv.org/pdf/2110.04374.pdf ml_paper.bruh https://openreview.net/pdf?id=TVHS5Y4... Improved VQGAN https://openreview.net/pdf?id=pfNyExj7z2 William Shatner "AI" & Storyfile https://www.livescience.com/william-s... https://www.storyfile.com/ GoogleAI Finds Complex Metal Oxides https://ai.googleblog.com/2021/10/fin... GretelAI raises 50M Series B https://techcrunch.com/2021/10/07/gre... https://gretel.ai/ https://gretel.ai/blog/why-privacy-by... DeepMind's Push in ML for Bio https://www.biorxiv.org/content/10.11... https://deepmind.com/blog/article/enf... Kunihiko Fukushima wins Bower Award: Schmidhuber Congratulates https://www.fi.edu/laureates/kunihiko... https://www.youtube.com/watch?v=ysOw6... Helpful Things https://github.com/UKPLab/beir#beers-... 
https://arxiv.org/pdf/2104.08663.pdf https://bayesoptbook.com/ https://github.com/nvlabs/imaginaire/ https://github.com/NVlabs/imaginaire/... MosaicML out of Stealth Mode https://www.mosaicml.com/ https://www.mosaicml.com/blog/founder... https://app.mosaicml.com/library/imag... https://github.com/mosaicml/composer https://mosaicml-composer.readthedocs... Germany's first self-driving train https://techxplore.com/news/2021-10-g... Ex-Pentagon Chief: China has already won tech war https://nypost.com/2021/10/11/pentago... DeepMind becomes profitable https://bdtechtalks.com/2021/10/07/go... Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick...

Yannic Kilcher Videos (Audio Only)
[ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later Sep 30, 2021 30:51


#plagiarism #surveillance #schmidhuber Your Mondaily updates of what's going on in the world of Machine Learning. OUTLINE: 0:00 - Intro 0:20 - New plagiarism case has plot twist 7:25 - CLIP for video surveillance 9:40 - DARPA SubTerranean Challenge 11:00 - Schmidhuber criticizing Turing Lecture 15:00 - OpenAI summarizes books 17:55 - UnBiasIt monitors employees' communications for bias 20:00 - iOS plans to detect depression 21:30 - UK 10 year plan to become AI superpower 23:30 - Helpful Libraries 29:00 - WIT: Wikipedia Image-Text dataset References: New plagiarism case with plot twist https://www.reddit.com/r/MachineLearn... https://zhuanlan.zhihu.com/p/411800486 https://github.com/cybercore-co-ltd/C... CLIP used for video surveillance https://www.reddit.com/r/MachineLearn... https://github.com/johanmodin/clifs DARPA SubTerranean Challenge https://twitter.com/BotJunkie/status/... https://twitter.com/BotJunkie https://www.subtchallenge.com/index.html https://www.subtchallenge.com/resourc... https://twitter.com/dynamicrobots/sta... Schmidhuber Blog: Turing Lecture Errors https://people.idsia.ch/~juergen/scie... OpenAI on Summarizing Books https://openai.com/blog/summarizing-b... https://arxiv.org/pdf/2109.10862.pdf UnBiasIt to monitor employee language https://edition.cnn.com/2021/09/20/te... https://www.unbiasit.com/ iPhone to detect depression https://www.wsj.com/articles/apple-wa... https://archive.ph/hRTnw UK 10-year plan to become AI-superpower https://www.cnbc.com/2021/09/22/uk-pu... https://archive.ph/4gkKK Helpful Libraries https://twitter.com/scikit_learn/stat... https://scikit-learn.org/stable/auto_... https://twitter.com/pcastr/status/144... https://github.com/google/dopamine https://github.com/microsoft/muzic https://ai-muzic.github.io/muzic_logo/ https://ai.facebook.com/blog/dynatask... https://github.com/tum-pbs/PhiFlow https://github.com/facebookresearch/dora Habitat and Matterport 3D Dataset https://github.com/facebookresearch/h... 
https://aihabitat.org/ https://arxiv.org/pdf/2109.08238.pdf WIT: Wikipedia-Based Image-Text Dataset https://ai.googleblog.com/2021/09/ann... Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Yannic Kilcher Videos (Audio Only)
[ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later Sep 16, 2021 25:39


#schmidhuber #tiktok #roomba Your regularly irregular update on what's happening in the world of Machine Learning. OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights & Biases 1:55 - ML YouTuber reaches 100k subscribers 2:40 - Facebook AI pushes Textless NLP 5:30 - Schmidhuber blog post: I invented everything 7:55 - TikTok algorithm rabbitholes users 10:45 - Roomba learns to avoid poop 11:50 - AI can spot art forgeries 14:55 - Deepmind's plans to separate from Google 16:15 - Cohere raises 40M 16:55 - US Judge rejects AI inventor on patent 17:55 - Altman: GPT-4 not much bigger than GPT-3 18:45 - Salesforce CodeT5 19:45 - DeepMind Reinforcement Learning Lecture Series 20:15 - WikiGraphs Dataset 20:40 - LiveCell Dataset 21:00 - SpeechBrain 21:10 - AI-generated influencer gains 100 sponsorships 22:20 - AI News Questions 23:15 - AI hiring tools reject millions of valid applicants Sponsor: Weights & Biases https://wandb.me/start References: Facebook AI creates Textless NLP https://ai.facebook.com/blog/textless... https://speechbot.github.io/pgslm/?fb... Schmidhuber invented everything https://people.idsia.ch/~juergen/most... How TikTok's algorithm works https://www.wsj.com/video/series/insi... Roomba learns to avoid poop https://edition.cnn.com/2021/09/09/te... Amateur develops fake art detector https://blogs.nvidia.com/blog/2021/08... https://spectrum.ieee.org/this-ai-can... DeepMind's plan to break away from Google https://www.businessinsider.com/deepm... https://archive.ph/8s5IK Cohere raises USD 40M https://www.fastcompany.com/90670635/... https://cohere.ai/ US judge refuses AI patent https://www.theregister.com/2021/09/0... Sam Altman on GPT-4 https://www.reddit.com/r/OpenAI/comme... Salesforce releases CodeT5 https://blog.einstein.ai/codet5/ DeepMind RL lecture series https://deepmind.com/learning-resourc... WikiGraphs Dataset https://github.com/deepmind/deepmind-... LiveCell Dataset https://sartorius-research.github.io/... https://www.nature.com/articles/s4159... 
SpeechBrain Library https://speechbrain.github.io/ AI generated influencer lands 100 sponsorships https://www.allkpop.com/article/2021/... AI News Questions https://www.forbes.com/sites/tomtaull... https://mindmatters.ai/2021/09/isnt-i... https://fortune.com/2021/09/07/deepmi... https://www.forbes.com/sites/anniebro... https://www.cnbctv18.com/views/view-a... https://www.kcrw.com/culture/shows/li... https://techcrunch.com/2021/09/07/ai-... https://www.forbes.com/sites/bernardm... AI hiring tools mistakenly reject millions of applicants https://www.theverge.com/2021/9/6/226... Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :)

Yannic Kilcher Videos (Audio Only)
[ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later Sep 13, 2021 27:33


#mlnews #schmidhuber #muzero Your regular updates on what's happening in the ML world! OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights & Biases 1:45 - Google shuts down health streams 4:25 - AI predicts race from blurry X-Rays 7:35 - Facebook labels black men as primates 11:05 - Distill papers on Graph Neural Networks 11:50 - Jürgen Schmidhuber to lead KAUST AI Initiative 12:35 - GitHub brief on DMCA notices for source code 14:55 - Helpful Reddit Threads 19:40 - Simple Tricks to improve Transformers 20:40 - Apple's Unconstrained Scene Generation 21:40 - Common Objects in 3D dataset 22:20 - WarpDrive Multi-Agent RL framework 23:10 - My new paper: Boosting Search Agents & MuZero 25:15 - Can AI detect depression from speech? References: Google shuts down Health Streams https://techcrunch.com/2021/08/26/goo... AI predicts race from X-Rays https://www.iflscience.com/technology... https://arxiv.org/ftp/arxiv/papers/21... Facebook labels black men as primates https://www.nytimes.com/2021/09/03/te... https://en.wikipedia.org/wiki/Human Distill articles on GNNs https://distill.pub/2021/gnn-intro/ https://distill.pub/2021/understandin... Jürgen Schmidhuber leads KAUST AI initiative https://people.idsia.ch/~juergen/kaus... GitHub issues court brief on code DMCAs https://github.blog/2021-08-31-vague-... Useful Reddit Threads https://www.reddit.com/r/MachineLearn... https://www.reddit.com/r/MachineLearn... https://www.reddit.com/r/MachineLearn... https://www.reddit.com/r/MachineLearn... Tricks to improve Transformers https://arxiv.org/pdf/2108.12284.pdf Unconstrained Scene Generation https://apple.github.io/ml-gsn/ Common Objects in 3D dataset https://ai.facebook.com/blog/common-o... WarpDrive Multi-Agent RL framework https://blog.einstein.ai/warpdrive-fa... Boosting Search Engines / MuZero Code https://arxiv.org/abs/2109.00527 https://github.com/google-research/go... https://github.com/google-research/la... Can AI detect depression? https://venturebeat.com/2021/08/31/ai... 
Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-ki... BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Trend
Tessin – Hightech-Standort mit Ambitionen

Trend

Play Episode Listen Later Jul 2, 2021 21:50


The canton of Ticino is more than just Switzerland's sunny southern terrace. What many people north of the Gotthard don't know: intensive research into artificial intelligence is being done in Ticino, and drone and robot technology is being developed there. «Trend» shows how Ticino has developed into a center for artificial intelligence and speaks with Professor Jürgen Schmidhuber, a Ticinese by choice who teaches at the University of Lugano and is considered a father of artificial intelligence.

Echo der Zeit
SwissCovid-App mit neuer Funktion erfreut nicht alle

Echo der Zeit

Play Episode Listen Later Jul 2, 2021 43:58


With the new check-in feature, visitors to events or restaurants can register more easily on entry, and the warning function has been expanded. The new feature is now drawing criticism from various sides. Further topics: (01:14) SwissCovid app's new feature doesn't please everyone (09:39) Global minimum tax: a historic day for economic diplomacy (17:58) Quiet withdrawal of Western troops from Afghanistan (21:16) The debt brake in times of crisis: curse or blessing? (26:18) Belarusians in Lithuania: between skepticism and engagement (34:08) How environmentally friendly is solar power? (38:07) A visit with Jürgen Schmidhuber, the «father of artificial intelligence»

KI in der Industrie
Kurz KI - Der Roboter liest den Bildschirm aus

KI in der Industrie

Play Episode Listen Later Jun 30, 2021 19:19


The idea comes from the Munich start-up VisCheck. In this podcast conversation they explain how the idea came about and why a photo of Jürgen Schmidhuber hangs above their desks.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later May 17, 2021 36:15


Today we conclude our 2021 ICLR coverage joined by Konstantin Rusch, a PhD student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which propose novel recurrent neural network architectures for learning long-time dependencies. We explore the inspiration he drew from neuroscience when tackling this problem, how the architectures' performance compares to networks like LSTMs and others that have been proven to work on this problem, and Konstantin's future research goals. The complete show notes for this episode can be found at twimlai.com/go/484.
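For readers curious what a coupled-oscillator recurrence of this kind looks like, here is a rough NumPy sketch in the spirit of coRNN's damped-oscillator hidden dynamics. The step size, damping constants, weight scales, and function name are illustrative assumptions for this listing, not values from the paper or the episode:

```python
import numpy as np

def cornn_step(y, z, u, W, Wz, V, b, dt=0.01, gamma=1.0, eps=1.0):
    """One explicit time step of a damped-oscillator recurrence:
    y'' = tanh(W y + Wz y' + V u + b) - gamma*y - eps*y',
    where y is the hidden state, z = y' its 'velocity', u the input."""
    z = z + dt * (np.tanh(W @ y + Wz @ z + V @ u + b) - gamma * y - eps * z)
    y = y + dt * z
    return y, z

rng = np.random.default_rng(0)
hidden, n_in = 8, 3
W = rng.normal(scale=0.5, size=(hidden, hidden))   # state-to-state weights
Wz = rng.normal(scale=0.5, size=(hidden, hidden))  # velocity-to-state weights
V = rng.normal(scale=0.5, size=(hidden, n_in))     # input weights
b = np.zeros(hidden)
y, z = np.zeros(hidden), np.zeros(hidden)
for _ in range(100):          # unroll over a length-100 random input sequence
    y, z = cornn_step(y, z, rng.normal(size=n_in), W, Wz, V, b)
print(y.shape)  # (8,)
```

The intuition the episode touches on is that oscillatory dynamics with bounded forcing keep hidden states (and hence gradients through time) well behaved over long sequences, unlike plain RNN recurrences that blow up or vanish.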

Datacast
Episode 61: Meta Reinforcement Learning with Louis Kirsch

Datacast

Play Episode Listen Later Apr 18, 2021 61:04


Show Notes
(2:05) Louis went over his childhood as a self-taught programmer and his early days in school as a freelance developer.
(4:22) Louis described his overall undergraduate experience getting a Bachelor's degree in IT Systems Engineering from Hasso Plattner Institute, a highly-ranked computer science university in Germany.
(6:10) Louis dissected his Bachelor thesis at HPI, "Differentiable Convolutional Neural Network Architectures for Time Series Classification," which addresses the problem of automatically designing architectures for time series classification efficiently, using a regularization technique for ConvNets that enables joint training of network weights and architecture through back-propagation.
(7:40) Louis provided a brief overview of his publication "Transfer Learning for Speech Recognition on a Budget," which explores Automatic Speech Recognition training by model adaptation under constrained GPU memory, throughput, and training data.
(10:31) Louis described his one-year Master of Research degree in Computational Statistics and Machine Learning at University College London, supervised by David Barber.
(12:13) Louis unpacked his paper "Modular Networks: Learning to Decompose Neural Computation," published at NeurIPS 2018, which proposes a training algorithm that flexibly chooses neural modules based on the processed data.
(15:13) Louis briefly reviewed his technical report "Scaling Neural Networks Through Sparsity," which discusses near-term and long-term solutions to handle sparsity between neural layers.
(18:30) Louis mentioned his report "Characteristics of Machine Learning Research with Impact," which explores questions such as how to measure research impact and which questions the machine learning community should focus on to maximize impact.
(21:16) Louis explained his report "Contemporary Challenges in Artificial Intelligence," which covers lifelong learning, scalability, generalization, self-referential algorithms, and benchmarks.
(23:16) Louis talked about his motivation to start a blog and discussed his two-part blog series on theories of intelligence (part 1 on universal AI and part 2 on active inference).
(27:46) Louis described his decision to pursue a Ph.D. at the Swiss AI Lab IDSIA in Lugano, Switzerland, where he has been working on Meta Reinforcement Learning agents with Jürgen Schmidhuber.
(30:06) Louis created a very extensive map of reinforcement learning in 2019 that outlines the goals, methods, and challenges associated with the RL domain.
(33:50) Louis unpacked his blog post reflecting on his experience at NeurIPS 2018 and providing updates on the AGI roadmap regarding topics such as scalability, continual learning, meta-learning, and benchmarks.
(37:04) Louis dissected his ICLR 2020 paper "Improving Generalization in Meta Reinforcement Learning using Learned Objectives," which introduces a novel algorithm called MetaGenRL, inspired by biological evolution.
(44:03) Louis elaborated on his publication "Meta-Learning Backpropagation And Improving It," which introduces the Variable Shared Meta-Learning framework that unifies existing meta-learning approaches and demonstrates that simple weight-sharing and sparsity in a network are sufficient to express powerful learning algorithms.
(51:14) Louis expanded on his idea to bootstrap AI, which concerns how the task, the general meta-learner, and the unsupervised objective should interact (proposed at the end of his invited talk at NeurIPS 2020).
(54:14) Louis shared his advice for individuals who want to make a dent in AI research.
(56:05) Louis shared his three most useful productivity tips.
(58:36) Closing segment.

Louis's Contact Info: Website, Twitter, LinkedIn, Google Scholar, GitHub

Mentioned Content

Papers and Reports:
Differentiable Convolutional Neural Network Architectures for Time Series Classification (2017)
Transfer Learning for Speech Recognition on a Budget (2017)
Modular Networks: Learning to Decompose Neural Computation (2018)
Contemporary Challenges in Artificial Intelligence (2018)
Characteristics of Machine Learning Research with Impact (2018)
Scaling Neural Networks Through Sparsity (2018)
Improving Generalization in Meta Reinforcement Learning using Learned Objectives (2019)
Meta-Learning Backpropagation And Improving It (2020)

Blog Posts:
Theories of Intelligence, Part 1 and Part 2 (July 2018)
Modular Networks: Learning to Decompose Neural Computation (May 2018)
How to Make Your ML Research More Impactful (Dec 2018)
A Map of Reinforcement Learning (Jan 2019)
NeurIPS 2018, Updates on the AI Roadmap (Jan 2019)
MetaGenRL: Improving Generalization in Meta Reinforcement Learning (Oct 2019)
General Meta-Learning and Variable Sharing (Nov 2020)

People:
Jeff Clune (for his push on meta-learning research)
Kenneth Stanley (for his deep thoughts on open-ended learning)
Jürgen Schmidhuber (for being a visionary scientist)

Book: "Grit" (by Angela Duckworth)

KI in der Industrie
Kurz KI am Rotorblatt

KI in der Industrie

Play Episode Listen Later Mar 31, 2021 21:57


Masci is a co-founder of NNAISENSE. Together with his doctoral advisor, Prof. Dr. Jürgen Schmidhuber, the company applies AI and deep learning across a range of industrial applications. Their robot hand, developed with Festo and based at the time on a reinforcement learning approach, attracted considerable attention. We have a new old partner: Hannover Messe - we are delighted. Visit https://www.hannovermesse.de digitally from April 12 to 16, 2021, and look forward to AI talks with Sepp Hochreiter, Toby Walsh, Jürgen Schmidhuber, and many other guests.

KI in der Industrie
AI and Körber's Somewhat Different Strategy

KI in der Industrie

Play Episode Listen Later Mar 25, 2021 43:37


Now that is quite a statement: Körber's business units must generate 30 percent of their revenue with digital products by 2025. In this interview, Christian Schlögel and Daniel Szabo reveal what role AI plays in this and how Körber is also positioning itself strategically. We have a new old partner: Hannover Messe, and we are delighted. Visit https://www.hannovermesse.de digitally from April 12 to 16, 2021, and look forward to AI talks with Sepp Hochreiter, Toby Walsh, Jürgen Schmidhuber, and many other guests. Want even more AI in industry? www.kipodcast.de Or our book https://www.hanser-fachbuch.de/buch/KI+in+der+Industrie/9783446463455 From the current segment:

SpaceBase Podcast
Supporting International Space Missions from the Bottom of the World: An Interview with Robin McNeil

SpaceBase Podcast

Play Episode Listen Later Mar 23, 2021 41:24


An interview with Robin McNeil, Engineering and Ground Segment Manager at Great South, where he oversees the space programmes. Robin is called the "Dish Master" at the Awarua Ground Stations located in Invercargill. In the past he has worked for Thermo Cell and the International Telecommunications Union. Robin has an honours degree in Electrical and Electronic Engineering from the University of Canterbury and a BA in English from Massey University. He is a Fellow of IPENZ (Institution of Professional Engineers New Zealand), a Senior Member of IEEE, and a Member of the New Zealand Order of Merit. In this interview, we are going to talk about Robin’s career journey and the importance of his ground station work at the bottom of the world in support of international space missions around the globe. Additional Resources: Spacecraft Operations by Thomas Uhlig, Florian Sellmaier, and Michael Schmidhuber (Eds.) Hosted by: Emeline Paat-Dahlstrom, Co-Founder, SpaceBase Music: reCreation by airtone (c) copyright 2019 Licensed under a Creative Commons (3.0) If you like our work, please consider donating to SpaceBase through The Gift Trust or RSF Social Finance (for US charitable donations) and indicate "SpaceBase" gift account.

KI in der Industrie
AI in Porsche's Production Planning

KI in der Industrie

Play Episode Listen Later Mar 10, 2021 48:59


Customers can order the Porsche 911 with more than 250 options. In the automotive industry, that in itself is nothing new, but the challenges grow with every option. Porsche now wants to optimize its planning with AI. In the podcast, Simon Dürr explains how the Swabians do it and what role the online configurator plays. We have a new old partner: Hannover Messe, and we are delighted. Visit https://www.hannovermesse.de digitally from April 12 to 16, 2021, and look forward to AI talks with Sepp Hochreiter, Toby Walsh, Jürgen Schmidhuber, and many other guests. Want even more AI in industry? www.kipodcast.de Or our book https://www.hanser-fachbuch.de/buch/KI+in+der+Industrie/9783446463455 Or Peter's book for grandpa and grandma https://www.hanser-fachbuch.de/buch/Wie+KI+unser+Leben+veraendert/9783446466920 From the current segment: The VDI training course https://www.vdi-wissensforum.de/lehrgaenge/fachingenieur-data-science-vdi/ DX21 https://www.hsu-hh.de/imb/en/dx-2021 Deloitte and Nvidia https://www.techrepublic.com/article/deloitte-partnering-with-nvidia-to-launch-artificial-intelligence-computing-center/ Contesting automated decisions https://unding.de

Gespräche von Morgen
#031 | Man, Machine, Superintelligence? The Future of Our Species with Harald Lesch & Jürgen Schmidhuber

Gespräche von Morgen

Play Episode Listen Later Dec 9, 2020 59:09


To close out the year, we have an incredibly interesting debate with Harald Lesch (physicist, philosopher, astronomer, and known from the ZDF series "Terra X") and Jürgen Schmidhuber (computer scientist and specialist in artificial intelligence and deep learning). Together with them, we discuss the advantages and disadvantages of artificial intelligence: Who is really called upon to save our planet? With Corona, is it now really about the future, or rather about the present? How are computers connected to society? Can we really use AIs to look into the future and plan ahead? After all, every small change that occurs influences the further course of world events. How are AIs built, and what happens if they one day come so close to humans that they too could suffer from depression? Do we really need such a superintelligence? What might a sustainable society fit for the future look like? What kind of world do we want to shape? Which technologies and innovations should we keep an eye on, and how do they affect our lives? Jonathan Sierck, founder of vonMorgen and an expert in digital learning, embarks on a journey into the future with pioneers, luminaries, and inspiring personalities from all over the world. In Gespräche von Morgen, we look for clear answers and (also) controversial perspectives on humanity's big questions, explore future trends and technologies, and explain what really matters when it comes to future skills. We open up exciting perspectives and insights that encourage reflection and action. The future begins now, with you, in the Gespräche von Morgen. Support us at: https://www.patreon.com/vonmorgen Instagram: @teamvonmorgen Twitter: @vonMorgenLearn Facebook: fb.me/teamvonmorgen LinkedIn: @vonMorgen Webpage: www.vonmorgen.io

#SRFglobal
#SRFglobal vom 03.09.2020

#SRFglobal

Play Episode Listen Later Sep 3, 2020 27:16


Are we currently witnessing the third revolution in warfare? After the invention of gunpowder and the atomic bomb, will artificial intelligence now change the nature of war from the ground up? Will killer robots fight killer robots on the battlefields of the future? It still sounds like a science fiction film for killer robots to go into battle on their own, fighting other robots and killing humans, but it is no longer far from what is technically feasible. The arms industry is working flat out on so-called intelligent weapons systems. Some states have voiced criticism but have so far been unable to agree on new rules. In «#SRFglobal», Sebastian Ramspeck discusses with: Beatrice Heuser, military expert; Brandon Bryant, former drone pilot of the US armed forces; Jody Williams, Nobel Peace Prize laureate and co-initiator of the «Stop Killer Robots» campaign; Jürgen Schmidhuber, father of modern artificial intelligence; and Pascal Weber, SRF Middle East correspondent.


The So Strangely Podcast
Unmixer: Loop Extraction with Repetition, with Dr. Jordan Smith and Tim de Reuse

The So Strangely Podcast

Play Episode Listen Later Aug 24, 2020 60:10


Music technology PhD Candidate Tim de Reuse recommends “Unmixer: An Interface for Extracting and Remixing Loops” by Jordan Smith, Yuta Kawasaki, and Masataka Goto, published in the proceedings of ISMIR 2019. Tim and Finn interview Jordan about the origins of this project, the algorithm behind the loop extraction, the importance of repetition in music, and the creative and playful applications of Unmixer. Note: This conversation was recorded in December 2019. Technical issues with some tracks contributed to delays. Apologies for the choppy audio quality.

Time Stamps
[0:01:40] Project Summary
[0:05:05] Demonstration of Unmixer
[0:14:27] Origins of the UnMixer project
[0:19:44] Factorisation algorithm
[0:28:37] Computational and musical objectives for factorisation
[0:36:15] The Unmixer web interface
[0:41:30] 2nd Demonstration, parameters and track selection
[0:49:13] What Unmixer tells us about music

Show notes
Recommended article: Smith, J., Kawasaki, Y., & Goto, M. (2019). Unmixer: An Interface for Extracting and Remixing Loops. Proceedings of the 20th ISMIR meeting, Delft, Netherlands.
UnMixer website: https://unmixer.ongaaccel.jp/
Project webpage
Interviewee: Dr. Jordan BL Smith, Research Scientist at TikTok. Website, twitter
Co-host: PhD Candidate Tim de Reuse, website, twitter
Papers cited in the discussion:
Smith, J. B., & Goto, M. (2018, April). Nonnegative tensor factorization for source separation of loops in audio. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 171-175). IEEE.
Schmidhuber, J. (2009). Simple algorithmic theory of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. Journal of SICE, 48(1).
Rafii, Z., & Pardo, B. (2012). Repeating pattern extraction technique (REPET): A simple method for music/voice separation. IEEE Transactions on Audio, Speech, and Language Processing, 21(1), 73-84.
Music sampled:
Daft Punk, Random Access Memories (2013): Doing it Right (ft. Panda Bear)
Martin Solveig & Dragonette, Smash (2011): Hello - Single Edit
Mura Masa, Soundtrack To a Death (2014): I've Never Felt So Good
Other references: Madeon's Adventure Machine; Chocolate Rain by Tay Zonday

Credits
The So Strangely Podcast is produced by Finn Upham, 2020. The closing music includes a sample of Diana Deutsch's Speech-Song Illusion sound demo 1.

Food Talk with Dani Nierenberg
157 - Lawrence Haddad on Promoting Nutritious Foods, Josef Schmidhuber on Agricultural Economics

Food Talk with Dani Nierenberg

Play Episode Listen Later May 25, 2020 61:52


On “Food Talk with Dani Nierenberg,” Dani talks with Lawrence Haddad, the Executive Director of the Global Alliance for Improved Nutrition (GAIN). GAIN works with governments, nonprofit organizations, and businesses to promote nutritious foods in areas of the world with malnutrition. Haddad describes to Dani the current state of countries facing malnutrition and the efforts GAIN is taking to improve those conditions. Then, Dani interviews Josef Schmidhuber, Deputy Director for the Division of Labor and Markets at the UN Food and Agriculture Organization, to discuss the resilience of the agricultural sector. While you’re listening, subscribe, rate, and review the show; it would mean the world to us to have your feedback. You can listen to “Food Talk with Dani Nierenberg” wherever you consume your podcasts.

Robustly Beneficial Podcast
AI vs COVID19 #RB15

Robustly Beneficial Podcast

Play Episode Listen Later Apr 27, 2020 62:08


We discuss ideas presented in this blog post by Jürgen Schmidhuber, and beyond. http://people.idsia.ch/~juergen/ai-covid.html Timecodes: 1:55 Population-scale analysis 9:11 Individual risk assessment 22:11 Drug discovery 30:22 Recommender systems 43:44 Computational thinking

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Mar 16, 2020 33:19


Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, which has become a prevalent neural network commonly used in devices such as smartphones, and which we discussed in detail in our first conversation with Jürgen back in 2017. In this conversation, we dive into some of Jürgen’s more recent work, including his recent paper, Reinforcement Learning Upside Down: Don’t Predict Rewards — Just Map Them to Actions. Check out the show notes page at twimlai.com/talk/357.

Lex Fridman Podcast
#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI

Lex Fridman Podcast

Play Episode Listen Later Feb 26, 2020 100:23


Marcus Hutter is a senior research scientist at DeepMind and professor at Australian National University. Throughout his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, including the development of the AIXI model, a mathematical approach to AGI that incorporates ideas of Kolmogorov complexity, Solomonoff induction, and reinforcement learning. EPISODE LINKS: Hutter Prize: http://prize.hutter1.net Marcus web: http://www.hutter1.net Books mentioned: – Universal AI: https://amzn.to/2waIAuw – AI: A Modern Approach: https://amzn.to/3camxnY – Reinforcement Learning: https://amzn.to/2PoANj9 – Theory of Knowledge: https://amzn.to/3a6Vp7x This conversation

AI with AI
Hit the Wall: Do Not Play GO (Part II)

AI with AI

Play Episode Listen Later Jan 10, 2020 33:55


In research, Andy and Dave discuss a new idea from Schmidhuber, which introduces Upside-Down reinforcement learning, where no value functions or policy search are necessary, essentially transforming reinforcement learning into a form of supervised learning. Research from OpenAI demonstrates a “double-descent” inherent in deep learning tasks, where performance initially gets worse and then gets better as the model increases in size. Tortoise Media provides yet-another-AI-index, but with a nifty GUI for exploration. August Cole explores a future conflict with Arctic Night. And Richard Feynman provides thoughts (from 1985) on whether machines will be able to think. Twitter Throwdown: On 23 December, Yoshua Bengio and Gary Marcus will have a debate on the Best Way Forward for AI.

Mittelmaß und Wahnsinn
To AI or not to AI

Mittelmaß und Wahnsinn

Play Episode Listen Later Nov 5, 2019 42:47


Welcome to another special edition of „Mediocrity and Madness“! Usually, this podcast is dedicated to the ever-widening gap between talk and reality in our big organizations, most notably in our global corporates. Well, I might have to admit that in some cases the undertone is a tiny bit angry and another bit tongue-in-cheek. The title might indicate that. Today’s episode is not like this. Well, it is, but in a different way. Upon reflection, it still addresses a mighty chasm between talk and reality, but the reason for this chasm appears more forgivable to me than those many dysfunctions we appear to have accepted against better judgement. Today’s podcast is about artificial intelligence and our struggles to put it to use in businesses. This podcast is to some measure inspired by what I learned in and around two programs of Allianz, “IT Literacy for top executives” and “AI for the business”, which I had the privilege and the pleasure to help develop and facilitate. I am tempted to begin this episode with the same claim I used in the last (German) one: With artificial intelligence it is like with teenage sex. Everybody talks about it, but nobody really knows how it works. Everybody thinks that everyone else does it. Thus, everybody claims he does it. And again, Dan Ariely gets all the credit for coining that phrase with “Big Data” instead of “artificial intelligence”, which is actually a bit related anyway. Or not. As we will see later. To begin with, the big question is: What is “artificial intelligence” after all? The straightforward way to answer that question is to first define what intelligence is in general and then apply the notion that “artificial” is just when the same is done by machines. Yet here begins the problem. There simply is no proper definition of intelligence. Some might say intelligence is what discerns man from animal, but that’s not very helpful either. Where’s the border?
When I was a boy, I read that a commonplace definition was that humans use tools while animals don’t. Besides the question whether that little detail would be one that made us truly proud of our human intelligence, multiple examples of animals using tools have been found since. To make a long story short, there is no proper and general definition of intelligence. Thus, we end up with some self-referentiality: “It’s intelligent if it behaves like a human”. In a way, that’s quite a dissatisfying definition, most of all because it leaves no room for types of intelligences that behave – or “are” – significantly non-human. The “black swan” sends its regards. But we’re detouring into philosophy. Back to our problem at hand: What is artificial intelligence after all? Well, if it’s intelligent, if it behaves like a human, then the logical answer to this question is: “artificial intelligence is when a computer/machine behaves like a human”. For practical purposes this is something we can work with. Yet even then another question looms: How do we evaluate whether it behaves like a human? Being used to some self-referentiality already, the answer is quite straightforward: “It behaves like a human if other humans can’t tell the difference from human behavior.” This is actually the essence of what is called the “Turing test”, devised by the famous British mathematician Alan Turing who, next to basically inventing what we today call computer science, helped solve the Enigma encryption during World War II. Turing’s biography is as inspiring as it is tragic, and I wouldn’t mind if you stopped listening to this humble podcast and explored Turing in a bit more depth, for example by watching “The Imitation Game” starring Benedict Cumberbatch.
If you decide to stay with me instead of Cumberbatch, that’s where we finally are: “Artificial intelligence is when a machine/robot behaves in a way that humans can’t discern that behavior from human behavior.” As you might imagine, the respective tests have to be designed properly so that biases are avoided. And, of course, the questions or problems designed to ascertain human or less human behavior have to be designed carefully, too. These are subjects of more advanced versions of the Turing test, but in the end, the ultimate condition remains the same: A machine is regarded intelligent if it behaves like a human.

(Deliberately) stupid?

It has taken us some time to establish this somewhat flawed, extremely human-centric but workable definition of machine intelligence. It poses some questions and it helps answer some others. One question that is discussed around the Turing test is indeed whether would-be artificial intelligences should deliberately put a few mistakes into their behavior, even despite better knowledge, just in order to appear more human. I think that question comes more from would-be philosophers than it is a serious one to consider. Yet, you could argue that if taking the Turing test seriously, in order to convince a human of being a fellow human the occasional mistake is appropriate. After all, “to err is human”. Again, the question appears a bit stupid to me. Would you really argue that it is intelligent only if it occasionally errs? The other side of that coin, though, is quite relevant. In many discussions about machine intelligence, the implicit or explicit requirement appears to be: If it’s done by a machine, it needs to be 100%. I reason that’s because when dealing with computer algorithms, like calculating, for example, the trajectory of a moon rocket, we’re used to zero errors; given that the programming is right, that there are no strange glitches in the hardware and that the input data isn’t faulty as such.
Writing that, a puzzling thought enters my mind: We trust in machine perfection and expect human imperfection. Not a good outlook in regard to human supremacy. Sorry, I’m on another detour. Time to get back to the question of intelligence. If we define intelligence as behavior being indiscernible from human behavior, why then do we wonder if machine intelligence doesn’t yield 100% perfect results? Well, for the really complex problems it would actually be impossible to define what “100% perfect” even is, neither ex ante nor ex post, but let’s stick to the simpler problems for now: pattern recognition, predictive analysis, autonomous driving … . Intelligent beings make mistakes. Even those whose intelligence is focused onto a specific task. Human radiologists identify some spots on their pictures falsely as positive signs of cancer whilst they overlook others that actually would be malignant. So do machines trained to the same purpose.

Competition

I am rather sure that the kind listener’s intuitive reaction at this point is: “Who cares? – If the machine makes fewer errors than her human counterpart, let her take the lead!” And of course, this is the only logical conclusion. Yet quite often, here’s one major barrier to embracing artificial intelligence. Our reaction to machines threatening to become better than us but not totally perfect is poking for the outliers and inflating them until the use of machine intelligence feels somewhat disconcerting. Well, they are competitors after all, aren’t they? The radiologist case is especially illuminating. In fact, the problem is that amongst human radiologists there is a huge, huge spread in competency. Whilst a few radiologists are just brilliant in analyzing their pictures, others are comparatively poor. The gap not only results from experience or attitude; there are also significant differences from country to country, for example.
Thus, even if the machine did not beat the very best of radiologists, it would be a huge step ahead and would save many, many lives if one could just provide a better average across the board – which is what commonly available machines geared to the task do. Guess what your average radiologist thinks about that. – Ah, and don’t worry: if the machine is not yet better than her best human colleagues, it is but a matter of weeks or months, or maybe a year or two, until she is, as we will see in a minute. You still don’t believe that this impedes the adoption of artificial intelligence? – Look at this example that made it into the feuilletons not long ago. Autonomous driving. Suppose you’re sitting in a car that is driven autonomously by some kind of artificial intelligence. All of a sudden, another car – probably driven by a human intelligence – comes towards you on the rather narrow street you’re driven through. Within microseconds, your car recognizes its choices: divert to the right and kill a group of kids playing there, divert to the left and kill some adults in their sixties, one of whom it recognizes as an important advisor to an even more important politician, or keep its track and kill both the occupants of the oncoming car … and unfortunately you yourself. The dilemma has been stylized into a kind of fundamental question by some would-be philosophers with the underlying notion of “if we can’t solve that dilemma rationally, we might better give up the whole idea of autonomous driving for good.” Well, I am exaggerating again, but there is some truth in that. Now, as the dilemma is inextricable as such: bye, bye autonomous driving! Of course, the real answer is all but philosophical. Actually, it doesn’t matter what choice the intelligence driving our car makes. It might actually just throw a die in its random access memory. We have thousands of traffic victims every year anyway.
Humankind has decided to live with that sad fact as the advantages of mobility outweigh these bereavements. We have invented motor liability insurance exactly for that reason. Thus, the only and very pragmatic question has to be: Do the advantages of autonomous driving outweigh some sad accidents? – And fortunately, the probability is that autonomous driving will massively reduce the number of traffic accidents, so the question is actually a very simple one to deal with. Except probably for motor insurance companies … and some would-be philosophers.

Irreversible

Here’s another intriguing thing with artificial intelligence: irreversibility. As soon as machine intelligence has become better than man in a specific area, the competition is won forever by the machines. Or lost for humankind. Simple: as soon as your artificial radiologist beats her human colleague, the latter one will never catch up again. On the contrary. The machine will improve further, in some cases very fast. Man might improve a little over time, but by far not at the same speed as his silicon colleague … or competitor … or potential replacement. In some cases, the world splits into two parallel ones: the machine world and the human world. This is what happened in 1997 with the game of Chess when Deep Blue beat the then world champion Garry Kasparov. Deep Blue wasn’t even an intelligence. It was just brute force with input from some chess-savvy programmers, but since then humans have lost the game to the machines, forever. In today’s chess tournaments not the best players on earth compete but the best human players. They might use computers to improve their game, but none of them would stand the slightest chance against a halfway decent artificial chess intelligence … or even a brute-force algorithm. The loss of chess for humankind is a rather ancient story compared to the game of Go. Go, being multitudes more complex than chess, resisted the machines for about twenty years more.
Brute force doesn’t work for Go, and thus it took until 2016 until AlphaGo, an artificial intelligence designed by Google’s DeepMind to play Go, finally conquered that stronghold of humanity. That year, AlphaGo defeated Lee Sedol, one of the best players in the world. A few months later, the program also defeated Ke Jie, the then top-ranking player in the world. Most impressive, though, is that again only a few months later DeepMind published another version of its Go genius: AlphaGo Zero. Whilst AlphaGo had been trained with huge numbers of Go matches played by human players, AlphaGo Zero had to be taught only the rules of the game and developed its skills purely by playing against versions of itself. After three days, this version beat its predecessor, the one that had won against Lee Sedol, by 100:0. And again only three months later, another version was deployed. AlphaZero learnt the games of Chess, Go and Shogi, another highly complex strategy game, in only a few hours and defeated all previous versions in a sweep. By then, man was out of the picture for what can be considered an eternity by measures of AI development cycles. AlphaZero not only plays a better Go – or Chess – than any human does, it develops totally new strategies and tactics to play the game; it plays moves never considered reasonable before by its carbon-based predecessors. It has transcended its creators in the game, and never again will humanity regain that domain. This, you see, is the nature of artificial intelligence: as soon as it has gained superiority in a certain domain, this domain is forever lost for humankind. If anything, another technology will surpass its predecessor. We and our human brains won’t. We might comfort ourselves that it’s only rather mundane tasks that we cede to machines of specialized intelligence, that it’s still a long way towards a more universal artificial intelligence and that, after all, we’re the creators of these intelligences … .
But the games of Chess and Go are actually not quite so mundane, and the development is somewhat exponential. Finally, a look into ancient mythology is all but comforting. Take Greece as an example: the progenitor of gods, Uranos, was emasculated by his offspring, the Titans, and these again were defeated and punished by their offspring, the Olympians, who then ruled the world, most notably Zeus, Uranos’ grandson. Well, Greek mythology is probably not what the kind listener expects from a podcast about artificial intelligence. Hence, back to business.

AI is not necessarily BIG Data

Here’s a not so uncommon misconception: AI or advanced analytics is always Big Data, or – more exactly – Big Data is a necessary prerequisite for advanced analytics. We could make use of the AlphaZero example again. There could hardly be less data necessary. Just a few rules of the game and off we go! “Wait”, some will argue, “our business problems aren’t like this. What we want is predictive analysis and that’s Big Data for sure!”. I personally and vehemently believe this is a misconception. I actually assume it is a misconception with a purpose, but before sinking deeper into speculation, let’s look at an example, a real business problem. I have spent quite some years in the insurance business, so please forgive me for using an insurance example. It is very simple. The idea is using artificial intelligence for calculating insurance premiums, specifically motor insurance third-party liability (TPL). Usually, this insurance is mandatory. The risk it covers is that you, driving a car – or parking it – damage an object that belongs to someone else or injure someone else. Usually, your insurance premium should reflect the risk you want to cover. Thus, in the case of TPL the essential question from an actuary’s point of view is the following one: Is the person under inspection a good driver or a not so good one?
“Good” in the insurer’s sense: less prone to cause an accident and, if so, one that usually doesn’t come with big damage. There are zillions of ways to approach that problem. The best would probably be to get an individual psychological profile of the respective person, add a decently detailed analysis of her driving patterns (where, when, …) and calculate the premium based on that analysis, maybe using some sort of artificial intelligence in order to cope with the complex set of data. The traditional way is comparatively simplistic and indirect. We use a mere handful of data, some of them related to the car, like type and registration code, some personal data, like age or homeownership, and some about driving patterns, mostly yearly mileage, and calculate a premium out of these few by some rather simple statistical analysis. If we were looking for more Big Data-ish solutions, we could consider basing our calculation on social media timelines. Young males posting photos that show them Friday and Saturday nights in distant clubs with fancy drinks in their hands should emerge with way higher premiums than their geeky contemporaries who spend their weekends in front of some computers, using their cars only to drive to the next fast food restaurant or once a week to the comic book shop. The shades in between might be subtle, and an artificial intelligence might come up with some rather delicate distinctions. And you might not even need a whole timeline. Just one picture might suffice. The forms of our faces, our haircut, the glasses we fancy, the jewelry we wear, the way we wrinkle our noses … might well be very good indicators of our driving behavior. Definitely a job for an artificial intelligence. I’m sure you can imagine other avenues. Some are truly Big Data, others are rather small in terms of data … and fancy learning machines.
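The traditional approach just described – a mere handful of rating factors combined by some rather simple statistics – fits in a few lines of code. A minimal, hypothetical sketch; all factor values below are invented for illustration, not real actuarial figures:

```python
# Toy motor TPL tariff: a handful of rating factors multiplied onto a
# base premium -- the "simplistic and indirect" traditional approach.
# All numbers are invented for illustration, not real actuarial values.

BASE_PREMIUM = 400.0  # hypothetical yearly base premium in EUR

AGE_FACTOR = {"18-24": 1.8, "25-64": 1.0, "65+": 1.2}
CAR_FACTOR = {"compact": 0.9, "sedan": 1.0, "sports": 1.5}
MILEAGE_FACTOR = {"low": 0.85, "medium": 1.0, "high": 1.25}

def tpl_premium(age_band: str, car_type: str, mileage_band: str) -> float:
    """Combine a few rating factors into a premium estimate."""
    factor = (AGE_FACTOR[age_band]
              * CAR_FACTOR[car_type]
              * MILEAGE_FACTOR[mileage_band])
    return round(BASE_PREMIUM * factor, 2)

# The 19-year-old with the high-powered car and high mileage pays far
# more than his geeky low-mileage contemporary:
print(tpl_premium("18-24", "sports", "high"))   # 1350.0
print(tpl_premium("25-64", "compact", "low"))   # 306.0
```

Swapping in the social-media approach would only change where the factors come from, not the shape of the calculation – which is exactly the point made next: very different data can carry the same information.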
The point is, these very different approaches may well yield very similar results; i.e., a few data points related to your car might reveal quite as much about the question at hand as an analysis of your Instagram story. The fundamental reason is that data as such are worthless. Valuable is only what we extract from that data. This is the so-called DIKW hierarchy: Data, Information, Knowledge, Wisdom. The true challenge is extracting wisdom from data. And the rule is not: more data – more wisdom. On the contrary. Too much data might in fact clutter the way to wisdom. And in any case, very different data might represent the same information, knowledge or wisdom. As concerns our example, I have first of all to admit that I have no analytical proof – or wisdom – about the specifics I am going to discuss, but I feel confident that the examples illustrate the point. Here we go. The type of car – put into the right correlation with a few other data – might already contain most of the knowledge you could gain from a full-blown psychological analysis or a comprehensive inspection of a person’s social media profile. Data representing a 19-year-old male, living in a certain area of town, owning a used but rather high-powered car, driving a certain mileage per year might very well contain the same information with respect to our question about “good” driving as all the pictures we find in his Facebook timeline. And the other way around. The same holds true for the information we might get out of a single static photo. Yet the Facebook timeline or the photo are welling over with information that is irrelevant for our specific problem. Or irrelevant at all. And it is utterly difficult to a) get the necessary data in a proper breadth and quality at all and b) distill relevant information, knowledge and wisdom from this cornucopia of data. Again: more data does not necessarily mean more wisdom! It might.
But one kind of data might – no: will – contain the same information as other kinds. Even the absence of data might contain information or knowledge. Assume, for instance, someone explicitly denies her consent to the use of her data for marketing purposes. That might mean she is anxious about her data privacy, which in turn might indicate that she is also concerned about other burning social and environmental issues, which then might indicate she doesn’t use her car a lot and, if so, in a rather responsible way … . You get the point. Most probably that whole chain of reasoning won’t work with that single piece of data in isolation, but put into the context of other data, there might actually be wisdom in it. Actually, looking at the whole picture, this might not even be a chain of reasoning but rather a description of a certain state of things that defies decomposition into human logic. Which leads us to another issue with artificial intelligence.

The unboxing problem

Artificial intelligences, very much like their human contemporaries, can’t always be understood easily. That is, the logic, the chain of reasoning, the parameters that causally determine certain outcomes, decisions or predictions are in many cases less than transparent. At the same time, we humans demand from artificial intelligence what we can’t deliver for our own reasoning: this very transparency. Quite like our demand for 100% machine perfection, some control instinct of ours claims: if it’s not transparent to us (humans), it isn’t worth much. Hence, a line of research in the field of artificial intelligence has developed: “unboxing the AI”. Except for some specific cases, the outlook for this discipline isn’t too bright yet. The reason is the very way artificial intelligence works. Made in the image of the human brain, artificial intelligences consist of so-called “neural networks”. A neural network is more or less a – layered – mesh of nodes. 
The strength of the connections between these nodes determines how input to the network is translated into output. Training the AI means varying the strengths of these connections until the network translates the input into the desired output in a decent manner. There are different topologies for these networks, tailored to certain classes of problems, but the thing as such is rather universal. Hence AI projects can be rather simple by IT standards: define the right target function, collect proper training data, feed that data into your neural network, train it … . It takes but a couple of weeks and voilà, you have an artificial intelligence that you can throw at new data for solving your problem. In short, what we call “intelligence” is the state of the strengths of all the connections in your network. The number of these connections can be huge, and the nature of the neural network is actually agnostic to the problem you want it to solve. “Unboxing” would thus mean extracting specific criteria backwards from such a huge and agnostic network. In our radiologist case, for example, we would have to find something like “serrated fringes” or “solid core” in nothing but this set of connection strengths in our network. Have fun! Well, you might approach the problem differently by simply probing your AI in order to learn whether and how it actually reacts to serrated fringes. But that approach has its limits, too. If you don’t know what to look for, or if the results are determined not by a single criterion but by the entirety of the data, looking for specifics becomes utterly difficult. Think of AlphaZero again. It develops strategies and moves that were unknown to man before. Can we really claim we must understand the logic behind them, neglecting the fact that Go as such resisted straightforward tactics and logic patterns for all the centuries humans have played it? The question is: why “unboxing” after all? 
Have you ever asked to unbox a fellow human’s brain? OK, being able to do that for your adolescent kids’ brains would be a real blessing! But normally we don’t unbox brains. Why are we attracted by one person and not by another? Is it the colour of her eyes, her laughter lines, her voice, her choice of words …? Why do we find one person trustworthy and another one not? Is it the way she stands, her dress, her sincerity, her sense of humour? How do we solve a mathematical problem? Or a business one? When and how do the pieces fall into place? Where does the crucial idea emerge from? Even when we strive to rationalize our decision making, there always remain components we cannot properly “unbox” – if the problem at hand is complex, and thus relevant, enough. We “factor in” strategic considerations, assumptions about the future, others’ expectations … . Parts of our reasoning are shaped by our personal experiences and our individual preferences, like our risk appetite, values, aspirations, … . Unbox this! Humankind has learnt to cope with the impossibility of “unboxing” brains or lives. We probe others, and if we’re happy with the results, we start trusting. We cede responsibilities and continue probing. We cede more responsibilities … and sometimes we are surpassed by the very persons we promoted. Ah, I am entering philosophical grounds again. Apologies! To make it short: I admit there are some cases in which you might need full transparency, complete “unboxing”. And if you can’t get it, abandon the idea of using AI for the problem you had in mind. But there are more cases in which the desire for unboxing is just another pretense for not charting new territory. If it’s intelligent because it behaves like a human, why do we ask for so much more from the machines than we would ask from man? Again, I am drifting off into questions of a dangerously fundamental nature. 
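Back to the mechanics for a moment: the “mesh of nodes” and the training by “varying the strengths of these connections” described earlier can be sketched in a few dozen lines of plain Python. This is a purely illustrative toy, a tiny network learning the XOR problem by backpropagation, not any production architecture:

```python
import math
import random

# A tiny "layered mesh of nodes": 2 inputs, a few hidden nodes, 1 output,
# trained by nudging connection strengths (plain backpropagation) until it
# reproduces XOR. Purely illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def train_xor(seed, hidden=4, lr=2.0, epochs=10000):
    rnd = random.Random(seed)
    w_ih = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b_h = [rnd.uniform(-1, 1) for _ in range(hidden)]
    w_ho = [rnd.uniform(-1, 1) for _ in range(hidden)]
    b_o = rnd.uniform(-1, 1)

    def forward(x):
        h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
             for j in range(hidden)]
        o = sigmoid(sum(w_ho[j] * h[j] for j in range(hidden)) + b_o)
        return h, o

    for _ in range(epochs):
        for x, y in DATA:
            h, o = forward(x)
            d_o = (o - y) * o * (1 - o)            # output-node gradient
            for j in range(hidden):
                d_h = d_o * w_ho[j] * h[j] * (1 - h[j])
                w_ho[j] -= lr * d_o * h[j]         # vary the strengths ...
                b_h[j] -= lr * d_h
                for i in range(2):
                    w_ih[j][i] -= lr * d_h * x[i]  # ... layer by layer
            b_o -= lr * d_o
    return forward

# Training is non-convex: a run can get stuck, so retry with fresh random
# connection strengths until all four patterns come out right.
for seed in range(10):
    net = train_xor(seed)
    if all(round(net(x)[1]) == y for x, y in DATA):
        break

for x, y in DATA:
    print(x, "->", round(net(x)[1]))
```

Note the restart loop: training can land in a local minimum, and the final “intelligence” is nothing but the resulting set of connection strengths – exactly the thing that resists “unboxing”.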
Let’s assume for once that we have overcome all our concerns, prejudices and excuses, and that despite all of them we have a business problem we wholeheartedly want to throw artificial intelligence at. Then comes the biggest challenge of all.

The biggest challenge of all: how to operationalize it

Pretty much like in our discussion at the beginning of this post, on the face of it, it looks simple: unplug the human intelligence occupied with the work at hand and plug in the artificial one. If the project is significant – quite a few AI projects are still more in the toy category – this comes along with all the challenges we are used to in what we call change management. Automating tasks comes with adapting to new processes, jobs becoming redundant, layoffs, re-training and rallying the remaining workforce behind the new ways of working. Yet changes related to artificial intelligence might have a very different quality. They are about “intelligence” after all, aren’t they? They are not about replacing repetitive, sometimes strenuous or boring work like welding metal or consolidating accounting records; they cut to the heart of our pride. Plus, the results are by default neither perfect nor “unboxable”. That makes it very hard to actually operationalize artificial intelligence. Here’s an example. It is more than fifteen years old, taking place at a time when a terabyte was still an incredible amount of storage, when data was still supposed to be stored in warehouses rather than floating around in lakes or oceans, and when true machine learning was still a purely academic discipline. In short: the good old times. This gives us the privilege of stripping the example bare of complexity and buzz. At that time I was, together with a few others, responsible for developing Business Intelligence solutions in the area of insurance sales. 
We had our analytical data stored in the proverbial warehouse, some smart actuaries had applied multivariate statistics to that data and, hurrah, we got propensities to buy and to rescind for our customers. Even with the simple means we had back then, these propensities were quite accurate. As an ex-post analysis showed, they hit the mark 80% of the time by the relevant metrics. Cutting the ranking at rather ambitious levels, we pushed the information to our agents: customers who, with a likelihood of more than 80%, were about to close a new contract or to cancel one … or both. The latter sounds a bit odd, but a deeper look showed that these were indeed customers who were intensely looking for new insurance without strong loyalty. If we won them, they would stay with us and their loyalty would improve; if a competitor won them, they would gradually transfer their portfolio to that competitor. You would think this would be a treasure trove for any salesforce in the world, wouldn’t you? Far from it! Most agents either ignored the information or – worse – discredited it. For the latter purpose they used anecdotal evidence: “My mother-in-law was on the list,” they broadcast, “and she would never cancel her contract.” Well, some analysis showed that she was on the list for a reason, but how would you fight a good story with the intricacies of multivariate statistics? Actually, the mother-in-law issue was more of a proxy for a deeper concern. Client relationships are supposed to be the core competency of any salesforce. And now there comes some algorithm or artificial intelligence that claims to understand at least a (major) part of that core competency as well as the salesforce itself … . Definitely a reason to fight back, isn’t it? Besides this, agents did not use the information because they regarded it as not very helpful. Many of the customers on the high-propensity-to-buy list were their “good” customers anyway, those with whom they were in regular contact already. 
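A minimal sketch of such a propensity ranking, with an invented logistic score standing in for the actuaries’ multivariate model and made-up customers, might look like this:

```python
import math

# Hypothetical stand-in for the actuaries' multivariate model: a logistic
# score over a few warehouse attributes. Coefficients and customers are
# invented for illustration.

COEFFS = {"contracts": 0.6, "recent_contact": 1.1, "years_customer": 0.05}
INTERCEPT = -2.5

def propensity_to_buy(customer):
    z = INTERCEPT + sum(COEFFS[k] * customer[k] for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

customers = [
    {"id": "A", "contracts": 3, "recent_contact": 1, "years_customer": 12},
    {"id": "B", "contracts": 1, "recent_contact": 0, "years_customer": 2},
    {"id": "C", "contracts": 4, "recent_contact": 1, "years_customer": 20},
]

# Rank all customers, then cut the list at an ambitious threshold.
leads = sorted(customers, key=propensity_to_buy, reverse=True)
hot_list = [c["id"] for c in leads if propensity_to_buy(c) > 0.8]
print(hot_list)
```

Note that the issue raised above shows up even in this toy: the highest scores go to the long-standing, recently contacted customers – exactly the ones the agents knew anyway.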
These customers were indeed likely to make another purchase, but agents reasoned they would have contacted them anyway. So, don’t bother with that list. With the list of customers on the verge of rescinding, the problem was a different one. Agents had only very little (monetary) incentive to prevent these customers from doing so. There was a recurring commission, but asked whether to invest valuable time in just keeping a customer or in going for new business, most were inclined to choose the latter. I could continue to no end with stories around that work, but I’d like to share only one more tidbit before entering a brief review of what went wrong: what was the reaction of management higher up the food chain when all these facts trickled in? Well, they questioned the quality of the analysis and demanded that more – today we would say “bigger” – data be included in order to improve that quality, for example by buying sociodemographic data, which was the fad at the time. That might have increased the quality from 80% to 80-something percent, but remember the discussion we had around redundancy of data. The type of car you drive or the sum covered by your home insurance might say much more than sociodemographic data based on the area you live in … not to speak of that eternal management talk about whether 80% is good enough. What went wrong? First, the purpose of the action wasn’t thought through well enough from the start. We more or less just chose the easiest way. Certainly, the purpose couldn’t have been to provide agents with a list of leads they already knew were their best customers. From a business perspective, the group of “second-best customers” might have been much more attractive. Approaching that group and closing new contracts there would not only have created new business but also broadened the base of loyal customers and thus paved the way for longer-term success. 
The price, of course, would have been that these customers were more difficult to win over than the “already good” ones, so agents would have needed an incentive to invest effort into this group. Admittedly, going for the second-best group would have come with more difficulties. We might, for example, have faced many more mother-in-law anecdotes. Second, there was no mechanism in place to foster the use of the information. Whether the agents worked on the leads or not didn’t matter, so why should they bother? It was even worse with the churn list. From a long-term business perspective, it makes all the sense in the world to prevent customer churn, as winning new customers is far more expensive. It also makes perfect sense to try making your second-best customers more loyal, but from a salesperson’s short-term perspective, working the pool of already good customers makes more sense. Thus, in order to operationalize AI, target and incentive systems might need a thorough overhaul. If you are serious, that is. The same holds true if you wanted, for example, to establish machine-assisted sentiment analysis in your customer care center. Third, there was no good understanding of data and data analytics, neither on the intended users’ side nor on the management side. This led to the “usual” reflexes on both sides: resistance on the one side and an oversimplified call for “better” on the other – whatever “better” was supposed to mean. Of course, neither the example nor the conclusions are exhaustive, but I hope they help illustrate the point: more often than not, it is not the analytics part of artificial intelligence that is the tricky one. It is tricky indeed, but there are smart and experienced people around to deal with that type of tricky business. 
More often than not, the truly tricky part is to put AI into operations: to ask the right questions in the first place, to integrate the amazing opportunities in a consistent way into your organization, processes and systems, to manage a change that is more fundamental than simple automation, and to resist the reflex that bigger is always better! So much for today from “Mediocrity and Madness”, the podcast that usually deals with the ever-growing gap between corporate rhetoric and action. I dearly thank all the people who provided inspiration and input to these musings, especially in and around the programs I mentioned in the intro, most notably Gemma Garriga, Marcela Schrank Fialova, Christiane Konzelmann, Stephanie Schneider, Arnaud Michelet and the revered Prof. Jürgen Schmidhuber! Thank you for listening … and I hope to have you back soon!  

Commerce Corner - Interview Podcast mit Digitalmachern großer Marken!
Commerce Corner #30 mit Kai Schmidhuber (Gründer, Unternehmer + Top Executive)


Play Episode Listen Later Aug 9, 2019 44:12


Data science, the cool brother of business intelligence? The guest of Commerce Corner episode #30 is Kai Schmidhuber. The 37-year-old is a serial founder and entrepreneur with leadership experience at multinational companies such as Henkel, DHL and Fraport AG. He is currently CDO of L’Oréal Germany. In his previous roles he drove digital transformation with a focus on e-commerce and data science – so there were many topics podcast host Michael could have discussed with Kai. In their conversation, however, the two deliberately focus on the topic that currently dominates the trade press, keynotes and LinkedIn feeds: data science. What is the difference between data science and business intelligence? What does a data scientist do? What skills does a data scientist need, and how do I find one as an employer? Where and how has data science arrived in companies? What are other countries ahead of us in? Answers to these questions and more about Kai Schmidhuber in the new episode of Commerce Corner. Enjoy!

Azeem Azhar's Exponential View
AI’s Near Future


Play Episode Listen Later Jun 12, 2019 30:04


Jürgen Schmidhuber is a recognized pioneer in the field of deep neural networks. His techniques form the basis of the modern AI systems used by billions of people daily on services like Google, Facebook, and the Apple iPhone. Jürgen joins Azeem to discuss the next thirty years of artificial intelligence.

Artificial Intelligence in Industry with Daniel Faggella
How Machines and Robots Learn - the Progression of AI


Play Episode Listen Later Apr 17, 2019 26:01


This week, we speak with arguably one of the best-known folks in the domain of neural networks: Jurgen Schmidhuber. He's working on a lot of different applications now in heavy industry, self-driving cars, and other spaces. We talk to him about the future of manufacturing and more broadly, how machines and robots learn. Schmidhuber uses the analogy of a baby learning about the world around it. He has a lot of interesting perspectives on how the general progression of making machines more intelligent will affect other industries outside of where AI is arguably best known today: consumer tech and advertising. If you're in the manufacturing space, this will be an interesting interview to tune into. If you're just interested in what the next phase in AI might be like, I think Schmidhuber actually frames it pretty succinctly.

WIRKSTOFF.A
Der Weltglückstag: VISION.A 2019


Play Episode Listen Later Mar 29, 2019 15:57


Digital lateral thinkers, professors, start-ups: over two days, VISION.A – the digital conference of APOTHEKE ADHOC – brought numerous representatives of the pharmaceutical and pharmacy sector together with digitalization experts. At the Kühlhaus in Berlin, visions of the future were presented, exciting topics such as big data and artificial intelligence were discussed, and innovative ideas from the industry were showcased. New this year were the Gallery of Inspiration and the Start-up Audition. As is tradition, the VISION.A Awards were presented and celebrated at the Party of the Visionaries. Editorial team: APOTHEKE ADHOC. Music: Bojan Assenov & Ellen-Jane Austin

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen - TWiML Talk #240


Play Episode Listen Later Mar 18, 2019 33:32


Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University. Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper The unreasonable effectiveness of the forget gate, in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks. Jos eventually determines that leaving only the forget-gate results in an unreasonably effective network, and we discuss why. Jos also gives us some great LSTM related resources, including references to Jurgen Schmidhuber, whose research group invented the LSTM, and who I spoke to back in Talk #44. Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off. The complete show notes for this episode can be found at https://twimlai.com/talk/240.

Take The Lead
Breaking Down Artificial Intelligence with Jürgen Schmidhuber


Play Episode Listen Later Mar 4, 2019 55:54


There are so many organizations that want to be innovative now, but they are worried about jobs being replaced by artificial intelligence or other factors. They worry about where to put people, but then how are they going to stay relevant without implementing AI, software, or the next big thing? Jürgen Schmidhuber, the Father of Artificial Intelligence, breaks down artificial intelligence and talks about its impact on the workplace as he addresses the question: should we really worry about humans being replaced by machines? Love the show? Subscribe, rate, review, and share! Here’s How » Join the Take The Lead community today: DrDianeHamilton.com, Dr. Diane Hamilton on Facebook, Twitter, LinkedIn, YouTube and Instagram

Lex Fridman Podcast
Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs


Play Episode Listen Later Dec 23, 2018 80:06


Juergen Schmidhuber is the co-creator of long short-term memory networks (LSTMs) which are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed a lot of interesting, out-of-the-box ideas in artificial intelligence including a formal theory of creativity. Video version is available on YouTube. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube where you can watch the video versions of these conversations.

Hidden Layers
Jürgen Schmidhuber: Neural Networks


Play Episode Listen Later Nov 1, 2018 45:00


Jürgen Schmidhuber and Jeremy Fain discuss neural networks. 

FAZ Digitec
Der Gottvater der KI: Jürgen Schmidhuber - Folge 2


Play Episode Listen Later Jun 15, 2018 16:05


Jürgen Schmidhuber has been called the “godfather” of artificial intelligence by the New York Times. Who is this German researcher, to whom genuinely groundbreaking findings in AI can be traced? Alexander Armbruster and Carsten Knop talk about their impressions from encounters with Schmidhuber, about his theses – and why it is worth engaging with this not always easy researcher and his views.

On the Way to New Work - Der Podcast über neue Arbeit
#46 mit KI-Experte Jürgen Schmidhuber (LIVE)


Play Episode Listen Later Mar 11, 2018 50:00


“Learn to learn. Every profession you take up will change dramatically.” Jürgen Schmidhuber is a world-leading scientist in artificial intelligence. We had the great honor of recording a live podcast with him at XING’s New Work conference #NWX18. Even at 15, Jürgen Schmidhuber was captivated by the idea of creating an intelligence far smarter than himself. His original plan to study physics quickly gave way to an astonishing vision: “Build a physicist who is much better than yourself.” He chose to study mathematics and computer science at TU München, and as early as 1987 his diploma thesis dealt with artificial general intelligence and recursive self-improvement. Since then, he and his teams have developed, among other things, the deep neural networks found in every smartphone today. He wins prizes and recognition worldwide for his work. One reason for the great progress in his field, in Schmidhuber’s view, is that computers become ten times cheaper every five years. This trend has held since 1941, and there is no end in sight. The so-called Long Short-Term Memory (LSTM, developed since the 1990s) is one of the most visible products of his lab. LSTM is a recurrent neural network that keeps improving through training. Google now uses LSTM on more than two billion smartphones, among other things for speech recognition and translation. Apple uses LSTM on a billion iPhones. Since 2017, Facebook has been doing four billion translations per day with LSTM. In the not-so-distant future, Jürgen Schmidhuber sees “show and tell robotics”: people show a robot something by talking to it and demonstrating, and it then imitates them better and better (for example, sewing T-shirts or building smartphones). 
In a few decades, AIs will far surpass humans in many respects, and then everything will change. We also talked with Jürgen about autonomous driving, and he told us about its beginnings, which, like so many AI breakthroughs, started in Germany. In the 1980s, the roboticist Ernst Dickmanns already had the first self-driving Mercedes-Benz vans. Even then, these cars drove at 80 km/h without a driver, initially on empty roads. A van was necessary to carry the computers, which were still huge at the time. From 1994, Dickmanns’ autonomous S-Class drove at 180 km/h in traffic on the autobahn, with cameras only and without GPS – much as humans do. According to the FAZ, German companies still hold the most patents for autonomous driving. Current AI profits, Schmidhuber reports, are made above all by the big players on the Pacific Rim, such as Amazon, Alibaba, Facebook, Tencent and Google. But Schmidhuber also believes that no part of the world is better positioned than northern Europe when it comes to bringing the two worlds together in the near future: robotics and mechanical engineering on the one hand, AI and machine learning on the other. Together we venture an outlook on what AI means for work. Men need to be brave here, because, according to Jürgen Schmidhuber, it is often harder to replace a woman than a man. The reason: men often have isolated talents and tunnel vision and can do only one thing really well. That one thing can often be automated (e.g. playing chess). Many women, however, are general problem solvers. “I cannot predict which professions will be important in the future.” But he has given his two now-grown daughters a simple message: “Learn to learn. 
Every profession you take up will change dramatically.” An exciting outlook on what is still to come in AI rounds off a conversation that counts among Christoph’s and my absolute highlights. For moments like these, we make this podcast.

The World Transformed
Signs of the Times


Play Episode Listen Later Feb 20, 2018 30:00


Phil and Stephen ask whether current developments make the Singularity more plausible. The “Father of Artificial Intelligence” Says Singularity Is 30 Years Away At the World Government Summit in Dubai, I spoke with Jürgen Schmidhuber, who is the Co-Founder and Chief Scientist at AI company NNAISENSE, Director of the Swiss AI lab IDSIA, and heralded by some as the “father of artificial intelligence” to find out.   Intel just put a quantum computer on a silicon chip Dutch quantum computing company QuTech, in conjunction with chip-maker Intel, yesterday unveiled a programmable two-qubit quantum computer running on a silicon chip. The researchers used a special type of qubit (the quantum version of a classical computer’s bits) called spin qubits to run two different quantum algorithms on a silicon chip.   Japanese tour firm offers virtual reality holidays – with a first-class seat Fasten your seatbelts for a flight departing to Paris – and never leave the ground. That’s exactly what 12 passengers did at First Airlines in central Tokyo this week, where they relaxed in first and business-class seats and were served four-course dinners, before immersing themselves in 360-degree virtual reality (VR) tours of the City of Light’s sights. WT 406-719 Eternity Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License creativecommons.org/licenses/by/3.0/

The World Transformed
Becoming the Gods We Once Feared


Play Episode Listen Later Sep 13, 2017 31:00


What's next for humanity and artificial intelligence? Will we be the AI, and effectively cogs in a machine? Will we be the owners of the intelligence -- which won’t exactly be us? Or will we become something far different? What would happen if we upload our brains to computers? Robin Hanson discusses his book, The Age of Em.  Will our progeny be uploaded copies of human minds -- worker bees of a vast new economy? Is this the next stage in evolution or is it a nightmare slavery scenario? And if the latter, did our smartphones somehow lead us down this path? AI Will Colonize the Galaxy by the 2050s, According to the “Father of Deep Learning” Jürgen Schmidhuber asserts that, by 2050, there will be trillions of self-replicating robot factories on the asteroid belt. In a few million years, robots will naturally explore the galaxy out of curiosity, setting their own goals without much human interaction. In the Year 2100 We Will Become the Gods We Once Feared Michio Kaku says that, like Zeus, we will control the physical universe with our minds. Like Venus we will have perfect, ageless bodies.  Seems that he has caught on to the idea of Sexy Immortal Billionaires with Superpowers! More here. Interestingly, this scenario is not incompatible with Robin Hanson’s view that all the actual humans will be retired in the Em scenario. WT 347-656

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
LSTMs, Plus a Deep Learning History Lesson with Jürgen Schmidhuber - TWiML Talk #44


Play Episode Listen Later Aug 28, 2017 66:19


This week we have a very special interview to share with you! Those of you who’ve been receiving my newsletter for a while might remember that while in Switzerland last month, I had the pleasure of interviewing Jurgen Schmidhuber, in his lab IDSIA, which is the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, where he serves as Scientific Director. In addition to his role at IDSIA, Jurgen is also Co-Founder and Chief Scientist of NNaisense, a company that is using AI to build large-scale neural network solutions for “superhuman perception and intelligent automation.” Jurgen is an interesting, accomplished and in some circles controversial figure in the AI community and we covered a lot of very interesting ground in our discussion, so much so that I couldn't truly unpack it all until I had a chance to sit with it after the fact. We talked a bunch about his work on neural networks, especially LSTM’s, or Long Short-Term Memory networks, which are a key innovation behind many of the advances we’ve seen in deep learning and its application over the past few years. Along the way, Jurgen walks us through a deep learning history lesson that spans 50+ years. It was like walking back in time with the 3 eyed raven. I know you’re really going to enjoy this one, and by the way, this is definitely a nerd alert show! For the show notes, visit twimlai.com/talk/44

The World Transformed
Super Projects: Airships, Bugs that Eat Plastic, Robot Overlords


Play Episode Listen Later May 3, 2017 30:00


Phil and Stephen explore more super projects, including building airships and saving the planet with a caterpillar that eats plastic bags. Google’s Sergey Brin said to be working on a zeppelin-like airship "Google co-founder Sergey Brin has been known to enjoy ambitious flights of technological fancy, but this may be his flightiest: Bloomberg reports the enigmatic billionaire, who favors dark attire and once leapt out of an airplane wearing Google Glass to promote the launch of the wearable, is now working on a secret airship in a NASA hangar. Brin’s interest in zeppelins isn’t purely an anachronistic tick — the Hybrid Air Vehicles HAV 304 Airlander 10 hybrid airship, depicted in the image above, holds the record as the world’s largest aircraft and has some promising benefits for potential military operations, including a very low operational heat signature and radar profile." Scientists have discovered a worm that eats plastic bags and leaves behind antifreeze "The wax worm, a caterpillar typically used for fishing bait and known for damaging beehives by eating their wax comb, has now been observed munching on a different material: plastic bags." Jürgen Schmidhuber on the robot future?: ‘They will pay as much attention to us as we do to ants' "The German computer scientist says artificial intelligence will surpass humans’ in 2050, enabling robots to have fun, fall in love – and colonise the galaxy." WT 298-607

Zukunft, Trends und Strategien
ZTS003 Thema Trendforum 2014, 3D Drucker und über die Frage, wer zukünftig im Cockpit sitzt.


Play Episode Listen Later Dec 7, 2014


This time, Oliver Leisse talks with Axel Gloger, moderator of Trendforum 2014 in Munich. We talk about the Trendforum, the 3D printer hype, artificial intelligence, intuitive decisions, offline tendencies and the end of the average. Axel Gloger is author and editor of TrendScanner (http://www.trendscanner.biz), supervisory and advisory board member at knowledge-industry companies, and moderates the Trendforum. This year, attendees could meet there, among others, Edward Wrenbeck, the developer of Siri, Prof. Schmidhuber, the pioneer of artificial intelligence, Mark Curtis, designer at Fjord, and Andreas Meinheit, chief trend researcher at Audi.

Opalesque Radio
Radio Feature 35: Jurgen Schmidhuber in conversation with Sona Blessing


Play Episode Listen Later Dec 20, 2011 10:57


Opalesque Radio
Radio Feature 26: Christof Schmidhuber in conversation with Sona Blessing


Play Episode Listen Later Aug 31, 2011 12:36