Podcasts about turing

English mathematician and computer scientist

  • 1,159 PODCASTS
  • 1,864 EPISODES
  • 48m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Nov 28, 2023 LATEST
POPULARITY (chart, 2016–2023)


Latest podcast episodes about turing

The Jim Rutt Show
EP 211 Ben Goertzel on Generative AI vs. AGI

Nov 28, 2023 · 68:20


Jim talks with recurring guest Ben Goertzel about the ideas in his paper "Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs." They discuss the exponential acceleration of AI development, why LLMs by themselves won't lead to AGI, OpenAI's integrative system, skyhooking, why LLMs may be useful for achieving AGI, solving LLM hallucinations, why Google hasn't replicated GPT-4, LLM-tuning lore, what differentiates AGI from other forms of AI, conceptualizing general intelligence, Weaver's theory of open-ended intelligence, multiple intelligence, the Turing test & the Minsky prize, what LLMs aren't good at, the danger of defining AGI as whatever LLMs can't do, the derivative & imitative character of LLMs, banality, doing advanced math with GPT-4, why the human brain doesn't form arbitrary abstractions, the duality of heuristics & abstractions, adding recurrence to transformers, OpenCog Hyperon, using a weighted labeled metagraph, orienting toward self-reflection & self-rewriting, the challenge of scalability of infrastructure, acceleration on non-LLM projects, and much more. Episode Transcript JRS Currents 072: Ben Goertzel on Viable Paths to True AGI "Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs," by Ben Goertzel "OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond," by Ben Goertzel et al. Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. Born in Brazil to American parents, in 2020, after a long stretch living in Hong Kong, he relocated his primary base of operations to a rural island near Seattle. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which runs the annual Artificial General Intelligence conference. Dr. Goertzel's research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more. He also chairs the futurist nonprofit Humanity+, serves as Chief Scientist of AI firms Rejuve, Mindplex, Cogito and Jam Galaxy, all parts of the SingularityNET ecosystem, and serves as keyboardist and vocalist in the Jam Galaxy Band, the first-ever band led by a humanoid robot.

The Paris Station
Adventures of a Z-list Actor: Miranda's Victim, The Kiefer Sutherland "Trilogy" and Thanksgiving

Nov 23, 2023 · 33:23


A rag-tag Thanksgiving episode that literally has: a movie review; tales of a car crash from Alan "Cameron from Ferris Bueller" Ruck; an intoxicated trilogy of meeting Kiefer Sutherland three separate times (and me turning into my 14-year-old self); and getting through a shitty Thanksgiving.

The Infinite Monkey Cage
How I is AI?

Nov 22, 2023 · 42:57


Brian and Robin (the real ones) are joined by mathematician Prof Hannah Fry, computer scientist Dr Kate Devlin and comedian Rufus Hound to discuss the pros and cons of AI. Just how intelligent is the most intelligent AI? Will our phones soon be smarter than us – will we fail a Turing test while our phone passes it? Will we have AI therapists, doctors, lawyers, carers or even politicians? How will the increasing ubiquity of AI systems change our society and our relationships with each other? Could radio presenters of hit science/comedy shows soon be replaced with wittier, smarter AI versions that know more about particle physics... surely not! New episodes released Wednesdays. If you're in the UK, listen to the newest episodes of The Infinite Monkey Cage first on BBC Sounds: bbc.in/3K3JzyF Executive Producer: Alexandra Feachem.

ITmedia NEWS
Autonomous driving venture to build large-scale computing platform with 96 NVIDIA H100s

Nov 22, 2023 · 0:22


An autonomous driving venture is to build a large-scale computing platform with 96 NVIDIA H100s. Turing (Kashiwa, Chiba Prefecture), which develops and sells autonomous vehicles, announced on November 22 that it will build a dedicated computing platform for large language models (LLMs) and related workloads, aiming to begin operation in the first half of 2024.

Plus podcast – Maths on the Move
The universal machine: Putting Alan Turing on the stage

Nov 21, 2023 · 30:02


When you think of Alan Turing you might think of his work breaking the Enigma code in World War II. Or you might think of his work that helped build the foundations of computer science and mathematical logic. Or you might even think of his groundbreaking work in mathematical biology on morphogenesis, which helps explain animal patterns. One thing we hadn't thought of, until 2013 that is, was that he could be the emotional centrepiece of a musical. The universal machine is a musical about Alan Turing's life and work that was staged in London in 2013. As part of our series about putting maths on stage and screen, we revisit our 2013 interview with the writer and director David Byrne, actor Richard Delaney, who played Turing, and assistant director Natalie York, to find out how you turn such a story, and the maths in it, into a musical. We are very grateful to Dominic Brennan, who wrote the music for The universal machine, for giving us permission to use the track Building The Bombe Part Two from the show. For more information: You can read the original article accompanying this podcast and a review of The universal machine; You can find out more about the Enigma code and how it was cracked in Exploring the Enigma; You can read about morphogenesis in How the leopard got its spots; And there is more on Turing and his work in Alan Turing: ahead of his time and What computers can't do. These two articles also look at the halting problem, which is related to the Entscheidungsproblem mentioned in the podcast.

Future of Coding
Propositions as Types by Philip Wadler

Nov 19, 2023 · 124:35


The subject of this episode's paper — Propositions as Types by Philip Wadler — is one of those grand ideas that makes you want to go stargazing. To stare out into space and just disassociate from your body and become one with the heavens. Everything — life, space, time, existence — all of it is a joke! A cosmic ribbing delivered by the laws of the universe or some higher power or, perhaps, higher order. Humanity waited two thousand years, from the time of the ancient Greeks through until the 1930s, for a means to answer questions of calculability, when three suddenly arrived all at once: General recursive functions by Gödel in 1934, with functions of sets of natural numbers. Lambda calculus by Alonzo Church in 1936, with anonymous single-variable functions. Turing machines by Alan Turing in 1937, with a process for evaluating symbols on a tape. Then it was discovered that these three models of computation were, in fact, perfectly equivalent. That any statement made in one could be made in the others. A striking coincidence, sure, but not without precedent. But then it was quietly determined (in 1934, again in 1969, and finally published in 1980) that computation itself is in a direct correspondence with logic. That every proposition in a given logic corresponds with a type in a given programming language, every proof corresponds with a program, and the simplification of the proof corresponds with the evaluation of the program. The implications boggle the mind. How could this be so? Well, how could it be any other way? Why did it take so long to discover? What other discoveries like this are perched on the precipice of revelation? Philip Wadler is here to walk us through this bit of history, suggest answers to some of these questions, and point us in a direction to search for more. And we are here, dear listener, to level with you that a lot of this stuff is miserably hard to approach, presented with the symbols and language of formal logic that is so often inscrutable to outsiders. By walking you through Wadler's paper (and the much more approachable Strange Loop talk), and tying it in with the cultural context of modern functional programming, we hope you'll gain an appreciation for this remarkable, divine pun that sits beneath all of computation. Links => patreon.com/futureofcoding — but only if you back the Visual Programming tier!! I'm warning you! Wadler's Strange Loop talk Propositions as Types Cocoon is good. It's not, like, Inside or Limbo good, but it's good. Actually, just play Inside. Do that ASAP. Hollow Knight, also extremely good. Can't wait for Silksong. But seriously, if you're reading this and haven't played Inside, just skip this episode of the podcast and go play Inside. It's like 3 hours long and it's, like, transformatively great. Chris Martens has done some cool work (eg) bringing together linear logic and games. Meh: Gödel, Escher, Bach by Douglas Hofstadter Yeh: Infinity and the Mind by Rudy Rucker Heh: To Mock a Mockingbird by Raymond Smullyan. The hierarchy of automata Games: Agency as Art The Incredible Proof Machine is what some would call a "visual programming language" because proofs are programs. But it's actually really cool and fun to play with. Approach it like a puzzle game, and give it 10 minutes or so to get its hooks into you. "Stop Doing Logic" is part of the Stop Doing Math meme. Unrelated: Ivan's song Don't Do Math.
Bidirectional Type Checking, a talk by David Christiansen List Out of Lambda, a blog post by Steve Losh Nobody noticed that these links were silly last time, so this time I'm drawing more attention to it: Ivan: Mastodon • Email Jimmy: Mastodon • Twitter This link is legit: DM us in the FoC Slack https://futureofcoding.org/episodes/068 See omnystudio.com/listener for privacy information.
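
To see the correspondence in miniature, here is a small sketch in Python type hints (our own illustration, not from the episode; the function names are ours): conjunction becomes a pair type, implication becomes a function type, and writing a well-typed program is writing a proof of the proposition its type spells out.

from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: (A and B) implies A. Proof: take the first component.
def and_elim_left(pair: Tuple[A, B]) -> A:
    return pair[0]

# Proposition: if (A implies B) and (B implies C), then (A implies C).
# Proof: compose the two functions.
def implies_trans(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

Evaluating implies_trans(f, g)(x) step by step mirrors simplifying the corresponding proof, which is the third leg of the correspondence described above.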

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

Join us for a fascinating episode as we challenge your ability to discern between ChatGPT and human responses. Test your skills in distinguishing AI-generated content from human-authored communication as we delve into the exciting world of Turing tests and AI-human interaction. Don't miss this thought-provoking discussion on the evolving boundary between ChatGPT and human conversation! Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai

Les clefs d'une vie
Les clefs d'une vie - Benoit Solès: "La machine de Turing is performed in 15 and studied in schools."

Nov 15, 2023


Jacques Pessis welcomes Benoit Solès. He was a singer and appeared in sitcoms before winning acclaim in the theatre with "La machine de Turing". His new play, "La maison du [...]

The Mushroom Hour Podcast
Ep. 165: Underground Fungi in Patagonia - Pedomorphosis & Rethinking Evolution (feat. Dr. Francisco Kuhar)

Nov 14, 2023 · 80:07


Today on Mushroom Hour we have the pleasure of interviewing Dr. Francisco Kuhar. Dr. Kuhar is a mycologist specializing in the fungal diversity of gasteroid and ectomycorrhizal fungi and biotechnological applications of fungal enzymes. He has a special interest in the evolutionary biology of sequestrate forms of fungi. Dr. Kuhar is an associate researcher at CONICET in the Instituto Multidisciplinario de Biologia Vegetal (IMBIV - UNC), curator of Fungi at the CORD Herbarium and one of the leaders of Innomy Labs, helping to pioneer a new fungi-based product platform. TOPICS: Freud, Linguistics and Life Sciences; Hongos in Patagonia; See the Future in a Spore; Hypogeous & Sequestrate Fungi; Pedomorphosis; Mutations Happening too Fast in the Evolutionary Record; Are We Too Obsessed with Adaptation in Evolutionary Biology?; The Story of Rhizopogon and Suillus; Alan Turing Equations Predicting Biological Forms; Approaching Scientific Questions with an Open Mind; Burning Questions on Underground Fungi; Matching Genetics to Traits in Fungi; Innomy Labs; Fungal Bioprospecting. LINKS: Francisco Kuhar IG: https://www.instagram.com/franfungi · Innomy Labs: http://innomylabs.com/index.html · Hongos De Argentina: https://hongos.ar/ · "Ontogeny and Phylogeny": https://www.hup.harvard.edu/catalog.php?isbn=9780674639416 · Turing pattern: https://en.wikipedia.org/wiki/Turing_pattern · Geastrum genus: https://en.wikipedia.org/wiki/Geastrum

El gato de Turing
167 – The year of Linux on the Mac

Nov 11, 2023 · 54:59


Today we have a new episode of El gato de Turing, shorter than usual and with few news items. We'll talk about social networks, constructive debates, and how the essence of public debate is being lost. We'll also tell you how and why you might run Linux on your Mac laptop, and how the new Euro 7 rules are affecting the European market. And we'll introduce Renault's new electric-vehicle division. News: Our new flagship distro: Fedora Asahi Remix; Renault officially presents Ampere, its electric-vehicle and software division; Europe confirms its step back with the relaxation of emissions rules. Episode music: Opening: Safe and Warm in Hunter's Arms - Roller Genoa; Closing: Inspiring Course Of Life - Alex Che. You can find us on Mastodon and support us by subscribing to the podcast on Podhero or becoming a fan on iVoox. If you want a free month of iVoox Premium, click here.

Tech Café
World of Warcraft: 20 years already

Nov 7, 2023 · 72:36


Infomaniak shares Tech Café's values: ethics, ecology and respect for privacy. Discover our partner's services at Infomaniak.com. Beyond the 3 new expansions announced for World of Warcraft, we have plenty of generative artificial intelligence news to relay in this episode: an executive order from the American president lets us cover several angles of AI regulation; we also have news on updates to services you know well, plus some information on YouTube and the weight it carries on the web. ❤️ Patreon

Downsizing
Downsizing Episode 143: Small Town Mentality

Nov 7, 2023 · 30:18


Seriously, I think most of the people I email wouldn't pass a Turing test. Podcast art by Joey Rizk.

Innovation and Leadership
AI Talent Recruitment Behind Disney, Pepsi, & More | Jonathan Siddharth, CEO of Turing

Nov 7, 2023 · 42:25


Join host Jess Larsen in an exclusive interview with Jonathan Siddharth, the visionary CEO and co-founder of Turing. Discover how they recently secured $140 million in funding at a remarkable $4 billion valuation. Explore their AI-powered approach to sourcing developers that's revolutionizing the tech industry. Learn more about your ad choices. Visit megaphone.fm/adchoices

Scientificast
Magnetizing tightrope-walking zebrafish

Nov 6, 2023 · 61:27


First episode of November. Romina tells us about the mathematical world's passion for zebrafish: for a long time it was thought that the formation of its (horizontal) stripes could be described by the model Alan Turing laid out in 1952, but discoveries in recent years show instead that rather particular cell-interaction mechanisms are at work underneath. In our external segment, Luca interviews Giuseppe Del Vecchio, a postdoc at the Laboratoire de Physique Théorique et Modèles Statistiques of Université Paris-Saclay, who talks about non-equilibrium physics and quantum Newton's cradles. Andrea, meanwhile, tells us about an interesting new result on superconductors: it has been observed that when the magnetic ordering temperature lies below the superconducting transition temperature, extremely complex physical phenomena occur, characterized by exotic spatial flux configurations including vortex clusters, chains, giant vortices and droplets of vortex liquid. To learn more: Studies of Turing pattern formation in zebrafish skin https://royalsocietypublishing.org/doi/10.1098/rsta.2020.0274 Intertype superconductivity in ferromagnetic superconductors https://www.nature.com/articles/s42005-023-01395-7#Abs1
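
For the curious, the kind of model mentioned above can be sketched in a few lines of Python. Below is a minimal Gray-Scott reaction-diffusion simulation, a standard member of the Turing-pattern family; the parameter values are illustrative assumptions, not taken from the episode or the paper:

import numpy as np

def laplacian(Z):
    # Discrete Laplacian with periodic boundaries.
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    # Two chemicals: U is fed in at rate F, V converts U into more V
    # (the U*V*V term) and decays at rate F + k. Unequal diffusion rates
    # destabilize the uniform state, as in Turing's 1952 mechanism.
    U = np.ones((n, n))
    V = np.zeros((n, n))
    m = n // 2
    U[m-5:m+5, m-5:m+5] = 0.50  # seed a small perturbation in the middle
    V[m-5:m+5, m-5:m+5] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1.0 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V  # plot with matplotlib's imshow to see stripes or spots

Nudging F and k moves the system between stripes, spots and labyrinths, which is why small parameter changes can yield very different patterns; the recent zebrafish results discussed in the episode show that real stripes need cell-to-cell interactions beyond this purely chemical picture.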

The Leader | Evening Standard daily
Musk at AI Safety Summit as Biden regulates tech

Nov 1, 2023 · 10:54


Elon Musk jetted into the UK to join US vice president Kamala Harris for an international conference focusing on the threats and opportunities of artificial intelligence. The AI Safety Summit, at the Second World War top-secret code-breaking HQ at Bletchley Park in Buckinghamshire, features tech moguls and politicians representing countries including Germany, Japan and China. In this episode of the Standard podcast, we'll look at the significance of the Bletchley Park conference, future legislation - and how close we are to passing the Turing 'intelligence' test. We're joined by Dr Jeni Tennison, executive director of Connected by Data, who's a specialist in data and AI governance. Hosted on Acast. See acast.com/privacy for more information.

Outgrow's Marketer of the Month
Snippet: Phil Walsh, the Chief Marketing Officer at Turing, Charts the New Marketing Journey at Turing!

Oct 28, 2023 · 1:28


With a background in both traditional tech services and AI startups, he sees Turing as the perfect blend of the two. His mission? To unify the marketing team and steer them towards a cohesive strategy that transforms Turing into a nurturing, brand-building powerhouse. Join Phil on the journey to shape Turing's marketing future. Watch the full episode here.

Colunistas Eldorado Estadão
Artificial Intelligence on the Radio Waves: A Turing Test on AI

Oct 27, 2023 · 10:42


Marcelo Finger, one of the leading names in AI in the country, covers the topic and its almost daily developments every Friday at 8 a.m. on Jornal Eldorado. See omnystudio.com/listener for privacy information.

My 904 News
A bit of St. Augustine nostalgia turing into a car wash

Oct 23, 2023 · 54:02


A bit of St. Augustine nostalgia turing into a car wash

Theories of Everything with Curt Jaimungal
Joscha Bach Λ Ben Goertzel: Conscious Ai, LLMs, AGI

Oct 17, 2023 · 124:04


YouTube Link: https://www.youtube.com/watch?v=xw7omaQ8SgA Joscha Bach meets with Ben Goertzel to discuss cognitive architectures, AGI, and conscious computers in another theolocution on TOE. - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast... - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b9... - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt... - TOE Merch: https://tinyurl.com/TOEmerch LINKS MENTIONED: - OpenCog (Ben's AI company): https://opencog.org - SingularityNET (Ben's decentralized AI company): https://singularitynet.io - Podcast w/ Joscha Bach on TOE: https://youtu.be/3MNBxfrmfmI - Podcast w/ Ben Goertzel on TOE: https://youtu.be/27zHyw_oHSI - Podcast w/ Michael Levin and Joscha on TOE: https://youtu.be/kgMFnfB5E_A - Podcast w/ John Vervaeke and Joscha on TOE: https://youtu.be/rK7ux_JhHM4 - Podcast w/ Donald Hoffman and Joscha on TOE: https://youtu.be/bhSlYfVtgww TIMESTAMPS: - 00:00:00 Introduction - 00:02:23 Computation vs Awareness - 00:06:11 The paradox of language and self-contradiction - 00:10:05 The metaphysical categories of Charles Peirce - 00:13:00 Zen Buddhism's category of zero - 00:14:18 Carl Jung's interpretation of four - 00:21:22 Language as "representation" - 00:28:48 Computational reality vs AGI - 00:33:06 Consciousness in particles - 00:44:18 Anesthesia and consciousness: Joscha's personal perspective - 00:54:36 Levels of consciousness (panpsychism vs functionalism) - 00:56:23 Deep neural nets & LLMs as steps backward from AGI? - 01:05:04 Emergent properties of LLMs - 01:12:26 Turing-completeness and its implications - 01:15:08 OpenAI's bold claims challenged - 01:24:24 Future of AGI - 01:31:58 Intelligent species after human extinction - 01:36:33 Emergence of a cosmic mind - 01:43:56 The timeline to AGI development - 01:52:16 The physics of immortality - 01:54:00 Critique of Integrated Information Theory (pseudoscience?) Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.

Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As "stealth" foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group given they have no publicly released models to try out. However, ever since their $20m Series A last year their goal has been to "develop generally capable AI agents with human-like intelligence in order to solve problems in the real world".

From RL to Reasoning LLMs

Along with their Series A, they announced Avalon, "A Benchmark for RL Generalization Using Procedurally Generated Worlds". Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft to enable fast RL benchmarking and a clear reward with adjustable game difficulty.

After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors and climbing, but couldn't move on to higher level tasks. A pure RL world also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the "models for reasoning" path:

"The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing."

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans have developed as the scientific method:

"We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them."

The Full Stack Model Lab

One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a "full stack" approach:
* Models. Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools from coding agents for fixing type checking and linting errors, to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces — IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the "bare metal phase" of agent development, and they want to take a holistic approach to building the "operating system for agents". We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps
* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round

Show Notes
* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned:
* Momentum Contrast
* SimCLR
* Chelsea Finn - SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]

Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side.
So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you're eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. 
[00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. [00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? 
Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So worked in language, SimClear came out, I think MoCo had come out, Momentum Contrast had come out earlier in 2019, SimClear came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text and suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became General Intelligence, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. 
So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right. The structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. That are things that we lump under the bucket of reasoning and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. Carbs, our hyperparameter optimizer came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics. He's a plasma physicist to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them shaped blob inside imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improve the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agent's project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents then we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. 
And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually big, maybe 50% plus of the work is figuring out even if you do have models that reason well, like the models are still stochastic. The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? So one way I'll put this is people estimate that Llama2 maybe took about three to $4 million of compute, but probably 20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is close to actually good, as good as human labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. 
I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft and then OpenAI is like the gym thing and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure that out that you needed to just like build your own thing? Was it kind of like your engineering team was like, Hey, this is too slow. Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? Like RL agents at that time and environments at that time. What we heard from other RL researchers was the like biggest thing keeping holding the field back is lack of benchmarks that let us explore things like planning and curiosity and things like that and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out in a situation where, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? That's a lot of what we had seen is that like these very handcrafted rewards. And so Avalon has like a single reward it's across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non RL specialists, what is a curriculum in your terminology? [00:21:46]Kanjun: So a curriculum in this particular case is basically the environment Avalon lets us generate simpler environments and harder environments for a given tasks. What's interesting is that the simpler environments, what you'd expect is the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. 
And that's actually an interesting general intuition to have about training these things as like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford was I think working on SACAN at the time where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. [00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it? 
Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these, like spend more computer inference time, context hack, retrieve augmented generation, et cetera, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you kind of like trying to get more signal out of the channel. We don't like to think about it that way. It's what the default approach is, is like trying to get more signal out of this noising channel. But the issue with agents is as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. Like it's not as bad as self-driving, like in self-driving, you know, you're like hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress and one thing we learned by using these things internally is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far and we need to make the models better. We also need to make the kind of like interface to the user better. And also a lot of the like critiquing. I hope what we can do is help people who are building agents actually like be able to deploy them. I think, you know, that's the gap that we see a lot of today is everyone who's trying to build agents to get to the point where it's robust enough to be deployable. It just, it's like an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Embu is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around like debugging, around abstractions or techniques after the model generation happens. Like after the language model generates the text and like interfaces for the user and the underlying model itself, like models talking to each other, maybe some set of those things kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can like start making an agent, deploy it, it's reliable, like very quickly. And there's a similar analog to software engineering, like in the early days, in the seventies and the sixties, like to program a computer, like you have to go all the way down to the registers and write things and eventually we had assembly. That was like an improvement. But then we wrote programming languages with these higher levels of abstraction and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here where we're like in the like bare metal phase of agent building. 
And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz, and we were like, there's not a lot of reasoning data out there, and can the models really understand? And his take was: look, with enough compute, you're not that complicated as a human. The model can eventually figure out why certain decisions are made. What's been your experience? As you think about reasoning data, do you have to do a lot of manual work, or is there a way to prompt models to extract the reasoning from actions that they [00:29:03]Swyx: see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. A way to think about it is: as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3,000 years ago. An example of a reasoning strategy is noticing you're confused. When I notice I'm confused, I should ask: huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is a reasoning strategy that was developed in the 1600s, with the advent of science. So that's an example of a reasoning strategy. There are tons of them that we employ all the time, lots of heuristics that help us be better at reasoning, and we didn't always have them. And because they're invented, we can generate data that's much more specific to them. So internally, yeah, we have a lot of thoughts on what reasoning is, and we generate a lot of data specific to those strategies. We're not just like, oh, it'll figure out reasoning from this black box, or it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun, and people are like, man, this model is crap, what are you talking about? The sun revolves around the earth. How do you see the future? If the models are actually good enough, but we don't believe them, how do we make the two live together? Say you use Imbue as a scientist to do a lot of your research, and Imbue tells you, hey, I think this is a serious path you should go down, and you're like, no, that sounds impossible. How is that trust going to be built? And what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really, there are two answers to this. One element of it is that, as a person, I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is, okay, how do you do that? And that's what some of our debugging tools are for; they're not necessarily just for debugging, they're also for interfacing with and interacting with the model. So if I go back in this reasoning trace and change a bunch of things, what's going to happen? What does it conclude instead? That kind of helps me understand what its assumptions are. And we think of these things as tools. And so it's really about, as a user, how do I use this tool effectively?
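That "go back in the reasoning trace and change a bunch of things" workflow suggests treating the trace itself as data the user can rewind, edit, and re-run. A toy sketch of such a forkable trace follows; all names are hypothetical, not Imbue's internal representation:

```python
# Toy sketch of a forkable reasoning trace: the user can rewind an agent,
# edit an earlier step, and re-run from there. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class Step:
    thought: str   # e.g. "the tests import a helper that doesn't exist"
    action: str    # e.g. "search the repo for the helper's name"

@dataclass
class Trace:
    steps: list[Step] = field(default_factory=list)

    def fork(self, at: int, edited: Step) -> "Trace":
        # Keep history before `at` and swap in the edited step; the agent
        # then continues from this perturbed history instead of the original.
        return Trace(steps=self.steps[:at] + [edited])

def run(trace: Trace, agent: Callable[[Trace], Step], n_steps: int) -> Trace:
    for _ in range(n_steps):
        trace.steps.append(agent(trace))  # agent sees the whole editable history
    return trace
```

Making the trace an explicit value, rather than hidden model state, is what lets a user ask "what does it conclude instead?" by forking at any step.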
I need to be willing to be convinced as well. It's like, how do I use this tool effectively, and what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process, and I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you use code just as a tool within Imbue for coding assistance, but I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities. I wonder if there's any research or findings that you have to share on the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is that code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. It says: this variable means this, and then it uses this variable, and then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. So that's one thing that's really nice about code: I see it almost as a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand the limitations of the agents. The code is really helpful for the reasoning itself. But also, code is a way for models to act: by generating code, a model can act on my computer. And when we talk about rekindling the dream of the personal computer, where I see computers going is that they will eventually become much more malleable things. Today I, as a user, have to know how to write software code in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. So one way we think about agents is as a kind of natural-language programming language: a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. What do you think about the different approaches people have, like text first, browser first, like MultiOn? What do you think the best interface will be? What is your thinking today? [00:33:59]Kanjun: In a lot of ways, chat as an interface, I think Linus Lee, who you had on this podcast, put it really well: chat as an interface is skeuomorphic. In the early days, when we made word processors on our computers, they had notepad lines, because that's how we understood these objects. Texting someone is something we understand, so texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state, and to be able to modify them and fork them, and all of these other things. And internally we think about: what are the right representations for that?
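A tiny example of what a "less leaky" abstraction could look like in this setting: hide the stochastic text generation behind an interface that validates output and retries, so callers see a typed result or a clean failure instead of raw sampling noise. A minimal sketch, again with a hypothetical `llm` callable:

```python
# Sketch of a "less leaky" interface over stochastic generation: callers get
# a parsed, validated result or an explicit error, never raw sampling noise.
# `llm` is a hypothetical text-generation callable.
import json

class GenerationFailed(Exception):
    pass

def typed_generate(llm, prompt: str, required_keys: set[str], retries: int = 3) -> dict:
    for _ in range(retries):
        raw = llm(prompt + "\nRespond with a single JSON object.")
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue  # sampling noise: retry instead of leaking it to the caller
        if isinstance(obj, dict) and required_keys <= obj.keys():
            return obj
    raise GenerationFailed(f"no valid JSON with keys {required_keys} after {retries} tries")
```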
Architecturally, what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because the abstractions are leaky today: this stochastic generation of text is a leaky abstraction. I cannot depend on it, and that means it's actually really hard to build on top of. But our experience and belief is that by building better abstractions and better tooling, we can actually make these things non-leaky, and then you can build whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned this is kind of like the Xerox PARC moment for AI, and we had a lot of stuff come out of PARC, like what-you-see-is-what-you-get editors and MVC and all this stuff. But we didn't have the iPhone at PARC; we didn't have all these higher things. What do you think it's reasonable to expect in this era of AI, call it five years or so? What are the things we'll build today, and what are the things that maybe we'll see in kind of a second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before; what we're seeing right now is basically a continuous wave. Let me zoom back a little earlier. People like the Xerox PARC analogy I give, but I think there are many different analogies. The transition from the analog to the digital computer is another analogy for where we are today. The analog computer Vannevar Bush built in the 1930s, I think, was a system of pulleys, and it could only calculate one kind of function, like an integral. And that was so magical at the time, because you actually did need to calculate integrals a bunch. But it had a bunch of issues: in analog systems, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer, like Turing's decidability, and Shannon showing, I think, that relay circuits can be mapped to Boolean operators, and a set of other theoretical breakthroughs, which essentially were abstractions. They were creating abstractions for these very lossy, analog circuits, and digital had this nice property of being error-correcting. So when I talk about less leaky abstractions, that's what I mean; that's what I'm pointing to a little bit. It's not going to look exactly the same way. And then the Xerox PARC piece: a lot of that is about how we get to computers that, as a person, I can actually use well, where the interface actually helps unlock so much more power. So the sets of things we're working on, the sets of abstractions and the interfaces, hopefully help us unlock a lot more power in these systems, hopefully not too far in the future. I could see a next version, maybe a little bit farther out: an agent protocol, a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know if it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now.
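As a thought experiment only, since Kanjun notes it's early, an agent-to-agent call might look like a typed request/response envelope, loosely analogous to an HTTP exchange. Every field below is invented for illustration; no existing protocol is implied:

```python
# Illustrative only: a made-up request/response envelope for agent-to-agent
# calls, loosely analogous to an HTTP exchange. No real protocol is implied.
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    capability: str                                   # e.g. "write-tests"
    payload: str                                      # the task, in natural language
    constraints: dict = field(default_factory=dict)   # budget, deadline, etc.

@dataclass
class AgentResponse:
    status: str         # "ok" | "refused" | "error"
    result: str = ""
    trace_id: str = ""  # handle for inspecting/debugging the run afterwards
```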
Part of why I think it's early is that the issue with agents is not quite like the internet, where you could make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent-to-agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain-of-thoughting, but any perspectives on MetaGPT, I think is the name of the paper? I don't know if you care about things at the level of individual papers coming out, but I did read that recently, and TL;DR, it beat GPT-4 on HumanEval by role-playing a software development agency: instead of having a single shot or a single role, you have multiple roles and have all of them criticize each other, as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction: okay, can I just plop in this multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent? One issue with this kind of prompting is that it's still not very reliable. There's one lens, which is: okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens; we take that lens often. And then there's another lens that's like: okay, but it's starting to get really messy what's in the prompt, and how do we deal with that messiness? So maybe you need cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would contrast your approach with OpenAI's, as OpenAI tends to just lean on, hey, we played StarCraft, or hey, we ran it on the SAT or the AP Bio test, and here are the results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is, and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually, and to evaluate these things you need to think about them in a slightly different way. But we also do use a lot of public benchmarks, for questions like: is the reasoning capability improving in this particular way? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the diamond axe or whatever, and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineering crowd, the people who are new to the scene. But for people like yourselves, you built Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]
And I really like the Voyager paper and I think it has a lot of really interesting ideas, which is like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to like learn the things we wanted. And so it's like not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment. But if you're doing our own sandbox to really trying to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things like Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away? Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long running tasks. And the blocker to that future, one of the blockers is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly like 32 kilobytes or something to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really was like happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my fields and a lot of that data is maybe hard to fine tune or on, or maybe hard to like put into pre-training. 
Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points. And ideally I'd want to store those somehow, or like use those to fine tune myself as a model somehow, or like have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for like user preferences and things like that. Like what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to InView is Adept. You know, a research lab with like some amount of product situation on the horizon, but not just yet, right? Why should people work for InView over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to like build something that enables other people to build agents and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of like talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of like, why us? I think like strong focus on reasoning, which we believe is the biggest blocker, on inspectability, which we believe is really important for user experience and also for the power and capability of these systems. Building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable. And then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people that becomes something that is a platform. Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. 
I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. So we build a lot of our own evals and every agent builder tells me, basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me because I would love to, like, figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]Swyx: more about it. [00:48:12]Alessio: So one is why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to like go explore things or kind of like focus on your career? How do you think about that from like an agent perspective? Where it's like, should you just stick to the task and try and solve it as in the guardrails as possible? Or like, should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and said, okay, here, like fork this and do a bunch of different things. I try a bunch of different things. Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent to like add a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money? 
How do you feel your options to be curious versus like goal oriented has changed as you raise more money and kind of like the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A $20 million in late 2021. And our entire philosophy at that time was, and still kind of is, is like, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. Great name, by the way. [00:50:28]Swyx: Thank you. [00:50:29]Kanjun: Is there a story behind that name? [00:50:30]Swyx: Yeah. [00:50:31]Kanjun: Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work. But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models. And for those experiment results to carry to larger ones. [00:50:56]Swyx: Which you also published a scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest. And from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this is might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger. [00
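On the cost-aware point: CARBS itself is published by Imbue, but the flavor of "cost-aware" search can be conveyed in toy form by scoring candidate configurations on validation loss and training cost together and keeping only the Pareto-efficient ones. The sketch below is illustrative only, not CARBS's actual algorithm:

```python
# Toy illustration of "cost-aware" hyperparameter search: evaluate candidates
# on validation loss *and* training cost, then keep the Pareto-efficient ones.
# This conveys the flavor of the idea, not CARBS's actual algorithm.
from dataclasses import dataclass

@dataclass
class RunResult:
    params: dict
    val_loss: float
    cost: float   # e.g. GPU-hours spent on the training run

def pareto_front(results: list[RunResult]) -> list[RunResult]:
    front = []
    for r in results:
        dominated = any(
            o.val_loss <= r.val_loss and o.cost <= r.cost
            and (o.val_loss < r.val_loss or o.cost < r.cost)
            for o in results
        )
        if not dominated:
            front.append(r)  # no other run is at least as good on both axes
    return front
```

Fitting a curve through the cheap end of such a front is one way small-model experiments can be extrapolated to larger ones, which is the spirit of the scaling-laws work mentioned above.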

Active Towns
Turing Pro On Urbanism w/ Tesho Akindele (video available)

Active Towns

Play Episode Listen Later Oct 13, 2023 63:25


In this episode, I reconnect with former pro soccer player Tesho Akindele for a conversation about his unlikely and unexpected journey from pro sports to urbanism and developing walkable, bike-friendly communities in Charlotte, NC. I first met Tesho at the annual Congress for the New Urbanism gathering in Charlotte and immediately knew I needed to have him on the Channel to chat about his advocacy efforts in the YIMBY and Legalize Housing movements. He also shares how the book Walkable City by Jeff Speck was instrumental in influencing his passion for urbanism. Be sure to pick up a copy of the brand new 10th Anniversary edition of Walkable City with 100 pages of bonus material; see links below.
Tesho describes his time in college at the Colorado School of Mines: "I didn't have a car, I had my 'Escalegs' as I liked to call them, I walked everywhere..."
Thank you so much for tuning in! If you enjoyed this episode, please share it with a friend and subscribe to the Podcast on your preferred listening platform, and don't forget to check out the Active Towns Channel for more content.
Helpful Links (note that some may include affiliate links to help me support the channel):
- Camp North End
- Orlando YIMBY group
- Craig Ustler "Mr. Downtown Orlando"
- CNU - Congress for the New Urbanism
- Walkable City book by Jeff Speck in the Active Towns Bookshop and on Amazon
- Episode 121 w/ Jeff Speck
- Strong Towns
- Reinventing the Front Porch Video Part 1
- Reinventing the Front Porch Video Part 2
- CNU Charlotte Playlist of videos
If you are a fan of the Active Towns Podcast, please consider supporting the effort as an Active Towns Ambassador in the following ways:
1. Join our Patreon community. Contributions start at just $1 per month. (Note: Patron benefits include early, ad-free access to content and a 15% discount in the Active Towns Merch Store)
2. If you enjoyed this episode, you can also "leave a tip" through "Buy Me a Coffee"
3. Pick up some Active Towns #StreetsAreForPeople Merch at my store
Credits:
- Video and audio production by John Simmerman
- Music via Epidemic Sound
Resources used during the production of this video:
- My recording platform is Ecamm Live
- Editing software Adobe Creative Cloud Suite
- Equipment: Contact me for a complete list
For more information about the Active Towns effort or to follow along, please visit our links below:
- Active Towns Website
- Active Towns on Twitter
- Periodic e-Newsletter
Background:
Hi Everyone! My name is John Simmerman, and I'm a health promotion and public health professional with over 30 years of experience. Over the years, my area of concentration has evolved into a specialization in how the built environment influences human behavior related to active living and especially active mobility.
Since 2010, I've been exploring, documenting, and profiling established, emerging, and aspiring Active Towns wherever they might be while striving to produce high-quality multimedia content to help inspire the creation of more safe and inviting environments that promote a "Culture of Activity" for "All Ages & Abilities."
The Active Towns Channel features my original video content and reflections, including a selection of podcast episodes and short films profiling the positive and inspiring efforts happening around the world as I am able to experience and document them. Thanks once again for tuning in! I hope you find this content helpful and insightful.
Creative Commons License: Attributions, Non-Commercial, No Derivatives, 2023 ★ Support this podcast on Patreon ★

The Empowered Business Woman
Episode 29: The Five Things Really Stopping You From Getting What You Want

The Empowered Business Woman

Play Episode Listen Later Oct 13, 2023 33:00


Welcome to Episode 29 of the SSB podcast! Feel like you're stuck somewhere along a growth path but unsure just what is really holding you back? In this episode Nicola Wilkes, founder of SSB and the host of the Seriously Stylish Business Podcast, shares five things she believes are holding you back from getting what you want right now... What you can expect to hear in this episode of the #SSBPodcast:
- How boredom can literally block us from accessing the very best that life and career have to offer us
- Why fear of taking that next step is your worst enemy
- Turning the tables on the confidence factor
- Why fixating on the 'how' is stopping you from making the progress you need and want
- And finally, why everyone else is your biggest obstacle to making that bold next step!
Learn how to conquer these top five blockers to taking the next steps along your very own success path and how you can make changes - even today - to create the results you want in your personal life, professional space or your business. Want to know more about SSB? Join us for more news at seriouslystylishbusiness for weekly inspiration, personal growth tips and our exclusive female membership: 'AMBITION'
Follow SSB on Instagram
Follow SSB on Pinterest
Follow SSB on LinkedIn
Share your thoughts, views and best takeaways from the podcast #SSBPodcast

College of Optometrists
WebinarXtra: Artificial intelligence in optometry

College of Optometrists

Play Episode Listen Later Oct 12, 2023 44:05


In this WebinarXtra podcast, College clinical adviser Daniel Hardiman-McCartney FCOptom MBE talks to Dr Jeffry Hogg following his webinar on artificial intelligence in optometry. Jeffry is an ophthalmology registrar and NIHR Doctoral Fellow working between Newcastle University and NHS trusts in Newcastle, Birmingham and London. Jeffry answers all those questions there wasn't time to cover during the live webinar, which provided a practical exploration of the near-term realities of AI. The subjects of the questions include the Turing test, macular age versus retinal age, and the clinical governance of AI and optometric practice. For more information about AI regulation please visit the NHS AI and Digital Regulations Service. You can also read our recent Acuity article on AI and retinal vasculometry and earn CPD. --- Send in a voice message: https://podcasters.spotify.com/pod/show/collegeofoptometrists/message

Bitcoin Italia Podcast
S05E35 - Fiat bellum: che guerra sia!

Bitcoin Italia Podcast

Play Episode Listen Later Oct 12, 2023 77:26


Rikki is forced to move his return to Italy forward by a few days because of the tensions south of Lebanon between Hezbollah and Israel, while a new war sets the Middle East ablaze. In this episode: the latest reports from Beirut, the difficulties Bitcoin clearly runs into when knowledge of the technology is lacking, and some reflections on how this war, like all wars, is only possible under the fiat standard, thanks to weapons bought on debt with fake money printed out of thin air. Also: the BitVM protocol, Bitcoin Virtual Machine, makes Bitcoin Turing complete; the Spartacus project makes the Wikileaks dossiers eternal and immutable; Fidelity turns out to be maximalist; and we all take part together in a European Union call for proposals. It's showtime!

Stephan Livera Podcast
What is BitVM? with Robin Linus and Super Testnet (SLP520)

Stephan Livera Podcast

Play Episode Listen Later Oct 12, 2023 54:34


BitVM is a new paradigm for Turing-complete bitcoin contracts. In this episode I speak with the creator, Robin Linus, and another developer working on the idea, Super Testnet. We discuss:
- What is BitVM?
- What are some of the trade-offs?
- What could BitVM enable?
- Setup time and interactivity
- Can it enable sidechains or covenants?
Links:
- Paper: bitvm.pdf
- X: @robin_linus
- X: @super_testnet
- Super Testnet's Tapleaf circuits
- Shinobi article on BitVM: The Big Deal With BitVM: Arbitrary Computation Now Possible on Bitcoin Without a Fork
- AJ Towns mailing list response on BitVM: BitVM: Compute Anything on Bitcoin
Sponsors: Swan.com (code LIVERA), CoinKite.com (code LIVERA), Mempool.space
Stephan Livera links: Follow me on X @stephanlivera, Subscribe to the podcast

il posto delle parole
Ottavio Fatica "Adelphi a Portici di Carta"

il posto delle parole

Play Episode Listen Later Oct 4, 2023 12:15


Ottavio Fatica"Adelphi a Portici di Carta"www.porticidicarta.itSabato 7 e Domenica 8 ottobre 203Portici di Carta, TorinoSi sa, a Torino la cultura è una passeggiata, più precisamente una camminata di ben 2 chilometri sotto i portici del centro che, in occasione della sedicesima edizione di Portici di Carta, si trasformeranno letteralmente in una delle librerie all'aperto più lunghe del mondo. Usando le parole di Vittoria Poggio, Assessore alla Cultura, Turismo e Commercio della Regione Piemonte,Ospite d'onore di questa sedicesima edizione sarà Adelphi che si inserirà in un fitto calendario di eventi: avremo la possibilità di dialogare con Benjamin Labatout con il suo ultimo romanzo Maniac, ricorderemo le opere di Milan Kundera, autore prolifico che ha segnato la letteratura del Novecento, attraverso la voce di Giorgio Pinotti, traduttore delle opere, e seguiremo il dialogo di Chiara Valerio con Roberto Colajanni che parleranno di libri, autori, editoria; termineremo, poi, il nostro viaggio nell'universo adelphiano con Ottavio Fatica e Walter Siti che disquisiranno di Guerra di Louis-Ferdinand Céline.Sabato 07 ottobre Ore 17:00Sala multimediale, Gallerie d'Italia Intesa Sanpaolo – piazza San Carlo 156Benjamin LabatutAutore di Maniac (Adelphi)Con Paolo GiordanoDopo il successo di Quando abbiamo smesso di capire il mondo, Labatut torna a raccontare la vita di uomini di scienza di cui conosciamo poco, se non il fatto che con le loro scoperte e le loro invenzioni hanno condizionato la storia dell'umanità. Questa volta tocca a John von Neumann e al MANIAC, il calcolatore universale basato sulla macchina di Turing che apre le porte a quella che sarà l'informatica.Sabato 7 ottobe Ore 18:30Sala multimediale, Gallerie d'Italia Intesa Sanpaolo – piazza San Carlo 156Omaggio a Milan KunderaLa dedica di Portici di Carta ad Adelphi e a Milan KunderaCon Giorgio PinottiDa pochi mesi ci ha lasciato uno degli scrittori più emblematici della seconda metà del Novecento, che è anche tra i più rappresentativi del ricchissimo catalogo Adelphi. Portici di Carta dedica l'edizione 2023 a Milan Kundera, raccontato dal suo traduttore Giorgio Pinotti, e alla casa editrice che lo pubblica.Sabato 7 ottobre Ore 19.00Sala multimediale, Gallerie d'ItaliaAdelphiEditore ospite di Portici di Carta 2023con Roberto Colajanni e Chiara ValerioDomenica 08 ottobre Ore 17:00Oratorio San Filippo Neri – via Maria Vittoria 5Omaggio a Louis-Ferdinand CélineAutore di Guerra (Adelphi)Con Ottavio Fatica e Walter SitiModera: Marco LupoLa pubblicazione del primo degli inediti dell'autore di Viaggio al termine della notte, rubati nel 1944 dalla sua abitazione e miracolosamente ricomparsi settant'anni dopo, è uno degli eventi destinati a segnare una stagione editoriale. Ne parlano il traduttore, Ottavio Fatica, e Walter Siti, uno degli scrittori italiani più riconoscenti all'opera di Céline.IL POSTO DELLE PAROLEascoltare fa pensarewww.ilpostodelleparole.itQuesto show fa parte del network Spreaker Prime. Se sei interessato a fare pubblicità in questo podcast, contattaci su https://www.spreaker.com/show/1487855/advertisement

Timeline (5.000 ans d'Histoire)
Xpresso / La machine de Turing - Benoît Solès

Timeline (5.000 ans d'Histoire)

Play Episode Listen Later Oct 3, 2023 22:47


La Machine de Turing at the Théâtre du Palais Royal until 23/12/2023, barring extensions. Turing built a thinking machine that would turn out to be the first computer. Forced into silence by the secret services, he was convicted for homosexuality before taking his own life by biting into a poisoned apple that strangely recalls a famous logo... "Have you ever held a secret, a big secret? No? Then you have no idea how hard it can be to keep it to yourself. Of all immaterial things, silence is one of the heaviest to carry. And as it happens, my life was full of secrets... Have you ever heard of Enigma? Of course not, how could you? Then now is the moment to pay close attention." Benoît Solès, author and "Turing" in the play, is our guest for Xpresso.

The Stephen Wolfram Podcast
History of Science & Technology Q&A (December 28, 2022)

The Stephen Wolfram Podcast

Play Episode Listen Later Sep 22, 2023 74:54


Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: Was the invention of computers inevitable? Will evolution always stumble upon universal computers, given enough resources? What are the implications for the laws of physics and reality? - I don't think computing technology could have possibly been conceived until after the Industrial Revolution. - Ideas alone don't govern how science evolves. It's a combination of factors, including technology, mode of production of society, etc. - The Sun's computation helps sustain us. - I like thinking about machine learning as a black box that gets to a human-comprehensible product, but the "reasoning" that enables it to get to that output is not really understood. Once we understand what's really going on in a machine learning model, we can be confident that its output is sound. - I started playing chess lately and I noticed that high-level and machine chess are a lot like proof of computational work and willingness to commit it. Do you have any thoughts on this? - I wonder how much power one would need in order to run a mechanical computer comparable to a modern CPU. - Historically speaking, do you think the modern AI systems are unique in terms of replacing human work, or just another step in automation? - I may change my email signature to "Written by ChatGPT. Please excuse any nonsense." - It's tempting to think general AI could emerge from some digital version of evolution. That seems to require digital entities competing for resources and a "will" to fight for survival. - Historically, how has written record keeping evolved? Will we ever revert back to oral records (spoken stories, songs, etc.)? - GPT-4 and GPT-5 are going to be amazing. - The question is whether the interviewer will care if the candidate is an AI. For some roles, it will not matter, and that number will increase. - Has ChatGPT passed the Turing test? Or can it pass the test soon? - I suspect the major deployment of AI in the short term will be phishing. For the time being, it can't replace regular employees at legitimate businesses because it can't be legally held culpable because it's not conscious. But for scammers, that's not an impediment.

Turing School Podcast
New Dev Series: Cohort Co-op

Turing School Podcast

Play Episode Listen Later Sep 21, 2023 14:31


This week, hosts Alaina and Jesse share strategies for leveraging your Turing cohort to prep for job interviews.  From practicing failure to joining hype rooms, tune in to gain some valuable insights. If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.  

Turing School Podcast
Engineer One

Turing School Podcast

Play Episode Listen Later Sep 20, 2023 47:06


Bailey and Jesse chat with Travis Haby, 1507 alum and Staff Engineer at Guild. They discuss his Turing story, his roots in education, his background in physics, the Blakement, his job as the first engineer at Guild, watching engineering culture grow at Guild, hiring at Guild, the climate crisis, and some advice. If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.

Theories of Everything with Curt Jaimungal
Edward Frenkel: Infinity, String Theory, Death, The Self

Theories of Everything with Curt Jaimungal

Play Episode Listen Later Sep 20, 2023 197:49


YouTube Link: https://www.youtube.com/watch?v=n_oPMcvHbAc Math professor Edward Frenkel discusses his work on string theory & the Langlands Program while reflecting on the profound themes of infinity, death, and trauma. - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs BOOKS MENTIONED: - Love and Math (Edward Frenkel): https://amzn.to/3ZiXyI1 - From Mathematics to Philosophy (Hao Wang): https://amzn.to/3ZkIcTD - The Lord of the Rings (J.R.R. Tolkien): https://amzn.to/46bZnsw - Works of Carl Jung (Carl Jung): https://amzn.to/44UKJF7 - Works of Marie-Louise von Franz (Marie-Louise von Franz): https://amzn.to/3sWWg9D - Bhagavad Gita: https://amzn.to/3sTfrB8 - The Undecidable (Martin Davis): https://amzn.to/48njW7u - The Emperor's New Mind (Roger Penrose): https://amzn.to/44Zabcw - The Trouble with Physics (Lee Smolin): https://amzn.to/3sXpmG6 - Lost in Math: How Beauty Leads Physics Astray (Sabine Hossenfelder): https://amzn.to/3Pna7xy - Not Even Wrong (Peter Woit): https://amzn.to/44XJlBx VIDEOS MENTIONED: - Debate w/ Sabine Λ Bernardo on Superdeterminism: https://youtu.be/kJmBmopxc1k - Podcast w/ Norman Wildberger TOE: https://youtu.be/l7LvgvunVCM - Podcast w/ Dror Bar Natan on TOE: https://youtu.be/rJz_Badd43c - Podcast w/ Lex Friedman and Frenkel: https://www.youtube.com/watch?v=Osh0-J3T2nY - Podcast w/ Richard Borcherds on TOE: https://youtu.be/U3pQWkE2KqM (Part 2) and https://youtu.be/xu15ZbxxnUQ (Part 1) - Podcast w/ Sabine Hossenfelder on TOE: https://youtu.be/walaNM7KiYA - Podcast w/ Bernardo Kastrup on TOE: https://youtu.be/lAB21FAXCDE - Podcast w/ Eric Weinstein on TOE: https://youtu.be/KElq_MLO1kw - Angela Collier on String Theory: https://www.youtube.com/watch?v=kya_LXa_y1E - Mike Wallace's Interview of Aldous Huxley (1958): https://www.youtube.com/watch?v=alasBxZsb40 WEBSITES MENTIONED: - Edward Frenkel's Official Site: https://edwardfrenkel.com - Edward Frenkel's Soundcloud (DJ Moonshine): https://soundcloud.com/moonstein - Twitter (Edward Frenkel): https://twitter.com/edfrenkel - YouTube (Edward Frenkel): https://www.youtube.com/edfrenkel - Instagram (Edward Frenkel): https://www.instagram.com/edfrenkel - Peter Woit's Blog: https://www.math.columbia.edu/~woit/wordpress TIMESTAMPS: - 00:00:00 Introduction - 00:03:40 The Langlands Program - 00:09:23 Love and Math: An ode to mathematics - 00:13:09 Art as a two-way street (reciprocal nature of artistic expression) - 00:18:28 The Weil conjectures - 00:24:53 Romantic side of math and the "Theory of Everything" as a process vs. a state - 00:30:39 Paradoxes in math & axioms - 00:33:57 Observer problem in mathematics - 00:39:55 The debate on philosophy's role in science - 00:51:44 "You can't get away from infinity." - 01:07:41 Are computers conscious? Can they "think"? 
Turing's quotation - 01:18:33 The limitations of computation and the unification of distinctions - 01:23:29 Blurring lines between truth & beauty (algebraic-geometric interplay) - 01:33:48 The terrifying question of self - 01:36:07 Childhood memories and personal growth (transformative power of pain) - 01:49:30 The struggle for excellence & reconnecting with the past - 02:17:26 Death is love exposed most bare - 02:19:10 Function fields in higher dimensional algebraic varieties - 02:25:12 Superdeterminism w/ Sabine Hossenfelder Λ Bernardo Kastrup - 02:37:08 The human aspect in scientific theories - 02:39:00 Even atheists reason backward from God to interpretations of quantum mechanics (Richard Hamming) - 02:43:02 The unfulfilled promise of string theory - 03:01:33 Credit and ethics in scientific fields (Eric Weinstein) Learn more about your ad choices. Visit megaphone.fm/adchoices

The Theory of Anything
Episode 65: Causality, Time, and Free Will

The Theory of Anything

Play Episode Listen Later Sep 18, 2023 118:43


What did David Deutsch get right and wrong in chapter 11, “Time: The First Quantum Concept,” from his first book, Fabric of Reality? Is the flow of time real or an illusion? What does it mean to have free will in a deterministic world? And what are the implications of Bruce's “Turing world within a Turing world” thought experiment? --- Send in a voice message: https://podcasters.spotify.com/pod/show/four-strands/message Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support

Turing School Podcast
Unlikely Teacher

Turing School Podcast

Play Episode Listen Later Sep 13, 2023 44:39


Jesse speaks with Abdul Redd, Backend Instructor at Turing, about how he found Turing, how he learned to code, code school instructional models, getting hired at Turing, instruction at Turing, and advice for potential students. If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.

Turing School Podcast
Ruby Roots and Kotlin Wings

Turing School Podcast

Play Episode Listen Later Sep 6, 2023 35:58


Jesse chats with Gavin Carew, 2207 BE Alum and Jack in the Box Software Engineer, about his Turing story, his tips for being a successful Turing student, his job search, coming up to speed on Kotlin, his current work, advice for current and prospective students, asking questions, and optimism for the future. A few classic Jim Weirich conference talks are What all Rubyists should know about Threads - https://youtu.be/4zbN29UkNQw?si=s9v1kqIg22K2v0iu and SOLID Principles in Ruby - https://youtu.be/1BVFlvRPZVM?si=nJ05gm2wI1kOSFHf If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.  

Warfare of Art & Law Podcast
AI Policy From the UK to the US with Institute of Art and Law's Emily Gould - A 2ND Saturday Conversation

Warfare of Art & Law Podcast

Play Episode Listen Later Sep 3, 2023 60:16


SHOW NOTES:
0:00 Alan Robertshaw
1:00 Emily Gould - overview of AI historical development
2:30 first phase - 1950s Alan Turing - machines do what they are told
3:10 second phase - machine learning: creating models using data and developing methods to make decisions/predictions based on that data
3:50 third phase - deep learning, usually using neural networks to mimic the human brain
4:50 GANs - part of third phase that involve generator and discriminator algorithms
5:55 Obvious' Portrait of Edmond de Belamy
6:40 Robbie Barrat's code used by Obvious
8:40 unpredictability in the deep learning phase
9:25 different tests applied to determine if a machine is intelligent
9:55 Turing test - machine is intelligent if you can't tell the difference between responses by a human and a machine
10:10 Lovelace test - machine is intelligent if you can't explain the machine's answer
11:20 'AlphaGo' algorithm
13:30 uses of AI
14:20 huge training data sets
15:50 major risks with AI include copyright
17:10 privacy and data protection
17:20 transparency - deep fakes
17:40 bias amplification
18:15 MIT researcher Joy Buolamwini's work with facial analysis software
19:45 UK's pro-innovation approach to AI
21:45 text and data mining (TDM) exception only for non-commercial use - proposal to expand to commercial use
24:25 Nov 2022: government decided not to expand TDM exception to commercial use
24:55 UK Pro-innovation Regulation of Technologies Review
26:45 A pro-innovation approach to AI regulation policy paper - no legislation in the short term, no move to a central regulatory body for AI
29:30 AI described in UK white paper as including autonomy and adaptivity
32:25 Global Summit on AI Safety
32:45 EU AI Act with risk-based approach - June 2023 signed off by Parliament; final conclusions expected late 2023; operational circa 2026
36:35 US - AI suits pending
37:00 Robbie Barrat
38:00 opt-in versus opt-out policy
39:20 Senate testimony regarding UK's AI advances
40:15 US Task Force on AI Policy proposed; Privacy Consumer Protection Framework
40:45 Getty v. Stability AI suits in US and UK
41:25 2024 elections and AI
44:00 Alan Robertshaw's case with Getty
47:05 Gould: AI voice scam
48:00 Robertshaw: AI uses
50:20 AI medical screening
53:00 consciousness
56:00 Artist Sofia Crespo's work with natural history
56:30 Lines and Bones by artist Iskra Velitchkova
56:50 Dawn Chorus by Alexandra Daisy Ginsberg
57:30 projection for how artists in the UK will address AI issues
Please share your comments and/or questions at stephanie@warfareofartandlaw.com. To hear more episodes, please visit the Warfare of Art and Law podcast's website. To view rewards for supporting the podcast, please visit Warfare's Patreon page. To leave questions or comments about this or other episodes of the podcast and/or for information about joining the 2ND Saturday discussion on art, culture and justice, please message me at stephanie@warfareofartandlaw.com. Thanks so much for listening! © Stephanie Drawdy [2023]

The Stephen Wolfram Podcast
Science & Technology Q&A for Kids (and others) [December 9, 2022]

The Stephen Wolfram Podcast

Play Episode Listen Later Sep 1, 2023 82:09


Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: Are there nuclear reactions going on inside our bodies? - Do you think we'll ever be able to replace damaged brain parts with computational parts as another form of prosthesis?   --  What ethical implications will become relevant when we combine machine learning and brain sensors/effectors? - Suppose a rule creates a memory in our brain. Then it could be an irreducible problem to make a true brain interface for any individual that could interpret a memory or preexisting concept. Truly a fascinating subject. Assuming we are able to completely understand the human brain, one could probably make a complete copy - basically, we could "fork" one brain into multiple copies! - Do you think neurons do their signal processing based mostly on discrete states or the temporal difference between states? - Even though all brains are different, don't they all "implement" the same underlying ideas? Doesn't this point to some Platonic realm of reality? - One of the issues with being able to read and decode a memory is that someone will have the ability to write artificial memories into a brain. It's somewhat scary to think that could happen one day, but it could also be used for good. - What about a Turing test, but for memories; like in Inception? - Perhaps the only difference between dreams and reality is just a matter of degree? Perhaps it just depends on its logical coherence? Once the logical coherence is larger than what the brain can be aware of, it is considered "real." - We've co-evolved with our environment so it should be coherent to us, but if we inject things into our environment that we haven't co-evolved with or evolved in, we get confused.

The Gradient Podcast
Stevan Harnad: AI's Symbol Grounding Problem

The Gradient Podcast

Play Episode Listen Later Aug 31, 2023 118:21


In episode 88 of The Gradient Podcast, Daniel Bashir speaks to Professor Stevan Harnad.Stevan Harnad is professor of psychology and cognitive science at Université du Québec à Montréal, adjunct professor of cognitive science at McGill University, and professor emeritus of cognitive science at the University of Southampton. His research is on category learning, categorical perception, symbol grounding, the evolution of language, and animal and human sentience (otherwise known as “consciousness”). He is also an advocate for open access and an activist for animal rights.Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pubSubscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (05:20) Professor Harnad's background: interests in cognitive psychobiology, editing Behavioral and Brain Sciences* (07:40) John Searle submits the Chinese Room article* (09:20) Early reactions to Searle and Prof. Harnad's role* (13:38) The core of Searle's argument and the generator of the Symbol Grounding Problem, “strong AI”* (19:00) Ways to ground symbols* (20:26) The acquisition of categories* (25:00) Pantomiming, non-linguistic category formation* (27:45) Mathematics, abstraction, and grounding* (36:20) Symbol manipulation and interpretation language* (40:40) On the Whorf Hypothesis* (48:39) Defining “grounding” and introducing the “T3” Turing Test* (53:22) Turing's concerns, AI and reverse-engineering cognition* (59:25) Other Minds, T4 and zombies* (1:05:48) Degrees of freedom in solutions to the Turing Test, the easy and hard problems of cognition* (1:14:33) Over-interepretation of AI systems' behavior, sentience concerns, T3 and evidence sentience* (1:24:35) Prof. Harnad's commentary on claims in The Vector Grounding Problem* (1:28:05) RLHF and grounding, LLMs' (ungrounded) capabilities, syntactic structure and propositions* (1:35:30) Multimodal AI systems (image-text and robotic) and grounding, compositionality* (1:42:50) Chomsky's Universal Grammar, LLMs and T2* (1:50:55) T3 and cognitive simulation* (1:57:34) OutroLinks:* Professor Harnad's webpage and skywritings* Papers:* Category Induction and Representation* Categorical Perception* From Sensorimotor Categories to Grounded Symbols* Minds, machines and Searle 2* The Latent Structure of Dictionaries Get full access to The Gradient at thegradientpub.substack.com/subscribe

Brain Inspired
BI 173 Justin Wood: Origins of Visual Intelligence

Brain Inspired

Play Episode Listen Later Aug 30, 2023 95:45


Support the show to get full episodes and join the Discord community. In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience? Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin! Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to embrace both nature and nurture. Wood lab. Related papers: Controlled-rearing studies of newborn chicks and deep neural networks. Development of collective behavior in newborn artificial agents. A newborn embodied Turing test for view-invariant object recognition. Justin mentions these papers: Untangling invariant object recognition (DiCarlo & Cox 2007) 0:00 - Intro 5:39 - Origins of Justin's current research 11:17 - Controlled rearing approach 21:52 - Comparing newborns and AI models 24:11 - Nativism vs. empiricism 28:15 - CNNs and early visual cognition 29:35 - Smoothness and slowness 50:05 - Early biological development 53:27 - Naturalistic vs. highly controlled 56:30 - Collective behavior in animals and machines 1:02:34 - Curiosity and critical periods 1:09:05 - Controlled rearing vs. other developmental studies 1:13:25 - Breaking natural rules 1:16:33 - Deep RL collective behavior 1:23:16 - Bottom-up and top-down

Theories of Everything with Curt Jaimungal
Gregory Chaitin: Complexity, Metabiology, Gödel, Cold Fusion

Theories of Everything with Curt Jaimungal

Play Episode Listen Later Aug 28, 2023 189:52


YouTube link https://youtu.be/zMPnrNL3zsE Gregory Chaitin discusses algorithmic information theory, its relationship with Gödel's incompleteness theorems, and the properties of the Omega number. Listen now early and ad-free on Patreon https://patreon.com/curtjaimungal. Sponsors: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything - TOE Merch: https://tinyurl.com/TOEmerch LINKS MENTIONED: - Meta Math and the Quest for Omega (Gregory Chaitin): https://amzn.to/3stCFxH - Visual math episode on Chaitin's constant: https://youtu.be/WLASHxChXKM - Podcast w/ David Wolpert on TOE: https://youtu.be/qj_YUxg-qtY - A Mathematician's Apology (G. H. Hardy): https://amzn.to/3qOEbtL - The Physicalization of Metamathematics (Stephen Wolfram): https://amzn.to/3YUcGLL - Podcast w/ Neil deGrasse Tyson on TOE: https://youtu.be/HhWWlJFwTqs - Proving Darwin (Gregory Chaitin): https://amzn.to/3L0hSbs - What is Life? (Erwin Schrödinger): https://amzn.to/3YVk8Xm - "On Computable Numbers, with an Application to the Entscheidungsproblem" (Alan Turing): https://www.cs.virginia.edu/~robins/T... - "The Major Transitions in Evolution" (John Maynard Smith and Eörs Szathmáry): https://amzn.to/3PdzYci - "The Origins of Life: From the Birth of Life to the Origin of Language" (John Maynard Smith and Eörs Szathmáry): https://amzn.to/3PeKFeM - Podcast w/ Stephen Wolfram on TOE: https://youtu.be/1sXrRc3Bhrs - Incompleteness: The Proof and Paradox of Kurt Gödel (Rebecca Goldstein): https://amzn.to/3Pf8Yt4 - Rebecca Goldstein on TOE on Gödel's Incompleteness: https://youtu.be/VkL3BcKEB6Y - Gödel's Proof (Ernest Nagel and James R. Newman): https://amzn.to/3QX89q1 - Giant Brains, or Machines That Think (Edmund Callis Berkeley): https://amzn.to/3QXniYj - An Introduction to Probability Theory and Its Applications (William Feller): https://amzn.to/44tWjXI TIMESTAMPS: - 00:00:00 Introduction - 00:02:27 Chaitin's Unconventional Self-Taught Journey - 00:06:56 Chaitin's Incompleteness Theorem and Algorithmic Randomness - 00:12:00 The Infinite Calculation Paradox and Omega Number's Complexity (Halting Probability) - 00:27:38 God is a Mathematician: An Ontological Basis - 00:37:06 Emergence of Information as a Fundamental Substance - 00:53:10 Evolution and the Modern Synthesis (Physics-Based vs. Computational-Based Life) - 01:08:43 Turing's Less Known Masterpiece - 01:16:58 Extended Evolutionary Synthesis and Epigenetics - 01:21:20 Renormalization and Tractability - 01:28:15 The Infinite Fitness Function - 01:42:03 Progress in Mathematics despite Incompleteness - 01:48:38 Unconventional Academic Approach - 01:50:35 Gödel's Incompleteness, Mathematical Intuition, and the Platonic World - 02:06:01 The Enigma of Creativity in Mathematics - 02:15:37 Dark Matter: A More Stable Form of Hydrogen? (Hydrinos) - 02:23:33 Stigma and the "Reputation Trap" in Science - 02:28:43 Cold Fusion - 02:29:28 The Stagnation of Physics - 02:41:33 Defining Randomness: The Chaos of 0s and 1s - 02:52:01 The Struggles For Young Mathematicians and Physicists (Advice) Learn more about your ad choices. Visit megaphone.fm/adchoices

Data Skeptic
LLMs in Music Composition

Data Skeptic

Play Episode Listen Later Aug 28, 2023 33:39


In this episode, we are joined by Carlos Hernández Oliván, a Ph.D. student at the University of Zaragoza. Carlos's research focuses on building new models for symbolic music generation. Carlos shared his thoughts on whether these models are genuinely creative, revealed situations where AI-generated music can pass the Turing test, and shared some essential considerations when constructing models for music composition.

Platzi English Academy
Martes de IA | Historia y origen de la inteligencia artificial

Platzi English Academy

Play Episode Listen Later Aug 23, 2023 8:03


Explore the fascinating evolution of Artificial Intelligence. From its mythological origins to today's advanced language models like GPT-3, this episode immerses you in the historical trajectory of AI: travel from Isaac Asimov's conceptual ideas to the crucial milestones of the 1950s, such as the perceptron and the Turing Test. Discover the moments of resurgence in the '80s, the rise of convolutional neural networks and language-processing models in the '90s, and the contemporary milestones of scalable models and achievements in game-playing. Join us on this exploration of how AI has transformed our technological world, and get ready to anticipate future advances in the field. --- Send in a voice message: https://podcasters.spotify.com/pod/show/platzi-podcast/message

Turing School Podcast
People Puzzle

Turing School Podcast

Play Episode Listen Later Aug 23, 2023 50:38


Bailey and Jesse chat with Matt Kaufman, a Staff Front End Engineer at Afresh and 1608 FE Alum. They discuss love, Matt's background in control engineering, the benefits of the Turing community, mission driven companies, engineering management, staff level engineering, capitalism, performance reviews, and other topics. They mention a few books, including Managing Humans by Michael Lopp and The Manager's Path by Camille Fournier. If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.

Turing School Podcast
Agile Arborist

Turing School Podcast

Play Episode Listen Later Aug 16, 2023 43:18


Hosts Jesse and Bailey are reunited and chat with Evan Wheeler, Senior Software Engineer at Peloton and 1801 BE Alum. The trio discuss Evan's Turing story, arborism, the job search, mentoring, imposter syndrome, Evan's work at Peloton, the developer experience going through an acquisition, company reorganizations, technologies that solve for scale, AI, and other topics. If you or someone you know are code curious, we encourage you to attend a Turing Try Coding Event. You can register for a Try Coding class at turing.edu/try-coding.

Weird Crap in Australia
Episode 271 - Australia's Secret Codebreakers (1939-2023) Part 2

Weird Crap in Australia

Play Episode Listen Later Aug 14, 2023 53:43


There's a part of modern warfare that is often overlooked by the people on the ground, and that's the communications systems feeding them information. A war can't be fought without orders, and where there's a messenger, there's an enemy looking to find out what he's up to. When Australia joined the WWII efforts on Sept 1, 1939, we didn't have a cryptanalyst team, or anything remotely resembling a Bond/Bourne/Reacher team of sleuths and codebreakers. But within 3 years, we were out there teaching the rest of the world how it was done. It was a group of people stationed across Australia - in Melbourne, Brisbane, Townsville, Darwin and other places - that helped the Americans seek vengeance for Pearl Harbor, strike back against the intended invasions, and start intercepting the Japanese fleets closer and closer to home. But for 30 years, these people were hidden from society, their achievements unrecognised and shredded like 1970s paperwork; some would argue they still are. Join Holly and Matthew as they look into Australia's own version of Bletchley Park and Alan Turing, or the Manhattan Project and Oppenheimer, complete with geniuses ahead of their time, abandoned and treated like crap after the War by governments desperate to bury their history.

Many Minds
The five portals of cognitive evolution

Many Minds

Play Episode Listen Later Aug 10, 2023 64:48


Welcome back all! So, this episode is a first for us. Two firsts, actually. For one, it features our first-ever repeat guest: Andrew Barron, a neuroscientist at Macquarie University. If you're a long-time listener, you might remember that Andy was actually the guest on our very first episode, 'Of bees and brains,' in February 2020. And, second, this episode is our first-ever "live show." We recorded this interview in July at the Diverse Intelligences Summer Institute in St Andrews, Scotland. Andy and his colleagues—the philosophers Marta Halina and Colin Klein—just released an ambitious paper titled 'Transitions in Cognitive Evolution.' In it, they take a wide-angle view of mind; they zoom out to try to tell an overarching story of how brains and cognition evolved across the tree of life. The story, as they tell it, is not about a smoothly gradual evolution of cognitive sophistication. Rather, it's a story built around five major transitions—fundamental changes, that is, to how organisms process information.  In this conversation, Andy and I discuss their framework and how it takes inspiration from other transitional accounts of life and mind. We lay out each of the five stages—or portals, as we refer to them—and talk about the organisms that we find on either side of these portals. We discuss what propels organisms to make these radical changes, especially considering that evolution is not prospective. It doesn't look ahead—it can't see what abilities might be possible down the road. We talk about how this framework got its start, particularly in some of Andy's thinking about insect brains and how they differ from vertebrate brains. And, as a bit of a bonus, we left in some of the live Q & A with the audience. In it we touch on octopuses, eusocial insects, oysters, and a bunch else.  Speaking of major transitions, I will be going on parental leave for much of the fall. So this is, in fact, the final episode of Season 4 and then the podcast will go on a brief hiatus. Before we get started on Season 5, we'll be putting up some of our favorite episodes from the archive. Alright friends, on to my conversation with Dr. Andrew Barron, recorded live at DISI 2023. Enjoy! A transcript of this episode will be available soon.   Notes and links 3:30 – For further information about the "major transitions" project, see the project's web page here. 7:00 – Many transitional accounts of evolution draw inspiration from the classic book The Major Transitions in Evolution. 8:00 – One influential previous transitional account of the evolution of cognition was put forward by Dennett in Kinds of Minds. Another was put forward by Ginsburg and Jablonka in The Evolution of the Sensitive Soul. 12:45 – A brief introduction to Cnidaria. 18:00 – The idea of cellular memory has been garnering more and more attention—see, e.g., this popular article.   21:00 – The idea of "reflective" systems is also used in computer science.  26:00 – The scala naturae, or Great Chain of Being, was the notion that organisms could be arranged on a scale of sophistication, with humans at the top of the scale.  30:00 – The "teleological fallacy" as Dr. Barron and colleagues describe it in their paper is the fallacy of "appeal[ing] to later benefits to explain earlier changes." 34:00 – A brief introduction to the phylum Gastropoda. 37:00 – For an overview of Dr. Barron's work on the neuroscience of honey bees, see our previous episode.   
48:30 – It's commonly observed in popular coverage of octopuses that their brains are “decentralized” (e.g., here, here, and here).  55:00 – In discussions of human brain evolution, it has been argued that certain kinds of cognitive offloading (e.g., writing) have allowed our brains to actually shrink in recent history. See our earlier episode with Jeremy DeSilva.  58:00 – On the notion of “Turing completeness,” see here. The idea of an “Infinite Improbability Drive” comes (apparently) from The Hitchhiker's Guide to the Galaxy.  1:00:06 – For a discussion of eusociality and individuality in the context of “major transitions” ideas, see here.   Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.  For updates about the show, visit our website or follow us on Twitter: @ManyMindsPod.

The Chad & Cheese Podcast
Firing Squad: InSquad's Alex Svinov

The Chad & Cheese Podcast

Play Episode Listen Later Aug 7, 2023 34:23


Tech hiring is in kind of a weird place right now. Newsworthy layoffs, artificial intelligence threats, and a down economy mean a lot of indecision and uncertainty. Many companies are just pushing pause as a result. All this makes for a tricky environment for any and all startups looking to place high-tech workers. InSquad, a one-click solution to source and hire top-quality vetted remote software developers, ain't scared though. That's why CEO Alex Svinov decided to face the Firing Squad. He thinks he has a better mousetrap than the likes of Upwork, Turing, Andela, and many others. Is he right? Gotta listen to find out.

History of Everything
History of Everything: Alan Turing and the Enigma Machine

History of Everything

Play Episode Listen Later Jul 28, 2023 76:39


Alan Mathison Turing was a British mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. He is widely considered to be the father of theoretical computer science and artificial intelligence. Travel to Italy With Me here Travel to Japan With Me here Bonus episodes as well as ad-free episodes on Patreon. Find us on Instagram. Join us on Discord. Submit your relatives on our website Join the Book Club on http://chirpbooks.com/history Get some delicious COFFEE Podcast Youtube Channel Learn more about your ad choices. Visit megaphone.fm/adchoices

This Week in Google (MP3)
TWiG 726: Corpora Bailiwick and Jones - Twitter becomes X, Alphabet earnings, NotebookLM, Web Integrity API

This Week in Google (MP3)

Play Episode Listen Later Jul 27, 2023 147:26


Twitter's rebrand to X may actually be happening soon. Alphabet reports better-than-expected quarterly results driven by growth in cloud. YouTube Q2 Ad Sales Rise 4.4%, Alphabet Handily Tops Earnings Estimates. Google's CFO just got a promotion. Google says 2 billion logged in monthly users are watching YouTube Shorts. Google Street View is back in Germany after 10+ year halt. Jeff talks about his trip to Google previewing NotebookLM. Google software engineer got $605,000 bonus, plus more from massive salary leak. Google's nightmare "Web Integrity API" wants a DRM gatekeeper for the web. Top tech companies form group seeking to control AI. ChatGPT broke the Turing test — the race is on for new ways to assess AI. Google abandons work to move Assistant smart speakers to Fuchsia. Google Play services ending support for Android 4.4 KitKat. Everything Samsung Announced at Summer Galaxy Unpacked 2023. ChromeOS 115 rolling out: Android App Streaming, PDF signatures. Picks: Stacey - Weighted Vest. Jeff - IMAX emulates PalmPilot software to power Oppenheimer's 70 mm release. Ant - "Google Stadia" Lives On For Me. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: discourse.org/twit AWS Insiders - TWIG fastmail.com/twit
