The AI Advantage: Why Your Business Can't Afford to Wait
In this episode of Ne Nedenmiş, we share a brief excerpt from Alan Turing's life and introduce the Turing test, which also inspired today's artificial intelligences…
Every time you type a message, unlock your phone, or trust a computer to make a decision, you're relying on the ideas of someone you may never have heard of and probably never learned about at school. That person is Alan Turing. He was a British mathematician, logician, and wartime codebreaker, and one of the most important figures in modern science and technology. Turing helped lay the foundations of modern computing. He played a crucial role in breaking Nazi codes during the Second World War. And he asked questions about machines that still shape how we think about artificial intelligence today. His influence is everywhere, from the security that protects your data to the algorithms behind AI. In this episode, I want to explore Alan Turing's life, his scientific achievements, and the legacy he left behind, before connecting his story to my Greatest Scientist of All Time series. Conversation Club - https://thinkinginenglish.blog/2026/02/23/376-who-was-alan-turing-english-vocabulary-lesson/ TRANSCRIPT - https://thinkinginenglish.blog/2026/02/16/375-do-we-live-in-a-surveillance-society-prepositions-of-place-english-grammar-lesson/ AD Free Episode - https://www.patreon.com/thinkinginenglish Thinking in English Bonus Podcast - https://www.patreon.com/collection/869866 YouTube Channel - https://www.youtube.com/@thinkinginenglishpodcast INSTAGRAM - thinkinginenglishpodcast (https://www.instagram.com/thinkinginenglishpodcast/) My Editing Software (Affiliate Link) - https://descript.cello.so/BgOK9XOfQdD Borough by Blue Dot Sessions Contact advertising@airwavemedia.com to advertise on Thinking in English. Thinking in English is part of the Airwave Media podcast network. Learn more about your ad choices. Visit megaphone.fm/adchoices
Olive Song from MiniMax shares how her team trains the M series frontier open-weight models using reinforcement learning, tight product feedback loops, and systematic environment perturbations. This crossover episode weaves together her AI Engineer Conference talk and an in-depth interview from the Inference podcast. Listeners will learn about interleaved thinking for long-horizon agentic tasks, fighting reward hacking, and why they moved RL training to FP32 precision. Olive also offers a candid look at debugging real-world LLM failures and how MiniMax uses AI agents to track the fast-moving AI landscape. Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions: https://bit.ly/granolablindspot LINKS: Conference Talk (AI Engineer, Dec 2025) – https://www.youtube.com/watch?v=lY1iFbDPRlw Interview (Turing Post, Jan 2026) – https://www.youtube.com/watch?v=GkUMqWeHn40 Sponsors: Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai CHAPTERS: (00:00) About the Episode (04:15) Minimax M2 presentation (Part 1) (17:59) Sponsors: Claude | Tasklet (21:22) Minimax M2 presentation (Part 2) (21:26) Research life and culture (26:27) Alignment, safety and feedback (32:01) Long-horizon coding agents (35:57) Open models and evaluation (43:29) M2.2 and researcher goals (48:16) Continual learning and AGI (52:58) Closing musical summary (55:49) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Alexander Embiricos is the Head of Codex at OpenAI, leading the development of the company's flagship AI coding systems that power automated software generation, debugging and developer workflows. Under his leadership, Codex has become one of the most widely adopted AI developer platforms. AGENDA: 05:13 Will Coding Be Automated? Why AI Could Create More Engineers, Not Fewer 07:17 Do We Need PMs? The "Undefined" Product Role and When It Matters 08:06 The Real AGI Bottleneck: Human Prompting, Validation, and "Too Much Effort" 13:04 Three Phases of Agents: Coding → Computer Use → Productized Workflows 13:52 Enterprise Reality Check: Security, Permissions, and Safe Agentic Browsing 17:57 Is Inference the New Sales and Marketing? 18:49 What % of Codex Was Written by AI? 21:33 Do OpenAI Use AI for Code Review? 23:31 Is there any stickiness to AI coding tools? 28:22 What Does "Winning" Mean at OpenAI? Mission, Competition, and Moats 32:04 The Future UI: Chat or Voice 34:10 Agent-to-Agent Workflows: Designing for Approvals, Compliance, and Automation 35:39 Do Coding Models Have a Data Moat? 36:50 How does Codex View Data: Will They Build Their Own Mercor and Turing? 37:27 How Does Codex View Consumer: Will They Compete with Lovable? 41:56 Benchmarks vs "Vibes": How People Actually Judge Models 42:43 Cursor's Edge and the Case for Building Your Own Models 47:37 Is SaaS Dead? What Still Defends Value (Humans + Systems of Record) 51:28 Talent Wars and Career Advice for New Engineers in the AI Era 01:01:03 Guardrails, the Fully AI-Managed Stack, and a 10-Year Vision for Everyone
An article in the journal Nature argues that human-level artificial intelligence is already a concrete reality in the form of today's language models. The authors contend that these tools have surpassed the historic Turing Test by demonstrating broad cognitive competence, from solving complex mathematical problems to literary creation. The piece rejects common criticisms, clarifying that artificial general intelligence does not require perfection, consciousness, or physical form to be recognised. According to the experts, the hesitation to accept this milestone stems from anthropocentric biases and inconsistent definitions, not from a lack of technical evidence. Finally, the article stresses that recognising the existence of AGI is essential for guiding public policy and understanding future technological development.
“How do you tend to respond when you do not know?” We had this question in our Journal Circle a couple of weeks ago. It’s at the heart of many issues in our world right now. How do we hold it? When do we conceal it? Where do we turn for knowledge? And what do we do with it when we acquire it? That’s what we explore in this episode of The Gentle Rebel Podcast. https://youtu.be/QRAS1dib_GM

Our Relationship With Not Knowing

I find this advert baffling. A couple are wandering around the Leeum Museum in South Korea. They didn't know it was big; they only gave themselves an hour. He thinks a roof tile is a book. Even when his phone corrects him, they skip off giggling without listening to the information. It reminds me of a billboard from the AI company Turing that says the quiet part out loud: “We teach AGI to think, reason, and code—so you don’t have to.” Are we being encouraged to outsource our thinking and reasoning, not to support and deepen our cognitive abilities, but to replace them? Are they saying we don’t have to think or reason anymore? Even if that’s not the intention, it’s certainly the outcome of using many tools like this. There seems to be a disregard for the sacred delight of human consciousness, thought processes, and creativity. And a subtle quest to eliminate mystery, curiosity, and the learning that comes from not knowing. Yet not knowing has always been central to human potential. It is the driving force of creativity, innovation, and deeper connection to the worlds within, around, and between us.

Open and Closed Stances

As people reflected in our Journal Circle, a thread emerged: openness vs closedness. Closed not-knowing: defensive, protective, secretive. Open not-knowing: curious, relational, exploratory. Closedness can feel tight. Clenched. Like rushing to paint over the threat of embarrassment or being found out. Openness can feel spacious. Physically expansive, deeper, and less pressured. Where the uncertainty is met with an invitation into possibility and curiosity rather than grasping, clinging, and defensiveness. We explore several ways this plays out in everyday life.

Pretending To Know

One response to not knowing is pretending to know. We’ve probably all done it. Nodding along when everyone else seems to understand. Staying quiet because asking a question feels risky. Research in 2007 found that children aged 14 months to five years ask an average of 107 questions per hour. By the time they reach late primary school, many stop asking questions altogether. In the episode, I share an anecdote from research led by Susan Engel, where a ninth grader is stopped mid-question with the instruction: “No questions now, please; it's time for learning.” Within institutional settings, our natural curiosity and creativity can be left behind, and if questions are deemed disruptive or inappropriate, we may simply pretend to know and struggle quietly. This is especially true for many more introverted and sensitive people, who are already generally disposed to slot in around others without drawing much attention to themselves.

Child-like Curiosity

A child doesn’t see their lack of knowledge as a reason to be ashamed. It’s underpinned by the electric buzz of connection. Everything is new, mysterious, and waiting to be explored. For an adult moving through and out of a rigid system, not knowing can feel like an exposing story in which their worth as a human is assessed. Pretending to know can become an adaptive strategy. A way to keep the peace. A way to belong.
There's also the technological version, prominent in many AI tools people rely on for accurate information. These systems are designed to always produce an answer, even when they are wrong. This reflects the kind of closed pretending that aims to foster a perception of expertise, so those listening believe that the source’s confidence equates to competence. But pretending doesn't only come from intentional deception. It can stem from stories we absorb, linking knowledge with worth: “I must know in order to be useful.” “I must be useful in order to be accepted.” Letting go of that story can be liberating.

Saying “I Don't Know”

“I don't know” is an option. A surprisingly radical one. When it is open, it creates space to explore our unknowing. An open “don’t know” admits not knowing with hands turned towards learning and discovery. It might come with an inner spark and the freedom from performance. A closed “I don't know” shuts things down. It can signal indifference or defensiveness. Sometimes that boundary is healthy. Sometimes it is armour.

Being “In The Know”

There is also the social currency of being “in the know.” Trends. News. Other people's business. Ignorance can feel like bliss. It can also feel like exclusion. From a closed place, being in the know becomes about control. From an open place, it can become a source of connection. The ability to link ideas, introduce people, and catalyse collaboration.

Knowing What's Best

Another response to uncertainty is doubling down on certainty. We are pattern-seeking creatures. We build cognitive maps to navigate a complex world. But when ambiguity feels overwhelming, certainty can feel like solid ground, even if it's forged, manufactured, and brittle. Closedness says “this is how it is”, refuses nuance, and punishes curiosity and accountability as disrespect, insolence, and rudeness. Open wisdom looks different. It sits shoulder to shoulder, acknowledges nuance, and is willing to say, “I don't know the best thing to do here.” Admitting one does not know can be a radical act in cultures that equate doubt with weakness and desperately seek a way to explain and understand everything, even without empirical evidence.

Knowing That We Don't Know

In a 1933 essay lamenting the rise of the Nazi movement in Germany, Bertrand Russell wrote, “The fundamental cause of the trouble is that in the modern world the stupid are cocksure, while the intelligent are full of doubt.” Charles Bukowski said something similar when giving advice to budding writers: “But the problem is that bad writers tend to have the self-confidence, while the good ones tend to have self-doubt.” These quotes highlight the importance of knowing what we do not know, and recognising the limits of our own perspective. This took us on a detour into the Dunning–Kruger effect: the idea that we can speak confidently about subjects precisely because we don't yet know what we don't know.

Reading Maps and Navigating Life

“I don't know, but I am aware of where to look to figure it out.” In The Return To Serenity Island course, we map elements of life, seeing it as a treasure-laden island. Not knowing is a door to connection, curiosity, creativity, and exploration. But it can also feel disorienting, confusing, and alienating at times. Maps help disorientation become orientation-in-progress without strict instructions or someone else’s path to follow. They can bring us home to ourselves.
Thousands are dead, Iran’s economy is collapsing, and the nation is on the brink of unrest, yet Dr. Hormoz Shariat says God is moving powerfully behind the scenes. In this urgent conversation, Sean sits down again with the “Billy Graham of Iran” to hear how many Iranians are rejecting Islam, turning to Christ, and how the underground church is growing amid persecution and chaos. This is a rare glimpse into both the darkness and the spiritual awakening unfolding in Iran right now. WATCH: Why Iranian Muslims are Turning to Jesus (https://youtu.be/_tdPBR7i7rw) *Get a MASTERS IN APOLOGETICS or SCIENCE AND RELIGION at BIOLA (https://bit.ly/3LdNqKf) *USE Discount Code [smdcertdisc] for 25% off the BIOLA APOLOGETICS CERTIFICATE program (https://bit.ly/3AzfPFM) *See our fully online UNDERGRAD DEGREE in Bible, Theology, and Apologetics: (https://bit.ly/448STKK) FOLLOW ME ON SOCIAL MEDIA: Twitter: https://x.com/Sean_McDowell TikTok: https://www.tiktok.com/@sean_mcdowell?lang=en Instagram: https://www.instagram.com/seanmcdowell/ Website: https://seanmcdowell.org Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: “surgical edits” for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks (a minimal probe sketch follows the episode notes below)
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire
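As promised in the Rakuten bullet above, here is a minimal sketch of what a token-level probe of this kind can look like: a linear classifier over per-token hidden states, flagging tokens to scrub. Everything in it (shapes, data, threshold, the choice of scikit-learn) is an illustrative assumption, not Goodfire's Ember API or the actual deployment.

```python
# Hypothetical token-level PII probe: a linear classifier over per-token
# hidden states. Shapes, data, and threshold are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 1024                 # hidden size of the monitored model (assumed)
n_tokens = 10_000

# Stand-ins for cached per-token activations and per-token PII labels.
# In a real pipeline these would come from synthetic PII data, since
# training on actual customer PII is off the table.
acts = rng.normal(size=(n_tokens, d_model)).astype(np.float32)
labels = rng.integers(0, 2, size=n_tokens)

probe = LogisticRegression(max_iter=1000).fit(acts[:8_000], labels[:8_000])

# At inference, scoring a token is one dot product against weights the host
# already has, so the filter adds near-zero latency versus a judge-LLM call.
pii_prob = probe.predict_proba(acts[8_000:])[:, 1]
tokens_to_scrub = np.flatnonzero(pii_prob > 0.5)
print(f"held-out accuracy on stand-in data: "
      f"{probe.score(acts[8_000:], labels[8_000:]):.2f}")  # ~0.5, by construction
```

On random stand-in data the probe learns nothing, which is fine; the sketch is about the mechanics and the cost profile, not the numbers.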
Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod. We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, the next frontier, of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there's always, like, the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot that people think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied. And in particular, applying it in production scenarios, in high-stakes industries, and really taking it sort of from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was still putting out, like, toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then, not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting, like, health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which, actually, I was also at Two Sigma back in the day. Wow, nice.

Myra Deng [00:02:37]: Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to, like, what all gets done in Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things. And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first, like, ten employees; now we're above 40, but still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is, you know, pretty switch-hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, more of, like, flexing some of the kind of MLE and developer skills as well.
Shawn Wang [00:03:53]: Very generalist. And you also had, like, a very, like, founding-engineer-type role.

Myra Deng [00:03:58]: Yeah, yeah.

Myra Deng [00:03:59]: So I also started as, and still am, a member of technical staff, and did a wide range of things from the very beginning, including, like, finding our office space and all of this.

Shawn Wang: Which we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it was, like, 200 people. It has room for 200 people. But you guys are, like, 10.

Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time, as head of product... I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real-world problems, and how does that then translate into a platform that's repeatable, or a product, and working across, you know, the engineering and research teams to make that happen, and also communicating to the world: What is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.

Shawn Wang [00:05:01]: I love "what is" questions because that's a very crisp, like, starting point for people, like, coming to a field. We always do a fun thing. Vibhu, why don't you try tackling what is interpretability, and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. So I think, like, one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah, you guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in the model, in the internals. There are different approaches to do that: you can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you're trying to solve that from there. You can do stuff like, you know, activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we adjust what's happening in the model's internals? How'd I do?
Mark Bissell [00:06:12]: That was really good. I think that was great. I think it's also kind of a minefield: if you ask 50 people who quote-unquote work in interp "what is interpretability", you'll probably get 50 different answers. And, yeah, to some extent also where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even broader, as almost like the science of deep learning, and just taking a not-black-box approach to kind of any part of the AI development life cycle, whether that means using interp for, like, data curation while you're training your model, or for understanding what happened during post-training, or for, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that are, you know, also part of the fundraise, around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models, as opposed to actually using this to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?

Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking, like, rollouts, or, like, you know, having different variations of a model that you can tweak with your steering. Yeah.

Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on, like, Twitter or wherever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are, like, extreme examples. There's also, you know, more mundane, like, enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise, or it doesn't appropriately learn the target task. And a big question that we've always had is, like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah, I mean, you know, just to anchor this for people, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. They called it that on the blog post, and I was like, wait, did OpenAI, like, officially use that term? And I'm like, that's funny. But, like, yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?

Myra Deng [00:08:51]: I think so. Yeah. Yeah.

Mark Bissell [00:08:53]: I think that's certainly one of the use cases. Yeah. I think the reason why post-training is a place where this makes a lot of sense is that a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. So, you know, one of the things that we've been looking at, another common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like, you look at Qwen or R1 and they have sort of this CCP bias.
Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there are certainly internal, yeah, parts of the representation space where you can sort of see where that lives. Yeah. And you want to kind of, you know, extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just, like, make it very negative to see what the opposite of CCP is.

Mark Bissell [00:09:47]: The super-America, bald eagles flying everywhere. But yeah. So in general, lots of post-training tasks where you'd want to be able to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up... are you familiar with, like, the grokking behavior? I mean, I know the machine learning term of grokking.

Shawn Wang [00:10:09]: Yeah.

Mark Bissell [00:10:09]: Sort of this, like, double descent idea of having a model that is able to learn a generalizing solution: as opposed to, even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible, you know, ways to do that. Can interp solve the double descent problem?

Shawn Wang [00:10:41]: Depends, I guess, on how you... Okay. So I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done. But maybe you're not done. Right. Right. But, like, if you actually can interpret what is generalizing, or what is still changing even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.

Mark Bissell [00:11:11]: I think that's certainly, like, the domain of problems that we're looking to get at.

Shawn Wang [00:11:15]: Yeah. To me, like, double descent is, like, the biggest thing for ML research, where, like, if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything where, like, anything levels off.

Vibhu Sapra [00:11:30]: I mean, also tangentially there's, like, okay, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic Fellows program, where basically you can have hidden biases in a model, and as you distill down, or, you know, as you train on distilled data, those biases always show up, even if, like, you explicitly try to not train on them. So, you know, it's just, like, another use case of: okay, if we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because, yeah, it's just, like, some worrying research that's out there that shows, you know, we really don't know what's going on.

Mark Bissell [00:12:06]: That is... yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. Nobody knows what's going on. Right. Like, subliminal learning is just an insane concept when you think about it. Right. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And there are mathematical explanations that you can get into, but... I mean.
Shawn Wang [00:12:34]: It feels so early days. Objectively, there are sequences of numbers that are more owl-like than others. There should be.

Mark Bissell [00:12:40]: According to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.

Shawn Wang [00:12:49]: But I mean, I think that's a cheat code, because there's not enough compute. But, like, if you believe in, like, platonic representation, like, probably it will transfer across different models as well. Oh, you think so?

Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed, sort of. There's something that is, like, path dependent from that seed that might cause certain overlaps in the latent space, and then sort of doing this distillation, yeah, like, it pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it. I think there's, like, a bunch of these open-ended questions, right? Like, you can't train in new stuff during the RL phase, right? RL only reorganizes weights, and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is: when you guys work at an interp lab, how do you decide what to work on, and what's kind of the thought process? Right? Because we can ramble for hours: okay, I want to know this, I want to know that. But, like, how do you concretely, like, you know, what's the workflow? Okay, there are, like, approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.

Myra Deng [00:14:07]: It's a really good question. I feel like we've always, at the very beginning of the company, thought about, like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real-world problems are, and then taking that back and trying to apply the current state of the art to those problems, and then seeing where they fall down, basically. And then using those failures, or those shortcomings, to understand what hills to climb when it comes to interpretability research. So, like, on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising, and so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to, like, control and design of models, you know, we tried steering with our first API and realized that it still fell short of black-box techniques like prompting or fine-tuning. And so went back to the drawing board, and we're like, how do we make that not the case, and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined... actually, Ekdeep and Atticus are, like, steering experts, and have spent a lot of time trying to figure out, like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is, like: look at real-world problems, try to translate that into a research agenda, and then, like, hill-climb on both of those at the same time.
Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double-click when you drop hints like "we found some problems with SAEs". Okay, what are they? You know, and then we can go into the demo. Yeah.

Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think, like, for instance, when we do things like trying to detect behaviors within models that are harmful, or, like, behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks: so taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is, like... I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual, like, real-world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think, didn't an SAE-based approach actually prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases, where we've had, like, good data sets, it hasn't been the case.
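To pin down the comparison Myra is describing, here is a hedged sketch of the two setups: the same logistic-regression probe trained once on raw activations and once on SAE feature activations. The SAE encoder, data, and labels below are random stand-ins; her observation is about what happens with real cached activations and real labels, where the raw-activation probe often wins.

```python
# Sketch: probe on raw activations vs. probe on SAE feature activations.
# Everything here is a stand-in; only the comparison structure is the point.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d_model, d_sae, n = 1024, 8192, 5000
acts = rng.normal(size=(n, d_model)).astype(np.float32)  # cached activations
y = rng.integers(0, 2, size=n)      # e.g. "did the model hallucinate?" labels

# Stand-in SAE encoder: feats = ReLU(x @ W_enc + b_enc), as in a vanilla SAE.
W_enc = torch.randn(d_model, d_sae) / d_model ** 0.5
b_enc = torch.zeros(d_sae)
feats = torch.relu(torch.from_numpy(acts) @ W_enc + b_enc).numpy()

def probe_auc(X: np.ndarray, y: np.ndarray, split: int = 4000) -> float:
    """Train on the first `split` rows, report held-out AUC on the rest."""
    clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
    return roc_auc_score(y[split:], clf.predict_proba(X[split:])[:, 1])

print("raw-activation probe AUC:", probe_auc(acts, y))
print("SAE-feature probe AUC:  ", probe_auc(feats, y))
```

The surprise she flags is that, on good labeled data, the first number often beats the second, even though the SAE was supposed to have carved the activation space into cleaner concepts.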
Shawn Wang [00:18:14]: And just because you named Rakuten, and I don't know if we'll get another chance: like, what is the overall... what is Rakuten's usage, or production usage? Yeah.

Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information.

Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten but with other people, around how we can help with potentially training and customization use cases as well. Yeah.

Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, like, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.

Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of, like, what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem: they were encountering things like synthetic-to-real transfer of methods. So they couldn't train probes, classifiers, things like that, on actual customer PII data. So what they had to do is use synthetic data sets, and then hope that that transfers out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that, right off the bat, is, like, a big challenge. You have multilingual requirements: so this needed to work for both English and Japanese text, and Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that had us pulling our hair out. And then also, with a lot of tasks, you might make simplifying assumptions if you're treating it as the easiest version of the problem, just to get, like, general results, where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of seeing what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.

Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is a lot of these methods are very efficient, right? So you're just looking at a model's internals itself, compared to a separate, like, guardrail, LLM-as-a-judge, a separate model: one, you have to host it; two, there's, like, a whole latency hit, so if you use, like, a big model, you have a second call. Some of the work around, like, self-detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.

Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency, really. Excellent.

Shawn Wang [00:21:17]: You have the steering demos lined up, so we can just kind of see what you got. I don't actually know if this is, like, the latest latest, or, like, an alpha thing.
Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for the technology, so you can see the steering in action. Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability-related problems, a lot of that comes down to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that, in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to sort of, like, toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom-hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back at the office. Well, hopefully. Should be. That's too much to run on that Mac. Yeah. I think it takes a full, like, H100 node. I think it's like, you can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang code base that we've been working on. So I'm going to tell it: hey, this SGLang code base is slow, I think there's a bug, can you try to figure it out? It's a big code base, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.

Mark Bissell [00:23:33]: Searching for any... bugs. Feature ID 43205.

Shawn Wang [00:23:38]: Yeah.

Mark Bissell [00:23:38]: 20, 30, 40. So let me... this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing it do "this code base is massive, for real". So we're going to start seeing Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.

Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools and stuff. It's purely sort of its demeanor. And there are other features that we found for interesting things, like concision. So that's more of a practical one: you can make it more concise. The types of programming languages it uses. But yeah, as we're seeing it come in. Pretty good. Outputs.

Shawn Wang [00:24:43]: Scheduler code is actually wild.

Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.

Vibhu Sapra [00:24:53]: What's the process of training an SAE on this? Or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this, like, autonomous interp... something about how agents for interp are different than, like, coding agents. I don't know, while this is spewing up: but how do we find feature 43205?
Mark Bissell [00:25:15]: Yeah. So in this case, our platform that we've been building out for a long time now supports all the sort of classic, out-of-the-box interp techniques that you might want to have, like SAE training, probing, things of that kind. I'd say the techniques for, like, vanilla SAEs are pretty well established now, where you take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then, yeah, it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties: there are top-k SAEs, batch top-k SAEs, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them, to actually understand that this is a Gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic, standard technique is: look at all of your input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang. You know, that's the... to be like, "Oh, I'm in this, I'm in this." Um, and so, you know, you could have a human go through all 43,000 concepts and...

Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?

Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem, and something that black-box methods really struggle with. Whereas, like, Gen Z you could always train a simple classifier to detect; hallucination is harder. But we've seen that models internally have some awareness of, like, uncertainty, or some sort of, like, user-pleasing behavior, that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.

Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of, like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.

Mark Bissell [00:27:51]: Although, so, part of what I like about that question is: there are SAE-based approaches that might, like, help you get at that. But oftentimes the beauty of SAEs, and, like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of, like, a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to sort of, like, hoping that when you fragment the latent space, one of the vectors that pops out...
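Mark's pipeline (gather activations, train a sparse autoencoder, label features by their max-activating examples) compresses into a short sketch. The sizes, the top-k variant, and the single-objective training loop below are illustrative assumptions, not Goodfire's platform code.

```python
# Minimal top-k SAE plus the max-activating-examples labeling step.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model: int, d_sae: int, k: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)
        self.k = k

    def forward(self, x):
        pre = self.enc(x)
        # Sparsity: keep only each example's top-k feature activations.
        top = torch.topk(pre, self.k, dim=-1)
        feats = torch.zeros_like(pre).scatter(-1, top.indices, top.values.relu())
        return self.dec(feats), feats

d_model, d_sae, k = 1024, 16_384, 32
acts = torch.randn(4096, d_model)         # stand-in for activations gathered by
sae = TopKSAE(d_model, d_sae, k)          # running a big dataset through the model
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
for batch in acts.split(256):
    recon, _ = sae(batch)
    loss = (recon - batch).pow(2).mean()  # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# Labeling: for one feature, pull the examples where it fires hardest, then a
# human (or an LLM auto-labeler) names the pattern, e.g. "Gen Z slang".
with torch.no_grad():
    _, feats = sae(acts)
feature_id = 4321                         # in the demo this was feature 43205
top_examples = feats[:, feature_id].topk(10).indices
```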
Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not, like, for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.

Mark Bissell [00:28:36]: Of course. Right. Yeah. So there are the known problems, like feature splitting and feature absorption. And then there's the off-target effects, right? Ideally, you would want to be very precise, where if you reduce the hallucination feature... suddenly maybe your model can't write creatively anymore, and maybe you don't like that, but you want to still stop it from hallucinating facts and figures.

Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done: any other things that you want to highlight, or any other interesting features you want to show?

Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research, and, yeah, other teams as well. But it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real-time steering of a trillion parameter model would have sounded...

Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like, you started the thing and then you edited the steering vector.

Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual, like, production use case would be on that, like, the real-time editing. That's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, you only have so many knobs, and you can just tweak it a bit more. And I don't know how it plays in. Like, people haven't done that much with, like, how does this work with or without prompting? Right? How does this work with fine-tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter, like, a parameter we just kind of leave at a default and don't use? So I don't know. Maybe someone here wants to put out a guide on, like, how to use this with prompting, when to do what?

Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. But he actually has a paper, as well as some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning, and how those are... he thinks of everything in a cognitive-neuroscience, Bayesian framework, but basically how you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even, like, get quantitative about the magnitude of steering you would need to do to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.

Myra Deng [00:31:20]: And so, like, formally equivalent, actually, in the limit. Right.
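Mechanically, the steering half of that equivalence is simple to state: add a scaled concept direction to the residual stream at one or a few layers while the model generates, leaving the weights untouched. The sketch below uses a small stand-in model and a random direction; in the demo above, the direction would come from an SAE feature such as 43205 inside Kimi K2, served through Goodfire's own stack.

```python
# Hedged sketch of activation steering with a forward hook. Model choice,
# layers, and the direction are stand-ins, not the Kimi K2 demo setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                       # small stand-in model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

direction = torch.randn(model.config.hidden_size)  # would be an SAE decoder row
direction = direction / direction.norm()
alpha = 8.0                         # steering strength knob, editable live

def steer(module, inputs, output):
    # Each GPT-2 block returns a tuple whose first element is hidden states;
    # shift every token's residual stream by alpha * direction.
    return (output[0] + alpha * direction.to(output[0].dtype),) + output[1:]

# "One layer, maybe three layers": hook a few middle blocks.
handles = [model.transformer.h[i].register_forward_hook(steer) for i in (5, 6, 7)]
try:
    ids = tok("The scheduler code is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0]))
finally:
    for h in handles:
        h.remove()                  # weights were never touched
```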
Mark Bissell [00:31:24]: So, like, one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do, like, many-shot jailbreaking? You, like, flood the context with examples of the behavior. And Anthropic put out that paper.

Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.

Mark Bissell [00:31:40]: Like, yeah. What's in this in-context learning and activation steering equivalence paper is: you can, like, predict the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of, like, equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.

Shawn Wang [00:32:02]: I was going to say, like, you know, I can, like, back-rationalize that this makes sense, because, you know, what context is, is basically just, you know, it updates the KV cache, kind of, and, like, then every next-token inference is still, like, you know, the sheer sum of everything all the way... plus all the context. It's up to date. And you could, I guess, theoretically steer that... you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers, like, like you did. So it's, like, not exactly equivalent.

Mark Bissell [00:32:33]: Right, right. There's sort of... you need to get precise about, yeah, like, how you sort of define steering and, like, how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature... Yeah. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow, Dan Urgraft, who are doing fellowships at Goodfire; Ekdeep's the final author there.

Myra Deng [00:32:59]: I think, actually, to your question of, like, what is the production use case of steering: I think maybe if you just think, like, one level beyond steering as it is today. Like, imagine if you could adapt your model to be, you know, an expert legal reasoner, like, in almost real time, like, very quickly and efficiently, using human feedback, or using, like, your semantic understanding of what the model knows and where it knows that behavior. I think that, while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about, like, what's the next interface for model customization and adaptation is a really interesting problem for us. Like, we have heard a lot of people actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker, or kind of, like, open-source libraries, to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's, like, something we're looking into.

Shawn Wang [00:34:06]: Yeah. So Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?

Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of... yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is, like, more in parameter space then. I guess it's maybe, like: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? Yeah. Just maybe one way.
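One way to make the pipes-versus-water picture concrete: a rank-one LoRA edits the weight matrix itself and persists in the saved adapter, while steering adds an offset to the activations flowing through it and vanishes when the hook is removed. A toy contrast, with every shape invented:

```python
# Toy contrast: editing the pipes (weights) vs. the water (activations).
import torch

d = 8
W = torch.randn(d, d)             # a frozen weight matrix in the base model
x = torch.randn(d)                # an activation flowing through it

# Rank-one LoRA: W_eff = W + B @ A. The function changes for every input,
# and the edit persists in the saved adapter.
A = 0.1 * torch.randn(1, d)
B = 0.1 * torch.randn(d, 1)
lora_out = x @ (W + B @ A).T

# Steering: weights untouched; add a concept direction to the activation.
# The edit exists only while the inference-time hook is active.
v = torch.randn(d)
v = v / v.norm()
steer_out = (x + 2.0 * v) @ W.T
```

Both shift the output, but the LoRA edit changes the function for every future input, while the steering offset is a per-forward-pass knob you can dial up, down, or off.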
Mark Bissell [00:34:44]: I like that analogy. That's my mental map of it at least, but it gets at this idea of model design, intentional design, which is something we're very focused on. And I hope we look back at how we're currently training and post-training models and just think: what a primitive way of doing that. There's no intentionality really in...

Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.

Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist when they do something wrong, without telling them why it was wrong or what they should have done differently? Just figure it out, right? Exactly. So that's RL. And it's sample-inefficient. What do they say? It's like sucking supervision through a straw. So you'd like to get to the point where you can have experts giving feedback to their models that gets internalized. Steering is an inference-time way of getting at that idea, but ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.

Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that trains foundation models, and you're on an interp team; how does it tie back? Do ideas come from the pre-training team? Do they go back? For those interested, you can watch that. There wasn't too much of a connection there, but it's still something they want to push for down the line.

Mark Bissell [00:36:33]: It can be useful for all of the above. There are certainly post-hoc use cases where it doesn't need to touch that.

Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? If you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, train an SAE, train a probe: there's already a lot done, and there's a lot of open-source work. You guys have done some too.

Shawn Wang [00:37:04]: There are notebooks from the Gemini team, or from Neel Nanda, like: this is how you do it, just step through the notebook.

Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high-scale. And the same with applying it: doing it for post-training and all that is fairly cheap compared to, okay, I want to get into model training but I don't have compute for pre-training. So it's a very nice field to get into. And there are also a lot of open questions. Some of them have to do with, okay, I want a product, I want to solve this. But there's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for open questions, open work that you'd either collaborate on or would just like to see solved. For people listening who want to get into mech interp, because people always talk about it: what are the things they should check out? Other than, of course, joining you guys; I'm sure you're hiring.

Myra Deng [00:38:09]: There's a paper, I think from Lee Sharkey, called "Open Problems in Mechanistic Interpretability," which I recommend everyone who's interested in the field read. It's a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability. Actually, not just young people: also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is in some ways low, and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It goes to show how exciting the field is, how fast it's moving, and how quick it is to get started.

Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source mech interp Slack channel; people are always posting questions, and folks in the space are responsive if you ask things on the various forums. But yeah, the open problems paper is a really good one.

Myra Deng [00:39:28]: For people who want to get started, MATS is a great program. What's the acronym? Machine Learning and Alignment Theory Scholars? It's like the...

Vibhu Sapra [00:39:40]: Normally summer-internship style.

Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. A lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we run one, as does Anthropic. Those are great places to get started if anyone is interested.

Mark Bissell [00:40:03]: Also, interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it scales up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Engineer Europe, because I see these industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah, I'm so glad you added that. It's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it; we want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability, and there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.

Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.

Vibhu Sapra [00:41:13]: And at the old-school conferences you see the interp room. I think they had a very tiny room until they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, from students. We covered a paper last week by two unknown authors, without many citations. You can make a lot of meaningful work there.

Shawn Wang [00:41:39]: One thing I don't think people have really mentioned yet is interp for code, which I think is an abnormally important area. The conspiracy theory two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and turn the good-code vector up. And isn't that the dream? But why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are limitations to what steering can do. A lot of the public image of steering is the Gen Z stuff: you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To make it a legal reasoner seems like a huge stretch, and I don't know if it will get there this way.

Myra Deng [00:42:36]: I will say, we are announcing something very soon that I won't speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's...

Shawn Wang [00:43:07]: And is this an emergent property of scale as well?

Myra Deng [00:43:10]: I think so. Yeah. Scale definitely helps: it allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning things you don't want exhibited in the data. So we're not anti-scale, but we're also realizing that scale alone is not going to get us to the type of AI development we want as these models get more powerful and get deployed in all these mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken, with real opportunities to improve. More to come on that very soon.
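Mechanically, the "turn the bad-code vector down" idea from a few turns back looks roughly like the sketch below, assuming you already have a trained sparse autoencoder over a layer's activations. The sizes, weights, and feature index here are illustrative stand-ins, not anyone's released SAE.

```python
import torch

torch.manual_seed(0)
d_model, d_sae = 64, 512  # toy sizes; production SAEs are far wider

# Stand-ins for a trained SAE's encoder and decoder weights.
W_enc = torch.randn(d_sae, d_model) / d_model ** 0.5
W_dec = torch.randn(d_model, d_sae) / d_sae ** 0.5

def edit_feature(resid, idx, scale=0.0):
    """Rescale one named feature's contribution to the activations.

    scale=0.0 ablates the feature (turns it down); scale>1.0 amplifies it.
    """
    feats = torch.relu(resid @ W_enc.T)            # sparse feature activations
    contribution = feats[:, idx : idx + 1] * W_dec[:, idx]
    return resid + (scale - 1.0) * contribution    # move along one direction only

resid = torch.randn(8, d_model)                    # activations for 8 tokens
edited = edit_feature(resid, idx=123, scale=0.0)
print((edited - resid).norm())                     # nonzero only where feature 123 fired
```

The point is that the edit is local to one named direction, which is exactly why the off-target effects discussed earlier (losing creative writing while dialing down hallucination) become a measurable question rather than a guess.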
Mark Bissell [00:44:02]: And I think that's a use case, basically, or maybe just a proof point that these concepts do exist. If you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. Steering is maybe the most coarse-grained peek at what that looks like, but it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.

Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.

Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.

Shawn Wang [00:44:35]: This is exactly it.

Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in the code. And they have malicious code, code error, a whole bunch of finer-grained sub-features broken out.

Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that you just have a few different rollouts with all these things turned off and on, and then that's synthetic data you can post-train on.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.

Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. We replicated a lot of these features in our Llama models as well.

Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours; DeepMind has open-sourced a lot of SAEs on Gemma; even Anthropic has opened a lot of this. There are a lot of resources we can share for people who want to get involved.

Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well: an amazing piece of work for visualizing these things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I wanted to pivot a little onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science, just because it's such a huge investment category, and also because I'm less qualified to do it; we have bio PhDs to cover that. But let me recap your work, maybe the Evo 2 stuff, and then build forward.

Mark Bissell [00:46:17]: Yeah, for sure. Maybe to frame up the conversation: another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. What we've been talking about with intentional design of models, steering, but also more advanced techniques, is humans imparting our desires and control into models and over models. The reverse direction is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's the other direction. Some of our life-sciences work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they're focusing on the biologically relevant things you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they're understanding elements of the human genome that we don't have names for, discoveries they've made that we don't know about; surfacing those is a big goal. And we're already seeing that. We are partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Menta, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.

Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out, because there's so much potential in the research. It's very interesting how it's basically the same as language models, just with a different underlying dataset. The same exact techniques; there's no change, basically.

Mark Bissell [00:48:59]: Yeah. And even in other domains, right? Robotics: I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down.

Vibhu Sapra [00:49:15]: We have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on 3D scans, medical domain knowledge, all of that too. So there's a push from both sides. But one of the things about mech interp is that you're a bit more cautious in some domains, healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just at the level of basic understanding, if we're trusting these systems to make claims, we want to know why, and what's going on.

Myra Deng [00:49:51]: Yeah, I think there's definitely a deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques, almost by accident. We got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.

Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, just a few of us. And we were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about...

Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills apply everywhere, right? It's just a general insight. Probably applies to finance too, I think, which would be fun given my history. I don't know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.

Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out: you're obviously the experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff, beyond the general developer who wants to plug and play the steering stuff; on the research side more so. Are there ideal design partners, customers, things like that?

Myra Deng [00:52:03]: Yeah, I can talk about the non-life-sciences side, and then I'm curious to hear from you on the life-sciences side. We're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling: there are a lot of models that work in pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword...
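On the genomics point above, the standard first check for a shortcut like ancestry is a linear probe on the model's embeddings: if a simple classifier can read the confound out of the representation, downstream predictions may be riding on it. A minimal sketch, with synthetic data standing in for real model activations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 128

# Pretend these are a genomics model's embeddings for n sequences,
# with a confound (ancestry group) leaking into a few dimensions.
emb = rng.normal(size=(n, d))
ancestry = rng.integers(0, 2, size=n)
emb[:, :3] += 0.8 * ancestry[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(emb, ancestry, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy well above chance means the confound is linearly decodable,
# a warning sign worth investigating before trusting the model's outputs.
print(f"ancestry probe accuracy: {probe.score(X_te, y_te):.2f}")
```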
Jonathan Siddharth is the founder and CEO of Turing, a $2.2 billion AI company that provides coding and reasoning data to train frontier models for OpenAI, Google, Meta, Anthropic, and more. Turing's mission is to accelerate superintelligence to drive economic growth. In this episode of World of DaaS, Jonathan and Auren discuss:
- How Turing creates expert data for frontier models
- Why SaaS is dying in the age of AI agents
- Disrupting the $30 trillion market for digital knowledge work
- Building a stage five company culture
You can find Auren Hoffman on X at @auren and Jonathan Siddharth on X at @jonsidd. Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com)
METAPHYSICAL Flattening - Gödel and Turing. This video of mine analyzes the ontological boundaries that distinguish human intelligence from computational processing, grounded in the logical and mathematical limits established by Gödel and Turing. Drawing on Roger Penrose, we will also see that the mind transcends formal algorithms because it possesses semantic understanding and a sentient intelligence that machines cannot replicate. Through the philosophy of Xavier Zubiri and Gibson's ecological theory, I will contest the representational model of the brain, proposing that human perception is a direct apprehension of the real and not a mere representation of captured, unknowable things. Finally, the discussion integrates metaphysics and quantum physics to support a realist view in which consciousness and free will operate in an ontological hierarchy above the mechanical causality of formal systems. From this it follows that the difference between humans and machines is structural in nature, not merely quantitative, defining the human being by a living openness to the world, in contrast with the abstract confinement of computation within mere language.

Authors and works cited in the sources:
• Alan Turing: cited for his conceptual model of the Turing Machine, a mathematical abstraction that defines what it means to compute and serves as the foundation of all modern computing.
• Charles Sanders Peirce: referenced for his semiotics, in which the sign mediates between the subject and reality. His works on semiotics and pragmatism have many Brazilian editions, frequently under the title Semiótica.
• David Chalmers: mentioned for the problem of qualia and the subjective dimension of consciousness (what an experience "is like"). His most influential work, A Mente Consciente (The Conscious Mind), treats these themes at length and is available in Portuguese.
• Gottfried Wilhelm Leibniz: cited for the concept of the compossible and his theory of possible worlds, which integrates natural laws and free decisions.
• James J. Gibson: author of the ecological theory of perception and critic of the representational model of the mind. His principal work, The Ecological Approach to Visual Perception, underlies these discussions.
• Kurt Gödel: cited for his incompleteness theorems, which demonstrated structural limits of formal mathematics and computation.
• Luis de Molina: creator of Molinism, cited for his doctrine of God's middle knowledge and the counterfactuals of freedom.
• Roger Penrose: cited for his Gödel-inspired critique of reducing the human mind to algorithmic systems. Penrose argues that human thought has a semantic grasp of truth that machines lack. Works such as A Mente Nova do Rei and O Microcosmo da Mente address these themes and are available in Portuguese.
• Thomas Aquinas: referenced for his Thomist metaphysics and the Five Ways, specifically the way of contingency, which treats God as the necessary, vertical cause. His magnum opus, the Summa Theologica, is widely available in Portuguese.
• Wolfgang Smith: author of O Enigma Quântico (cited explicitly by its Portuguese title), in which he proposes a tripartite structure of reality to explain the collapse of the wave function.
• Xavier Zubiri: cited for his work and concept of sentient intelligence, which defends the unity of sensation and intellection as a direct apprehension of the real. The book Inteligência Sentiente has a Portuguese translation from the publisher É Realizações.

Want to help the channel? Here's how:
*Link to my book: https://amzn.to/4dbsdhK
*Pix: https://widget.livepix.gg/embed/e47d6b80-f832-4fc2-a6af-ee6fa4c9ad9a
*Support the channel: https://apoia.se/canaldosocran
The magic survival number for Chinese EVs + why NIO/XPeng/Li Auto are dumping NVIDIA. Two stories, one theme: Chinese EVs leveling up.

THE 500K SAFETY LINE:
Not 2M. Not 1M. At 500K annual units, Chinese EV makers gain:
- Financial viability (100-150B yuan revenue = profitability)
- Strategic breathing room (absorb 20% drops like Li Auto)
- Operational efficiency (economies of scale kick in)
- Community critical mass (self-sustaining ecosystems)
- Supply chain power (justify vertical integration)

WHO'S WHERE:
Li Auto: 400K (safe, crossed the line in 2024)
Leapmotor: 600K (just crossed, suddenly legitimate)
XPeng: 430K (+126%), but 41% from a budget model, so not safe yet
NIO: 326K, but its cost structure is SO high it needs 600-700K
HarmonyOS: 590K, Aito 423K; Huawei backing = a different game

THE NVIDIA EXODUS:
All going in-house:
XPeng: G6/G7/G9/P7+ use the Turing chip. He Xiaopeng: "Best AI companies develop their own chips."
Li Auto: M100 chip = 2 NVIDIA Thor-U (LLM tasks), 3 Thor-U (vision). Products were delayed waiting for Thor; lesson learned.
NIO: Spent $300M+ on Orin X in 2024. Shenji NX9031: 2 chips = 8 Orin X, saves 10K yuan per car. Rolling out to the full lineup.

WHY:
Thor disaster: 2,000 TOPS promised, 700 delivered, multiple delays
Cost: 10K yuan savings x 500K units = billions
Independence: core tech can't be foreign-dependent

NVIDIA'S PROBLEM:
China share: 39% → 25% despite having Orin + Thor
1.16M units (NIO+XPeng+Li Auto) moving away
+HarmonyOS +Leapmotor = 2M+ annual units systematically shut out
Short-term: only 1% of revenue
Long-term: the most advanced market building its own silicon

THE CONNECTION:
Same story. 500K scale enables strategic independence. The NVIDIA exodus IS exercising that independence.
At 100K units: must use NVIDIA.
At 500K units: $200-300M chip development pays back in 2-3 years.
Autonomous driving = core differentiator. Why outsource it?
...we Tossed our way into oblivion with the exciting new show Pluribus in this one. Hello audient! The new show Pluribus comes at us with all the niceties, trappings, and plot pitfalls to be found in our rot-attacked brains. While the first season leaves many more questions than answers, leaving large holes in the logic and the story, it makes for a perfect Tossers episode; a real conceptual skeet-shooting playground. Pluribus deals with an alien(?) hive mind that has taken over every person on Earth except for the very few who had "incompatible" genetic material. The fact that this hive speaks and behaves in an almost one-to-one parody of the "personality" of ChatGPT, in our increasingly compartmentalized realities and interactions with one another (as on social media), makes the connection to AI-related issues almost inescapable. We'll mention here just one particularly interesting toss we came across, concerning the 'body' (the 'animal' part of Aristotle's famous definition of the human as 'a talking animal'). It juts out of the narrative like a sore thumb, like an unmourned loss: what does it mean for the main protagonist to "fall in love" with a body from the hive mind? How does a hive mind approach real issues of attachment, like pain and discord, as they arise through the "affair" with the protagonist? What is the role, within our attachments, of the body's memories, its unique history, when it is "pluribussed" like that? This also connects to a recurring theme in the Tossers' arsenal, the ethical imperative we inherited from our Derrida(ddy), the one we express as "Never let the dead bury the dead." As the bodies of the entire world's population become an indistinct mass grave, a Frankenstein-monster-cum-Turing-machine, it becomes both overly relevant and no longer relevant: the dead are recycled into food for the "hive-people," and yet you can fall in love with a person whose body is completely controlled by the "everyone" of the hive, including going and sleeping with another "survivor" (of the assimilation) while expressing love to our protagonist. Confused? We hope so. But it is a rather productive confusion, we believe, in our day and age. Rather appropriately, all Stars made an appearance here in one way or another. The rest will be told to our one, loyal (and virtual!) audient, and her flaming lips...
Scott and Wes sit down with Dimitri Mitropoulos to explore the wild edges of TypeScript—from running Doom in the type system to building tools like Typeslayer. They dig into Turing-complete types, performance limits, and what the future might hold for TypeScript and programming languages as a whole. Show Notes 00:00 Welcome to Syntax! 00:27 Dimitri Mitropoulos Introduction 01:29 What is Doom in TypeScript? 03:10 TypeScript Types and Turing Completeness 04:06 Project Overview and Challenges 04:57 ASCII Art and Visual Representation 06:50 Performance Issues with TypeScript 09:27 Brought to you by Sentry.io 09:51 Typeslayer Tool Introduction 16:19 Building in Tauri 20:54 Challenges around packaging 24:03 Future of TypeScript and AI 27:40 Is the Go-based compiler significantly faster? TSperf 30:23 Should there be something to follow TypeScript? 36:27 Staying up to date with WASM. 37:08 SquiggleConf Overview 38:26 Hosting a conference 40:45 What are your thoughts on Zig? 45:07 Vibe coding as an end goal 50:01 Sick Picks & Shameless Plugs Sick Picks Dimitri: pullfrog Shameless Plugs Dimitri: Michigan TypeScript on YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Lords: * Erica * Micah * https://www.reddit.com/r/micahwrites/ Topics: * Puerto Rico branded holiday jams * Chive drama on Reddit * https://www.reddit.com/r/KitchenConfidential/comments/1o0j6hq/cuttingacupofchiveseverydayuntilthereddit/ * How to cure tinnitus (maybe) * The Ballad of Blasphemous Bill * https://www.poetryfoundation.org/poems/46647/the-ballad-of-blasphemous-bill * In defense of making movies sequels until they're good again Microtopics: * Being finished with horrible shit. * Being in the middle of things forever. * The Minutes of the Intermittent Meetings of the Society of Apocryphal Gentlefolk, by Dark Art * Coming down from the high of PiCoSteveMo. * Explaining PiCoSteveMo to someone like it's their first time at Rocky Horror Picture Show. * Two things I'm willing to sacrifice to play PiCoSteveMo games. * Putting your PICO-8 game in a CRT filter. * Lawnmower Man, based on the title by Stephen King. * Developing a field system in Puerto Rico. * Winston's face appearing to the extent that Zoom thinks it's part of my chest. * A deafening hospital siren playing while you're trying to have a good time at the beach. * Pirate-themed massage. * Stealing the windsurfing gear and going for a ride. * Walking past the site of a pirate massage and fatal accident holding a solo cup. * Getting pushed off the road by seven full-sized Coca Cola trucks led by a Santa Sleigh and followed by a party truck with a giant octagonal speaker spreading holiday cheer. * Charging more for a well-traveled Coca Cola. * Holiday-Branded Traffic Jams. * Shipping your worst wine to India and it turns out that the sea voyage turns it into your best wine. * Spanish Milk. * Visiting Puerto Rico during linear time. * The Puerto Rican version of Sleep No More in which Bad Bunny might pull you into a dark corner for a one-on-one and it's not clear whether he works for the event or if he's just another attendee. * Day 57 of chopping chives on Reddit. * Drawing airplanes crashing into the chives that are too long. * Working with (and living with) the Chive Lord. * Comparing Day 1 chives with day 55 chives. * Finding Yoshi in a pile of chopped chives. * A job that exists. (But not one you get paid for.) * Asking the robot to add heart shapes to your food processor chives. * These are the Days of our Chives. * Each Sale I Drink a Glass of Water. * Self-hosting memes and Turing-complete memes. * Phase canceling your tinnitus. * Not wanting to look it up because then you'd know. * Curing tinnitus with extremely specific grenades. * A party where everyone is constantly singing their personal tinnitus tones. * Why don't we get bass tinnitus? * Can you cure bass tinnitus with snail caviar? * The native word for white people who are doing poorly in Alaska. * The ice worms wriggling their purple heads through the crust of the pale blue snow. * Pine trees cracking like little guns in the silence of the wood. * Prankster Bill dying with his arms and legs outstretched so that he won't fit in his coffin. * Poems that demand to be performed with a banjo. * Whether they have banjos in Alaska. * Having fun with the way words sound. * I'm not gonna make it – but I can be an X shape. * The Cremation of Sam McGee. * Burning your house down to get the insurance money to buy a telescope. * Making bad art until it becomes good. * Tremors 3: Back to Perfection. * A giant worm monster drilling up out of the ground in order to slice chives. * Really dwelling on how much you don't have in common with other people. 
* Six topics (and Shrieker Island) * A dollmaker on the run after making farcical plays about Hitler. * The Saved by the Bell themed music video featuring the same actors as the Final Destination movie it's promoting. * Would you take 90 minutes off of your life to have not seen Final Destination 4? * The replacement for the 1 to 10 pain scale where you decide which Final Destination movie you'd be willing to watch to take the pain away. * Low pain awareness. * Chess boxing win/loss ratios.
In this episode of Transform Work, John Winsor sits down with James Raybould, SVP at Turing, to explore what it really takes for organizations to move from talking about AI to actually benefiting from it. James shares how Turing works with the world's leading AI labs to make models smarter, while also helping enterprises deploy AI in practical, high-impact ways. Drawing on his own hands-on experimentation, he talks about how AI is reshaping everyday workflows, creativity, and how work gets done inside modern organizations. Before joining Turing, James spent years at Bain and more than a decade at LinkedIn, where he worked across strategy, analytics, product marketing, and growth. Together, John and James discuss what leaders need to understand now as AI shifts from experimentation to real operational change.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training superintelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

Timestamps
00:00 Introduction to AI and Reinforcement Learning
03:12 The Evolution of AI Training Data
05:59 Gaming Engines and AI Development
08:51 Virtual Reality and Robotics Training
11:52 The Future of Robotics and AI Collaboration
14:55 Building Applications with AI Tools
17:57 The Philosophical Implications of AI
20:49 Real-World Workflows and RL Environments
26:35 The Impact of Technology on Human Cognition
28:36 Cultural Resistance to AI and Data Collection
31:12 The Bottleneck of High-Quality Data in AI
32:57 Philosophical Perspectives on Data
35:43 The Future of AI Training and Human Collaboration
39:09 The Role of Subject Matter Experts in Data Quality
43:20 The Evolution of Work in the Age of AI
46:48 Convergence of AI and Human Experience

Key Insights
1. Reinforcement learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment. (A minimal sketch of this loop appears after these insights.)
2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.
3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.
4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.
5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation.
Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train models more similarly to how human children learn through constant environmental observation, rather than traditional batch-processing approaches.7. The convergence of simulation, robotics, and AI is creating unprecedented philosophical and practical challenges. As synthetic worlds become indistinguishable from reality and AI agents gain autonomy, we're entering a phase where the boundaries between digital and physical, human and artificial intelligence, become increasingly blurred, requiring careful consideration of dependency, agency, and the preservation of human capabilities.
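To make Key Insight 1 concrete, here is a minimal sketch of the attempt-verify-iterate loop such environments are built around. Everything in it (the CRM-style task, the random policy, the pass/fail check) is a hypothetical stand-in, not Turing's actual stack; in practice the policy is an LLM agent and the verifier's result becomes the training signal.

```python
import random

random.seed(0)

class CRMEnv:
    """Toy stand-in for an enterprise-workflow RL environment."""

    def __init__(self):
        self.records = {"acme": {"stage": "lead"}, "globex": {"stage": "lead"}}
        self.task = "Move the 'acme' record to stage 'closed-won'."

    def step(self, action):
        # Actions are (record, field, value) writes, like form edits in a UI.
        record, field, value = action
        if record in self.records:
            self.records[record][field] = value

    def verify(self):
        # Programmatic check of the end state; this is the reward signal.
        return self.records["acme"]["stage"] == "closed-won"

def policy(env):
    # A real agent is an LLM; random actions are enough to show the loop.
    record = random.choice(list(env.records))
    stage = random.choice(["qualified", "closed-won", "closed-lost"])
    return (record, "stage", stage)

env, attempts = CRMEnv(), 0
while not env.verify():
    env.step(policy(env))
    attempts += 1
print(f"solved after {attempts} attempts")  # failed attempts become feedback
```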
AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
Jaeden & Jamie discuss several AI companies that have raised significant funding, including Reflection AI, Turing, Shield AI, and Anthropic. They explore the potential and current projects of these companies, highlighting the trends in AI technology and investment. The conversation provides insights into the future of AI and its applications in various industries, including defense and coding. Our Skool Community: https://www.skool.com/aihustle Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Kim Stanley Robinson discusses Real Utopian Futures. Find the feed of English episodes only here: https://www.futurehistories-international.com/ You can also import the RSS feed to your favorite app: https://www.futurehistories-international.com/feed.xml Shownotes The reference page on Kim Stanley Robinson, his works, interviews, talks, etc. (including a discussion forum): https://www.kimstanleyrobinson.info/ Robinson, K. S. (2020). The Ministry for the Future. Orbit Books. https://www.orbit-books.co.uk/titles/kim-stanley-robinson/the-ministry-for-the-future/9780356508863/ Robinson, K. S. (2017). New York 2140. Orbit Books. https://www.orbit-books.co.uk/titles/kim-stanley-robinson/new-york-2140/9780356508788/ Robinson, K. S. (1988). The Gold Coast. Macmillan. https://us.macmillan.com/books/9780312890377/thegoldcoast/ Blumenfeld, J. (2024). Managing Decline. Cured Quail, Vol. 3. https://curedquail.com/Managing-Decline Blumenfeld, J. (2022). Climate Barbarism. Adapting to a wrong World. Constellations, 30, 162–178. https://onlinelibrary.wiley.com/doi/10.1111/1467-8675.12596 the quoted Kohei Saito video: https://youtube.com/shorts/WnvhD7p651M?si=SdfPftKOCJM6MS9j the lecture in which Kim Stanley Robinson talks about "futurecide" and "preemptive capitulation": https://youtu.be/HpzXkpx29S4?si=PVlOE53Hj5-BZR5B reporting on and summary of the talk: https://www.ioes.ucla.edu/article/the-war-on-science-is-here-kim-stanley-robinson-says-its-just-the-beginning/ Löwy, M. (2005). What is Ecosocialism? Capitalism Nature Socialism, 16(2), 15–24. https://www.tandfonline.com/doi/abs/10.1080/10455750500108237 for an overview of the history and different schools of Ecomarxist/Ecosocialist theory: https://www.historicalmaterialism.org/ecology-marxism-andreas-malm/ on Anna Kornbluh: http://www.annakornbluh.com/ on Mass Extinction Events: https://www.nhm.ac.uk/discover/what-is-mass-extinction-and-are-we-facing-a-sixth-one.html Dessler, A. (2025). You have 100 'Energy Slaves'. The Climate Brink. https://www.theclimatebrink.com/p/you-have-100-energy-slaves on the 30 by 30 Biodiversity Goal: https://www.cop28.com/en/thought-leadership/The-30x30-Biodiversity-Goal-at-COP28 the International Maritime Organization: https://www.imo.org/ on the 'Half-Earth Project': https://eowilsonfoundation.org/what-is-the-half-earth-project/ Wilson, E. O. (2016). Half-Earth. Our Planet's Fight for Life. Norton Books. https://wwnorton.com/books/9781631492525 Pendergrass, D. & Vettese, T. (2022). Half-Earth Socialism. A Plan to Save the Future from Extinction, Climate Change and Pandemics. Verso. https://www.versobooks.com/products/2650-half-earth-socialism one of the many interviews/talks in which Kim Stanley Robinson talks about science fiction as the realism of our times: https://youtu.be/p1wNhc46xjE?si=hOdKuwRQhef-9tLs on the Turing Test: https://en.wikipedia.org/wiki/Turing_test on Neoliberalism attaching itself to demands of the New Left: Boltanski, L. & Chiapello, E. (2018). The New Spirit of Capitalism. Verso. https://www.versobooks.com/products/1980-the-new-spirit-of-capitalism on Friedrich Hayek: https://en.wikipedia.org/wiki/Friedrich_Hayek Williams, R. (2015). Structures of Feeling. In: D. Sharma & F. Tygstrup (Ed.), Structures of Feeling. Affectivity and the Study of Culture (pp. 20-26). https://www.degruyterbrill.com/document/doi/10.1515/9783110365481.20/html on Keynesianism: https://en.wikipedia.org/wiki/Keynesian_economics Vogl, J. (2017). The Ascendancy of Finance. Polity Press.
https://www.politybooks.com/bookdetail?book_slug=the-ascendancy-of-finance--9781509509294 Graeber, D. (2011). Debt. The First 5,000 Years. Melville House. https://files.libcom.org/files/__Debt__The_First_5_000_Years.pdf on Thomas Piketty: https://en.wikipedia.org/wiki/Thomas_Piketty on Gabriel Zucman: https://en.wikipedia.org/wiki/Gabriel_Zucman on the 'Zucman tax': https://www.lemonde.fr/en/les-decodeurs/article/2025/09/23/zucman-tax-what-the-proposed-wealth-tax-would-mean-for-france_6745653_8.html on Carbon Taxes: https://en.wikipedia.org/wiki/Carbon_tax Sorg, C. (2023). Finance as a Form of Economic Planning. Competition & Change, 29(1), 17-37. https://journals.sagepub.com/doi/10.1177/10245294231217578 Sarkar, S. (2024). The Carbon Coin. An Eco-Speculative Approach to Decarbonisation in Kim Stanley Robinson's The Ministry for the Future. Green Letters, 28(4), 297–310. https://www.tandfonline.com/doi/full/10.1080/14688417.2025.2483998 A policy proposal on 'Carbon Reward' from the same researcher whose earlier policy work inspired the 'Carbon Coin' idea in The Ministry for the Future: https://deltonchen.substack.com/p/new-economic-blueprint-for-resolving see also: https://globalcarbonreward.org/newsletters/carbon-coin/ on Quantitative Easing: https://en.wikipedia.org/wiki/Quantitative_easing on Carbon Drawdown: https://en.wikipedia.org/wiki/Carbon_sequestration on Nicholas Stern: https://en.wikipedia.org/wiki/Nicholas_Stern,_Baron_Stern_of_Brentford on the Democratic Socialists of America: https://www.dsausa.org/ the Network for Greening the Financial System: https://www.ngfs.net/en on COP30 in Belém: https://unfccc.int/cop30 Solnit, R. (2022). Orwell's Roses. Penguin. https://www.penguinrandomhouse.com/books/607057/orwells-roses-by-rebecca-solnit/ Future Histories Episodes on Related Topics S3E47 | Jason W. Moore on Socialism in the Web of Life https://www.futurehistories.today/episoden-blog/s03/e47-jason-w-moore-on-socialism-in-the-web-of-life/ S03E44 | Anna Kornbluh on Climate Counteraesthetics https://www.futurehistories.today/episoden-blog/s03/e44-anna-kornbluh-on-climate-counteraesthetics/ S03E32 | Jacob Blumenfeld on Climate Barbarism and Managing Decline https://www.futurehistories.today/episoden-blog/s03/e32-jacob-blumenfeld-on-climate-barbarism-and-managing-decline/ S03E30 | Matt Huber & Kohei Saito on Growth, Progress and Left Imaginaries https://www.futurehistories.today/episoden-blog/s03/e30-matt-huber-kohei-saito-on-growth-progress-and-left-imaginaries/ S03E23 | Andreas Malm on Overshooting into Climate Breakdown https://www.futurehistories.today/episoden-blog/s03/e23-andreas-malm-on-overshooting-into-climate-breakdown/ S02E18 | Drew Pendergrass and Troy Vettese on Half Earth Socialism https://www.futurehistories.today/episoden-blog/s02/e18-drew-pendergrass-and-troy-vettese-on-half-earth-socialism/ --- If you are interested in democratic economic planning, these resources might be of help: Democratic planning – an information website https://www.democratic-planning.com/ Sorg, C. & Groos, J. (eds.)(2025). Rethinking Economic Planning. Competition & Change Special Issue Volume 29 Issue 1. https://journals.sagepub.com/toc/ccha/29/1 Groos, J. & Sorg, C. (2025). Creative Construction - Democratic Planning in the 21st Century and Beyond. Bristol University Press.
[for a review copy, please contact: amber.lanfranchi[at]bristol.ac.uk] https://bristoluniversitypress.co.uk/creative-construction International Network for Democratic Economic Planning https://www.indep.network/ Democratic Planning Research Platform: https://www.planningresearch.net/ --- Future Histories Contact & Support If you like Future Histories, please consider supporting us on Patreon: https://www.patreon.com/join/FutureHistories Contact: office@futurehistories.today Twitter: https://twitter.com/FutureHpodcast Instagram: https://www.instagram.com/futurehpodcast/ Mastodon: https://mstdn.social/@FutureHistories English webpage: https://futurehistories-international.com Episode Keywords #KimStanleyRobinson, #JanGroos, #Interview, #FutureHistories, #FutureHistoriesInternational, #futurehistoriesinternational, #Utopia, #RealUtopias, #DemocraticPlanning, #Keynes, #Dystopia, #DemocraticEconomicPlanning, #Capitalism, #EcoSocialism, #Socialism, #GreenCapitalism, #Narratives, #ClimateCounterAesthetics, #Transition, #SocioEcologicalTransition, #SocialDemocracy, #ScienceFiction
Welcome back to another EUVC Podcast, where we explore the lessons, frameworks, and insights shaping venture ecosystems across the globe.In this special Southeast Asia edition this week, David Cruz e Silva from EUVC and Ambika from Circle Capital sit down with Binh Tran from AVV (Ascend Vietnam Ventures) - a VC firm headquartered in Ho Chi Minh City, backing tech founders across Vietnam, Southeast Asia, and the U.S.A serial founder turned VC, Binh sold his first company Klout for $200M in 2014 before launching 500 Startups Vietnam and later AVV, which has now backed about 500 startups, including unicorns Turing, Skymavis, and ApplyBoard.Together, they unpack Vietnam's ecosystem growth, power-law returns in emerging markets, government catalysts, and how to back founders with both grit and global ambition.
Backtracking on your standard of living. What America 250 could have looked like with Dome at the helm. What happened with Dan leaving the FBI? Turning around a criminal organization. Communism is the religion of the malcontent wherever you go. Follow The Jesse Kelly Show on YouTube: https://www.youtube.com/@TheJesseKellyShow See omnystudio.com/listener for privacy information.
This is not another book about how machines are becoming too smart, or a lecture about how we may lose control. It is the story of how artificial intelligence was born from human longing, grief, and ambition. And it is the story of the humans who are at the forefront of this field, from Hinton to Lovelace, Turing to Altman. "Always the same: Are you scared? Should I be scared? We have all heard the stories and had barroom conversations. There is no point in beating around the bush. The conversation about AI can be terrifying. From the far-fetched fantasy of killer robots taking over to the very real threat of mass automation, it seems like AI captured the public imagination overnight." He is the author of "Artificially Intelligent: The Very Human Story of AI." https://www.amazon.ae/Artificially-Intelligent-Very-Human-Story/dp/1487567677 http://www.yourlotandparcel.org
Why do we even need AI — and why now? Host Nick Schutt sits down with Sunil Prasad, Leidos Principal Solutions Architect and Georgia Tech-trained data scientist, for a sweeping journey through AI's 100-year history — from 1920s conceptual foundations to today's agentic AI — and a candid debate on whether machines will ever climb to the top of the food chain. Key topics: The 5 phases of AI evolution (1920s–2025) From perceptrons to LLMs to agentic workflows Why AI is outsourcing human cognition for the first time since the Industrial Revolution Where AI shines today (predictive analytics, autonomous systems) Where it shouldn't go (parenting, therapy, lethal decisions) If you're wondering whether AI is just hype or the next industrial leap, this episode is for you. Channel: @RobotsandRedTapeAI | Host: Nick Schutt Subscribe for more no-BS conversations on tech, bureaucracy, and the future.
Fredrik chats to Dylan Beattie about Rockstar, esoteric programming languages (Perl in Latin, anyone?), and what might happen after the AI bubble. AI will ruin jokes; it can't do things just right. But some things hiding under the label are actually useful as well. Have we been in any similarly strange bubbles before, and what might be left that's useful after it? Also evolution, revolution, and strange Scrabble facts. Recorded during Øredev 2025. The episode is sponsored by Ellipsis - let us edit your podcast and make it sound just as good as Kodsnack! With more than ten years and 1200 episodes of experience, Ellipsis gets your podcast edited, chapterized, and described with all related links in a prompt and professional manner. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Dylan Dylan also has a podcast - Tech, bugs & rock'n'roll Dylan's presentation at Øredev 2025: Rockstar 2.0: building an esoteric language interpreter in .NET Rockstar Formal grammar Esoteric programming languages Damian Conway Perl Perl in Latin - the paper and the module Latin Inflectional grammar Domain-specific languages Lilypond - Scheme dialect for sheet music Context-free grammar Engraving - the art of creating sheet music codewithrockstar.com Support us on Ko-fi! Scrabble Metal umlaut Piet - the language which should have been called Mondrian Piet Mondrian Mondrian - the undeserving tool Turing completeness The Buster Keaton house scene The dot-com bubble The subprime mortgage crisis Enron Douglas Adams Three Mile Island Windows Vista Tim Berners-Lee Solid - Tim's project of holding your data locally Ellipsis - sponsor of the week: we edit Kodsnack, and we can edit your podcast too! The emperor's new mind Quantum computing Hadamard gate The linebreakers - Dylan's band of conference speakers ASML Titles Always good fun that one The version of the story that I tell in the talk Enough clichés Resident mad scientist of the Perl community Felis commidet piscem Always the cat that is eating Lexical flexibility Fundamentally, programming is programming A big win for everyone Linguistic conventions and extended alphabets That's a different letter Regional assumptions German orthography A piece of impressionist art Hang it on the wall Something hidden in something else Physical comedy at its greatest Money people believe exists The amount of pretend money It has to come from reality Fortunately, I do not have a trillion dollars Quietly siphoned off Emotionally flat What can I steal from? A little LLM that works for you A spectacular collapse A billion lines of crap Pruning the decision tree Fix the next milestone in the public consciousness Five years of excitement, five years of disappointment Overdue for a little disappointment Reliant on Dutch technology
Date with a Debut is a podcast hosted by writer Nick Wasiliev: shining a light on debut authors, their incredible books, and their journeys to publication. This week, Nick sits down with Nick Croydon to discuss his debut speculative fiction novel, The Turing Protocol. Enjoy the show? Drop us a review, it really helps the show out! You can also support and follow Nick on Substack: https://nickwasiliev.substack.com/ Subscribe on Podbean to never miss an episode: https://www.podbean.com/site/podcatcher/index/blog/3mRME5IMoSJx Date with a Debut is featured as part of the Australian literary platform Words & Nerds. Check out all shows on the platform here: https://danivee.com.au/podcasts/ BOOKS: Debut Feature: The Turing Protocol / Nick Croydon: https://booktopia.kh4ffx.net/55ozV3 Other Books Mentioned: The Peak / Sam Guthrie: https://booktopia.kh4ffx.net/N97EZb As I Walked out One Midsummer Morning / Laurie Lee: https://booktopia.kh4ffx.net/MA7EJ2 Slaughterhouse 5 / Kurt Vonnegut: https://booktopia.kh4ffx.net/GK71e9 PRODUCTION NOTES: Host: Nick Wasiliev Guest: Nick Croydon Editing & Production: Nick Wasiliev Podcast Theme: ‘Chill' by Sakura Hz Production Code: 3:20 Episode Number: #54 Additional Credits: Dani Vee (Words & Nerds), Tace Kelly (Affirm Press), Biarta Parnham, Piper Daly (Biased) FOLLOW NICK WASILIEV (+ DATE WITH A DEBUT): A LITTLE IDEA PODCAST: Feed SUBSTACK: nickwasiliev.substack.com WEBSITE: www.nicholaswasiliev.com YOUTUBE: @NickWasiliev_Official INSTAGRAM: @nickwasiliev TWITTER/X: @Nick_Wasiliev FACEBOOK: Nick Wasiliev TIKTOK: @nickwasiliev © 2025 Nick Wasiliev and Breathe Art Holdings ‘Date With A Debut' is a Words and Nerds and Breathe Art Podcasts co-production recorded and edited on Awabakal Country, and we pay our respects to all elders past and present.
Welcome to Creative Block! This week's guest is JESSE MOYNIHAN. Jesse is a cartoonist, animator, story artist, and art director whose work can be seen on Adventure Time and Midnight Gospel, and who is the creator of the shorts Manly, Forming, Jesus 2, and Hungry Ghost. In this episode, VEE and SEAN talk to JESSE about PITCHING JESUS 2, finding money for projects, balancing the beauty of projects, and so much more! While we talk, we doodle on a MAGMA, where we draw from prompts we got on TWITTER, INSTAGRAM, YOUTUBE, THREADS, NEWGROUNDS, and PATREON. Subscribe to our channel to hear more stories of other animation professionals! ■■■■■■■■■■■■■■■Thank you to our Patreon Blockheads for supporting the show!Want to become a Patron? ► https://www.patreon.com/crtvblockHit subscribe and follow our socials for updates! ►https://bsky.app/profile/crtvblock.bsky.social ►https://x.com/crtvblock ► https://www.instagram.com/crtv.block/ ► https://www.threads.net/@crtv.block ► https://creativeblockpod.newgrounds.com/■■■■■■■■■■■■■■■Follow the hosts!VEE! ► https://x.com/violainebriat ► https://www.instagram.com/violainebriat ► https://www.threads.net/@violainebriat ► https://www.violainebriat.com/SEAN! ► https://linktr.ee/lordspew ► https://x.com/lordspew ► https://www.instagram.com/lordspew/ ► https://www.threads.net/@lordspewFollow the guests!JESSE! ► https://jessemoynihan.com/ ► https://www.youtube.com/@JesseMoynihan ► https://www.patreon.com/cw/studiopicatrix ■■■■■■■■■■■■■■■Editing by Clemence Briat ► https://twitter.com/clem_n_mProduced by Marco Beltran ► https://twitter.com/orcsocksReels/Shorts by Ebuka.PNG ► https://www.instagram.com/ebuka_0fomaTheme song by Louie Zong ► https://twitter.com/everydaylouie■■■■■■■■■■■■■■■1:14 Turning an old Pitch into JESUS 212:43 Rich people doing weird blood stuff17:29 Negging during a pitch22:10 Approach to creating Jesus 235:22 Money talk (financing and fundraising projects)51:03 "People are going to watch it on their phone anyway"1:00:37 Thinking about a demographic1:02:00 Listener Questions
Franco Buffoni, "Aureole e tigri dal mondo queer" (Racconti di un'altra letteratura / Stories from another literature), il ramo e la foglia edizioni, www.ilramoelafogliaedizioni.it
This book is intended above all as testimony to fifty years of humble and constant commitment to giving literary dignity to certain voices from the queer world. Because in the years of my formation, people like me were considered filthy perverts on the right, sinners in the centre, and a bourgeois degeneration on the left. That wasn't true, of course: the working class counted as many people like me as every other social class; they simply stayed in the closet. But, as Mario Mieli used to say, you only had to frequent the public urinals to realise how many proletarians were out cruising them. The situation, at least in the Western world, slowly began to change with 1968, in a crescendo that culminated on 17 May 1990 with the WHO's declaration defining the sexual orientation of people like me as "a natural variant of human sexuality". So: no illness, no cure, no reparative therapy. These stories were written in the conviction that some young reader may draw from them a sense of pride and renewed dignity.
"In Central Park, without telling him anything beforehand, Louis has him meet Benny Paret's son. Emile studies him for a moment, then brings his face close and says: 'I didn't mean to. You look very much like your father.' In 2005 he takes part in the Gay Pride march and comes out in an interview with Sports Illustrated: 'If I kill a man, many people understand and forgive me. But if I love a man, for many that remains an unforgivable crime. That is when I become the villain. That is why, even though I have never physically been in prison, it is as if I had spent almost my whole life in a cell.'"
Franco Buffoni has published Suora carmelitana (1997), Il profilo del Rosa (2000), Guerra (2005) and Noi e loro (2008). L'Oscar Poesie 1975-2012 collects his poetic work. With Jucci (2014) he won the Premio Viareggio. Later came Avrei fatto la fine di Turing (2015), the play Personae (2017), La linea del cielo (2018) and Betelgeuse e altre poesie scientifiche (2021). He is the author of the novels Zamel (2009), Il servo di Byron (2012), La casa di via Palestro (2014), Il racconto dello sguardo acceso (2016), Due Pub tre poeti e un desiderio (2019), Silvia è un anagramma (2020), Vite negate (2021) and Il Gesuita (2023). In 2025 Interlinea published the essay collection Nel nome del male, and Mondadori published Poesie 1975-2025, which includes the new collection La coda del pavone (2025). His website is www.francobuffoni.it
Become a supporter of this podcast: https://www.spreaker.com/podcast/il-posto-delle-parole--1487855/support
IL POSTO DELLE PAROLE. Listening makes you think. https://ilpostodelleparole.it/
✨ Newly released: the 《我想聽你說 2 Popcorn Talks 2》 fun conversation card deck
China EVs & More is back with a special Thanksgiving episode — and the China auto world did not take the week off.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Jonathan Siddharth is Founder and CEO of Turing, one of the fastest-growing AI companies advancing frontier models. Jonathan has led the company to an astonishing $350M ARR with just $225M raised, while keeping the company profitable. A Stanford-trained AI scientist, Jonathan previously helped pioneer natural language search at Powerset, which was acquired by Microsoft.
AGENDA:
03:35 Data, Compute, Algorithms: What is Most Abundant? What is Lacking Most?
09:18 What Does No One Know About AI's Data Requirements That Everyone Should?
17:05 The Biggest Challenges Enterprises Have with AI Adoption
20:38 Why 99% of Knowledge Work Will Be Gone in 10 Years
27:12 How Will Data-Driven Feedback Loops Replace Technology as the Moat
36:08 Who Wins the Data Labelling Market? Who Loses?
38:23 Is Revenue BS in Data Labelling? Are Players Calling GMV Revenue?
45:20 Why is SaaS Dead in a World of AI?
51:23 Will the Phone be the Primary User Interface to an AI World?
57:07 Quickfire Round
The latest technological innovations and the impact of artificial intelligence on our daily lives: Marble, a tool that transforms images and text into 3D environments, and advances in video generation with Kandinsky V5 and Time to Move. Also in the news: music creation with Suno and Udio, the challenges AI poses in education, and the dangers of chatbots for young people. We also discuss the departure of Mastodon's CEO and the complications facing TSMC and Intel in semiconductor production. Patreon YouTube Discord
AI models of the week
It's a whole world! Marble, PAN and SIMA 2.
Video: Kandinsky 5, Natex, Time to Move and Univa.
Suno is no longer underwater.
AIs are bad at maths… or maybe not.
Revenge of the literature students: AIs let themselves be dazed by poetry.
Do LLMs melt your brain? Short videos do too!
Turing 2.0: if it's polite, it's an AI.
Even the creator of Mastodon is fed up with social networks…
The heavy stuff
SC25: for power users.
Manufacturing in the US is expensive; Intel is betting on packaging.
Terahertz radars are coming to cars.
40 shades of teddy bear: GPT4o, that obsessive…
Talk to my hand: RFID pushes ahead.
Your mission, should you choose to accept it: find a USB socket.
Participants
A show prepared by Guillaume Poggiaspalla
Presented by Guillaume Vendé
In Episode 227, Tu and Lei break down a massive week in the global EV industry — one where China's innovation pace keeps accelerating while Western automakers scramble to respond. Xiaomi's YU7 officially outsells the Tesla Model Y in October, marking a symbolic shift in China's most competitive EV segment. Meanwhile, Tesla's domestic sales slump to 26,000, signaling that aggressive price cuts and financing perks may not be enough as Chinese challengers tighten the pressure. The hosts also unpack XPeng's viral AI Day, featuring the “Iron Lady” humanoid robot, new L4-capable RoboTaxi prototypes, the Turing chip's rising importance, and XPeng's “physical AI” strategy — positioning the company as a vertically integrated mobility+AI platform rather than just an automaker. On the U.S. side, GM sparks headlines after reportedly urging suppliers to “de-China” their supply chains by 2027 — a massive, risky reshoring effort that could reshape cost structures across North America. Tu and Lei discuss the feasibility and geopolitical backdrop, including the Nexperia crisis, ICE tariff pressures, and USMCA uncertainty.
_____________________
They also hit:
Greg Davis and Pastor Danny Duvall, along with Bishop Jim Lowe, host this roundtable. They begin the hour with thoughts on Thanksgiving and gratitude. They then transition to drawing thoughts from the artist Jelly Roll and his song "Save Me", and the best way to evangelize. The second hour begins with the group discussing the issues of religious liberty and Christian persecution in Nigeria, and the West's susceptibility to Islamic growth. The conversation then turns to threats related to digital ID and digital currency. Duvall ends with a discussion of what it means to be human and the Turing test.
MAGAISM / BRO CULTURE / CRONYISM / CEOs RULE! / ELONISM
'I am deeply ashamed': Larry Summers to step back from public commitments after new Epstein emails
Senator Elizabeth Warren: “[Summers] cannot be trusted to advise our nation's politicians, policymakers and institutions — or teach a generation of students at Harvard or anywhere else.”
And an unidentified Trump administration official told Politico that companies and organizations should end their association with Summers.
The former Treasury secretary, along with Bill Clinton and the Democratic megadonor Reid Hoffman, is among the Democratic figures whom the Justice Department is investigating over Epstein ties — at President Trump's behest.
Economist Warns That Trump's Investments in the Tech Industry Could Crash the Whole Economy
Italian economist Mariana Mazzucato, a University College London economics professor: “I think the kind of capitalism Trump has is crony capitalism.” “I would describe crony capitalism as Mafia-like. You're showing your upper hand. You're handing out favors to some. But then divide and conquer. Picking and choosing without a particular strategy.”
She argues that the Intel deal is poorly designed because it doesn't have any conditions to incentivize the company to build new products, while the government simply acts as a passive investor.
All SNAP recipients required to reapply as Trump admin cracks down on fraud: 'Business as usual is over'
SEC to Allow Companies to Block Shareholder Proposals
The U.S. Securities and Exchange Commission (SEC) announced that it will not express opinions on requests from companies asking to exclude shareholder proposals from their proxy voting materials during the 2025-2026 proxy season, effectively allowing companies to avoid voting on issues proposed by investors, such as climate, sustainability and diversity, at annual meetings.
Elon Musk is set to make more than every U.S. elementary teacher combined
Washington Post: Bezos
White nationalist talking points and racial pseudoscience: welcome to Elon Musk's Grokipedia: World's richest person wanted to ‘purge' propaganda from Wikipedia, so he created a compendium of racist disinformation
STAKEHOLDERS RULE!
‘We've probably made housing unaffordable for a whole generation of Americans': top real-estate CEO on the real cost of Covid economic firefighting
Sean Dobson, CEO of The Amherst Group
Ford CEO says he has 5,000 open mechanic jobs with 6-figure salaries from the shortage of manually skilled workers: ‘We are in trouble in our country'
CEO James D. Farley, Jr.: $24,861,866; 253:1
Ford Family Executive Chair William Clay Ford, Jr.: $20,379,912; 207:1
$519,845 for personal use of aircraft
$1,394,538 for security
Only 16% of Large Companies on Track for Net Zero
Missing at U.N.'s Climate Meeting: American Executives
WOKE DATA
Disney ditches 'diversity' and 'DEI' in business report for the first time since 2019
Nearly half of LGBTQ characters AXED from TV amid Trump-era rollback of woke DEI initiatives
According to Deadline, around 41% of the 489 LGBTQ characters that were on the small screen this year will not return due to series cancellations and endings.
AI
Jeff Bezos is putting $6.2 billion—and himself as co-CEO—behind a new AI startup
Project Prometheus
Vik Bajaj: Stanford
OpenAI accused of 'consistent and dangerous pattern' of rushing products to market that are 'inherently unsafe or lacking in needed guardrails'
The nonprofit Public Citizen is now demanding OpenAI withdraw Sora 2 from the public, writing in a letter to the company and CEO Sam Altman that the app's hasty release, so that it could launch ahead of competitors, shows a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails.”
Sora 2, the letter says, shows a “reckless disregard” for product safety, as well as people's rights to their own likeness and the stability of democracy.
OpenAI didn't immediately respond to a request for comment.
In the age of AI, CEOs quietly signal that layoffs are a badge of honor
People Are Having AI “Children” With Their AI Partners
Eli Lilly CEO says he has 'at least 1 or 2 AIs running' during every meeting he's in
David Ricks: Ricks said he doesn't like OpenAI's ChatGPT for science-related questions — "It's too verbal," he said. Instead, he prefers Anthropic's Claude and xAI's Grok. Still, he has to be careful to watch for hallucinations, an issue the frontier model companies are still trying to tamp down.
The CEO of $2.2 billion AI company Turing can't live without ChatGPT, swears by his Kindle, and has only taken 2 weeks of vacation in 7 years
Jonathan Siddharth; Stanford
STUPID
The CFO Centre names Natalie Garfield as new CFO
Heinz goes all-in on Thanksgiving leftovers with squeezable turkey gravy
Today on Silicon Carne, we talk about:
Neo The Home Robot: The future is here! Or is it? A new AI robot dubbed NEO has been announced, but is it just an Indian guy using VR to control him? Zoom Court Fails: A cop shows up to Zoom court with no pants. Are we going to just let this happen!? Andy Richter and Adrien Skye: Andy Richter survives another night during the Halloween episode! How far can the Fandy's go!? We also get an alert that the Adrien Skye listening party is going on! THE BEAR!, FUCK YOU, WATCH THIS!, MICHAEL JACKSON!, THRILLER!, HALLOWEEN!, THIRSTY THURSDAY!, THRILLER DANCE!, ZOMBIE MJ!, MEMORIES!, FAVORITE PART!, MAKING OF VHS!, RICK BAKER!, BEHIND THE SCENES!, COREYWEEN HANG!, HOME ROBOT!, TECHNOLOGY!, NEO!, PUPPET!, REMOTELY RUN!, INDIA!, MAID!, VR HEADSET!, GOGGLES!, SCARY ACCENT!, NORWEGIAN!, ROBOT ATTACKED!, EGG CHARGER!, GAY AS HELL!, GLEB!, FIRE A GUN!, VIOLENCE!, POSSESSOR!, ADS!, REDWOOD AI!, DR ANDY SKILONAKIS!, TURING!, ZOOM COURT FAILS!, DRAG RACING!, DISORDERLY!, BLUNT!, DANCING WITH THE STARS!, ANDY!, HALLOWEEN NIGHT!, STILL ALIVE!, FOG!, DANCING!, HIDE!, FANDY!, GO HOME!, MORMON WIVES!, BABY!, DANCE!, RSV!, HOSPITAL!, HALLOWEEN!, GOTHSPEL!, CIRCUS QUEEN!, PLASTIC STANDARDS!, VAMPIRES BALLAD!, I'M DOWN!, PITTS OF HELL!, BETTER NOW!, LIVE!, CLUB!, BAR! You can find the videos from this episode at our Discord RIGHT HERE!
What if tomorrow's AI stares back with hidden inner life - and we're blind to it? Philosopher Eric Schwitzgebel shares his new thesis from AI and Consciousness (2026): an unbreakable "fog" of uncertainty means we'll build legions of disputably conscious machines before we ever know. Join Dr Eric Schwitzgebel and Dr Tevin Naidu as they unpack why consciousness detection is doomed, how mimicry fools us, and what humble action looks like in the void. Send detailed comments on the manuscript by Dec 15 to receive a signed hard copy of AI and Consciousness plus your name in print! Your voice matters. Email all comments to "eschwitz" at domain "ucr.edu" for a copy of this paper, or click this link: https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AIConsciousness.htm Mind. Matter. Meaning. Subscribe for philosophy that hits home. Is AI conscious yet? Debate below!
Lockpicks, Microsoft, CoPhish, Atlas, Turing, ForumTroll, PKD, even Kilgore Trout, the Amazing Aaran Leyland, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-524
Why is 3I/ATLAS so strange? This comet presents a whole collection of anomalies: it has a tail of material flowing toward the Sun, it is unusually poor in water, it releases nickel with no trace of iron, and it has been described as a rubble pile. Today on Pulso I'll walk you through the photos obtained from Mars by ESA, NASA's informational silence during its key passage, and the unsettling trove of images classified until 2099. We'll also explore astrophysicist Avi Loeb's hypothesis about whether this could be a cosmic beacon or a galactic Turing test. Join me as we work out whether this is merely the third interstellar visitor, or something completely new that will reshape our understanding of the cosmos.
00:48 How hunger, hormones and aggression interact in mouse brains
Researchers have uncovered the neural mechanisms that underlie an aggressive behaviour in mice prompted by hunger and hormonal state. Virgin female mice can become aggressive towards mouse pups when they are food deprived, but it seems that the relevant amounts of pregnancy hormones, oestrogen and progesterone, also played a role. By investigating the neurons involved, neuroscientists showed how hunger and hormones are integrated by the brain to lead to aggressive behaviour. This could help researchers understand more about how multiple stimuli are interpreted by the brain, something much harder to study than single-stimulus effects.
Research Article: Cao et al.
09:35 Research Highlights
The overlooked environmental costs of wastewater treatment facilities — plus, an ancient communal hunting system that lasted well into the eighteenth century.
Research Highlight: Wastewater treatment produces surprising amounts of greenhouse gases
Research Highlight: Andean peoples hunted and gathered long after they embraced farming
11:53 What generative AI could mean for higher education
Around the world, universities and students are scrambling to adapt to the use of generative AI tools such as ChatGPT. And while there is optimism that these tools could improve education, there are also concerns about the ways they could stifle independent, critical thought. We hear about the studies trying to unpick the potential impact of this new technology.
News Feature: Universities are embracing AI: will students get smarter or stop thinking?
21:26 Briefing Chat
A blood test for Alzheimer's, and what should be the next ‘test' for AI after the Turing test?
Nature: Blood tests are now approved for Alzheimer's: how accurate are they?
Nature: AI language models killed the Turing test: do we even need a replacement?
Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday. Hosted on Acast. See acast.com/privacy for more information.
Dr. Neil deGrasse Tyson is an astrophysicist, author, and science communicator known for making complex cosmic concepts accessible to the public. He serves as the director of the Hayden Planetarium at the American Museum of Natural History in New York City. Through his books, television appearances, and the podcast StarTalk, Dr. Tyson inspires curiosity about the universe and promotes scientific literacy worldwide. His engaging storytelling and wit have made him one of the most recognizable voices in modern science.
In our conversation we discuss:
(01:08) Mysteries that keep Neil deGrasse Tyson up at night
(03:47) How scientists learn to ask the right questions
(07:14) Philosophy's role and value in modern science
(10:43) Why philosophers stopped influencing physical sciences
(12:54) Misinterpretations of Neil's comments on philosophy
(17:03) Becoming famous and public accountability
(21:07) How scientists stay connected and exchange ideas
(24:51) Choosing between teaching, science, and public outreach
(28:14) Current research interests and unsolved astrophysics questions
(30:43) Impact of private space travel on science
(35:16) Relationship between science, politics, and the military
(36:30) Why Elon Musk won't reach Mars first
(37:49) Future of space tourism and affordability
(41:00) Expanding human presence across the solar system
(47:35) Genetic engineering, ethics, and human evolution
(49:27) Global cooperation and genetic regulation challenges
(52:29) Human–AI integration and Neuralink skepticism
(55:01) Future of robots and human labor
(58:07) Early AI history and the Turing test
(1:02:21) Skills young people need in the AI era
(1:04:09) Teaching curiosity and lifelong learning
(1:07:04) How Neil developed communication and teaching skills
(1:09:37) Creating meaning and purpose in life
(1:11:01) How Neil wants to be remembered
(1:12:53) StarTalk, books, and inspiring public curiosity
Learn more about Dr. Neil: https://en.wikipedia.org/wiki/Neil_deGrasse_Tyson
Watch full episodes on: https://www.youtube.com/@seankim
Connect on IG: https://instagram.com/heyseankim
Most organizations approaching AI are struggling, running pilot projects that go nowhere. The common assumption is that the technology itself is flawed, overhyped, or too complex. However, the employers that are succeeding with AI have discovered something different: the technology isn't the problem; the real barriers are human. Employee resistance, fear about job security, and the inertia of doing things the way they've always been done. What makes the difference between AI projects that fail and those that transform how teams actually work? My guest this week is Taylor Bradley, VP Talent Strategy & Success at Turing. In our conversation, Taylor shares how he built grassroots adoption in his team by starting with simple prompt libraries, his framework for deciding what should be automated, what should be augmented, and what should be left to humans, and why every AI project is really a human change management project in disguise.
In the interview, we discuss:
Unlocking AI's full potential
The most significant challenges when implementing AI in HR and TA
Why AI pilots fail
AI projects are actually human change management projects
The inertia of the status quo
Talent use cases
When to augment and when to automate
Breaking down roles into tasks
Surprising measures of success
How HR and TA roles need to evolve
Considering adverse impacts
What will the future look like?
Follow this podcast on Apple Podcasts. Follow this podcast on Spotify.
AI therapists and caregivers. Digital tutors and advisors and friends. Artificial lovers. Griefbots trained to imitate dead loved ones. Welcome to the bustling world of AI-powered chatbots. This was once the stuff of science fiction, but it's becoming just the stuff of everyday life. What will these systems do to our society, to our relationships, to our social skills and motivations? Are these bots destined to leave us hollowed out, socially stunted, screen-addicted, and wary of good-old-fashioned, in-the-flesh human interaction? Or could they actually be harnessed for good? My guest today is Dr. Henry Shevlin. Henry is a philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence (CFI) at Cambridge University. In a series of recent papers, Henry has been exploring this brave new world of "social AI" and its philosophical, ethical, and psychological dimensions. Here, Henry and I sketch the current landscape of social AI—from dedicated platforms like Replika and CharacterAI to the more subtly social uses of ChatGPT and Claude. We consider several tragic cases that have recently rocketed these kinds of services into public awareness. We talk about what's changed about AI systems—quite recently—that's now made them capable of sustained relationships. We linger on the possible risks of social AI and, perhaps less obviously, on the possible benefits. And we consider the prospects for regulation. Along the way, Henry and I also talk about his 81-year-old father, his teenage self, and, of course, the kids these days; we consider whether social AI, in its potential harms, is more like social media or more like violent video games; we talk about "deskilling" and its opposite "upskilling"; and we of course take stock of a certain elephant in the room. Alright friends, this is a fun one. We've been wanting to explore this dawning age of social AI for some time. And we finally found, in Henry, the right person to do it with. Enjoy!
Notes
3:00 – The piece in The Guardian—'It's time to prepare for AI personhood'—by Jacy Reece Anthis.
5:00 – The Replika subreddit.
9:30 – News coverage of recent research on the bedside manner of AI systems.
10:30 – For a recent paper on AI by the philosopher Ophelia Deroy, see here.
11:30 – For some of Dr. Shevlin's recent writing about "social AI", see here and here.
13:30 – OpenAI's recent report, 'How People Use ChatGPT'.
16:30 – For examples of popular media coverage of recent (tragic) cases involving chatbots, see here, here, here, and here.
21:00 – The paper by Rose Guingrich and Michael Graziano on how users describe their relationships with chatbots.
24:00 – The precise quote by Mark Twain is: “Nothing so needs reforming as other people's habits.”
25:30 – The classic paper on Mary's room by Frank Jackson.
27:00 – Dr. Shevlin has also worked on questions about animal minds (e.g., here), as well as a number of issues in AI beyond “social AI” (e.g., here, here).
30:00 – The classic essay by Isaiah Berlin on hedgehogs and foxes.
32:00 – The classic paper on ELIZA, introduced by Joseph Weizenbaum in 1966. A version of ELIZA that you can interact with. For work by Sherry Turkle, see here.
34:00 – Dr. Shevlin's recent paper about the “anthropomimetic turn” in contemporary AI.
41:00 – For recent work on whether current chatbots pass a version of the Turing test, see here.
45:00 – Ted Chiang's story, ‘The Lifecycle of Software Objects,' was re-published as part of his collection of short fiction, Exhalation.
46:00 – For Dr. Shevlin's recent writing on machine consciousness, see here.
48:00 – For more on the possibility of consciousness in borderline cases (like AI systems), see our past episodes here and here.
52:00 – The study on whether people attribute consciousness to LLMs.
54:30 – A recent paper on griefbots by scholars at the University of Cambridge. A popular article about the phenomenon.
55:30 – A blogpost describing the so-called DigiDan experiment.
1:00:00 – Some of the potentially positive social qualities of AIs are discussed in this essay by Paul Bloom.
1:19:30 – For more on Iain Banks' Culture series, see here.
1:20:30 – A popular article on the phenomenon of hikikomori.
Recommendations
The Oxford Intersections: AI in Society collection
The new podcast, Our Lives with Bots
Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Bluesky (@manymindspod.bsky.social).
This episode is the intro to a special project by The AI Fundamentalists' hosts and friends. We hope you're ready for a metaphysics mini-series to explore what thinking and reasoning really mean and how those definitions should shape AI research. Join us for thought-provoking discussions as we tackle basic questions: What is metaphysics and its relevance to AI? What constitutes reality? What defines thinking? How do we understand time? And perhaps most importantly, should AI systems attempt to "think," or are we approaching the entire concept incorrectly?
Show notes:
• Why metaphysics matters for AI foundations
• Definitions of thinking from peers and what they imply
• Mixture-of-experts, ranking, and the illusion of reasoning
• Turing test limits versus deliberation and causality
• Towers of Hanoi, agentic workflows, and brittle stepwise reasoning
• Math, context, and multi-component system failures
• Proposed plan for the series and areas to explore
• Invitation for resources, critiques, and future guests
We hope you enjoy this philosophical journey to examine the intersection of ancient philosophical questions and cutting-edge technology.
What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
My guest today is Jonathan Siddharth, co-founder and CEO of Turing. Jonathan incubated Turing in Foundation Capital's Palo Alto office in 2018. Since then, it has grown into a multi-billion dollar company that powers nearly every frontier AI lab: OpenAI, Anthropic, Google, Meta, Microsoft, and others. If you've seen a breakthrough in how AI reasons or codes, odds are Turing had a hand in it. Jonathan has a provocative thesis: within three years, every white-collar job, including the CEO's, will be automated. In this episode, we talk about what it will take to reach artificial superintelligence, why this goal matters, and how the agentic era will fundamentally reshape work. We also dig into his founder journey: what he learned from his first startup, Rover, how he built Turing from day one, and how his leadership style has evolved to emphasize speed, intensity, and staying in the details. Jonathan has been at the edge of AI for years, and he has the rare ability to translate what's happening at the frontier into lessons for builders today. Hope you enjoy the conversation!
Chapters:
00:00 Cold open
00:02:06 Jonathan's backstory: his experience at Stanford
00:06:37 Lessons from Rover
00:08:39 Early Turing: incubation at Foundation Capital and finding PMF
00:13:52 Why Turing took off
00:15:12 Evolving from developer cloud to AGI partner for frontier labs
00:16:49 How coding improved reasoning - and why Turing became essential
00:20:38 Founder lessons: building org speed and intensity
00:23:33 Why work-life balance is a false dichotomy
00:24:17 Daily standups, flat orgs, and Formula One culture
00:25:15 Confrontational energy and Frank Slootman's influence
00:29:50 Positioning Turing as “Switzerland” in the AI arms race
00:34:32 The four pillars of superintelligence: multimodality, reasoning, tool use, coding
00:37:39 From copilots to agents: the 100x improvement
00:40:00 Why enterprise hasn't had its “ChatGPT moment” yet
00:43:09 Jonathan's thoughts on RL gyms, algorithmic techniques, and evals
00:46:32 The blurring line between model providers and AI apps
00:47:35 Why defensibility depends on proprietary data and evals
00:55:20 RL gyms: how enterprises train agents in simulated environments
00:57:39 Underhyped: $30T of white-collar work will be automated
Who decides first: you, or your mind? And what really distinguishes us from an artificial intelligence that keeps on advancing? In this conversation with Alberto, we start from the Turing test to open up a bigger question: consciousness. From there, we dwell on that micro-instant between impulse and response, and explore how to cultivate an attitude of radical humility in order to move from impulse to dialogue; from "being right" to truly listening. I hope this conversation helps you inhabit the instant between stimulus and response, regain control, and remember what is essentially human in the age of AI.
See omnystudio.com/listener for privacy information.