From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:

* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design, where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI

* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng

* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell

* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps

00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. And Vibhu's special co-host, Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understated one? Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot that people think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day.

Myra Deng [00:02:37]: Wow, nice. Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things.
And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first like ten employees; now we're above 40. But still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is, you know, pretty switch-hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, more of like flexing some of the kind of MLE and developer skills as well.

Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.

Myra Deng [00:03:58]: Yeah, yeah. So I also started as, and still am, a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this.

Shawn Wang [00:03:59]: Which we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.

Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time as head of product. I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems, and how does that then translate into a platform that's repeatable, or a product, and working across, you know, the engineering and research teams to make that happen, and also communicating to the world: what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.

Shawn Wang [00:05:01]: I love the "what is" things, because that's a very crisp starting point for people coming to a field. Let's do a fun thing. Vibhu, why don't you try tackling what is interpretability, and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in the model, in the internals. There are different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you're trying to solve that from there. You can do stuff like activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening, and, you know, how can we adjust what's happening in the model internals? How'd I do?

Mark Bissell [00:06:12]: That was really good. I think that was great.
I think it's also kind of a minefield where, if you ask 50 people who quote unquote work in interp what is interpretability, you'll probably get 50 different answers. And to some extent also like where Goodfire sits in the space. I think that we're an AI research company above all else, and interpretability is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even broader, as almost like the science of deep learning, and just taking a not-black-box approach to kind of any part of the AI development life cycle. Whether that means using interp for like data curation while you're training your model, or for understanding what happened during post-training, or for, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that, you know, are sort of also part of the fundraise, around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models, as opposed to actually using this to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?

Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking like rollouts, or like, you know, having different variations of a model that you can tweak with your steering. Yeah.

Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on like Twitter or whatever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, one of the biggest controversies of last year was 4o GlazeGate.

Myra Deng: I've never heard of GlazeGate. I didn't know that was what it was called.

Shawn Wang: The 4o one, they called it that on the blog post, and I was like, wait, did OpenAI officially use that term? And I'm like, that's funny. But like, yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?

Myra Deng [00:08:51]: I think so. Yeah. Yeah.

Mark Bissell [00:08:53]: I think that's certainly one of the use cases. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has.
So, you know, one of the things that we've been looking at, another like common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like you look at Qwen or, um, R1 and they have sort of like this CCP bias.

Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.

Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like, lots of post-training tasks where you'd want to be able to do that. Whether it's unlearning a certain behavior, or, you know, some of the other kind of cases where this comes up is, are you familiar with like the grokking behavior?

Shawn Wang [00:10:09]: I mean, I know the machine learning term of grokking. Yeah.

Mark Bissell [00:10:09]: Sort of this like double descent idea, of having a model that is able to learn a generalizing solution. As opposed to, even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible, you know, ways to do that.

Shawn Wang [00:10:41]: Can interp solve the double descent problem? Depends, I guess, on how you look at it. Okay. So I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. But like, if you actually can interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.

Mark Bissell [00:11:11]: I think that's certainly like the domain of problems that we're looking to get at.

Shawn Wang [00:11:15]: Yeah. To me, like, double descent is like the biggest thing in like ML research, where like, if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything where like anything levels off, like.

Vibhu Sapra [00:11:30]: I mean, also tangentially there's like, okay, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic Fellows program, where basically you can have hidden biases in a model. And as you distill down, or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of: okay, if we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.

Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle.
Nobody knows what's going on. Right. Like, subliminal learning is just an insane concept when you think about it. Right. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And there are mathematical explanations that you can get into, but. I mean.

Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There should be.

Mark Bissell [00:12:40]: According to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.

Shawn Wang [00:12:49]: But I mean, I think that's a cheat code because there's not enough compute. But like, if you believe in like platonic representation, like, probably it will transfer across different models as well.

Mark Bissell [00:13:00]: Oh, you think so? I think of it more as a statistical artifact of models initialized from the same seed, sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space, and then sort of doing this distillation, yeah, like, it pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like, you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is, when you guys work at an interp lab, how do you decide what to work on, and what's kind of the thought process? Right. Because we can ramble for hours: okay, I want to know this, I want to know that. But like, how do you concretely, like, you know, what's the workflow? Okay, there's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.

Myra Deng [00:14:07]: It's a really good question. I feel like we've always, from the very beginning of the company, thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are, and then taking that back and trying to apply the current state of the art to those problems, and then seeing where they fall down, basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising, and so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models.
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so we went back to the drawing board and we're like, how do we make that not the case, and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined, actually Ekdeep and Atticus are like steering experts, and have spent a lot of time trying to figure out, like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda, and then like hill climb on both of those at the same time.

Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click when you drop hints like, we found some problems with SAEs. Okay, what are they? You know, and then we can go into the demo. Yeah.

Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful, or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think didn't an SAE-based approach actually prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.
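[Ed.: to make the comparison concrete, here is a minimal sketch of the two setups being discussed: a linear probe trained directly on raw activations versus the same probe trained on SAE feature activations. All shapes, names, and the random stand-in data are illustrative, not Goodfire's actual pipeline.]

```python
import torch
import torch.nn as nn

# Stand-in data: X holds residual-stream activations for N examples
# (e.g. mean-pooled over tokens); y holds binary labels for the
# behavior to detect (hallucination, harmful intent, PII, ...).
N, d_model, d_sae = 4096, 1024, 16384
X = torch.randn(N, d_model)
y = torch.randint(0, 2, (N,)).float()

# Setup 1: linear probe trained directly on raw activations.
raw_probe = nn.Linear(d_model, 1)

# Setup 2: the same probe trained on SAE features. W_enc / b_enc stand
# in for the encoder of a pre-trained sparse autoencoder (frozen).
W_enc, b_enc = torch.randn(d_model, d_sae), torch.zeros(d_sae)
def sae_features(x):
    return torch.relu(x @ W_enc + b_enc)  # sparse, nonnegative features
sae_probe = nn.Linear(d_sae, 1)

def train(probe, feats):
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.binary_cross_entropy_with_logits(
            probe(feats).squeeze(-1), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# The surprise Myra describes: on real detection tasks, Setup 1 often
# matches or beats Setup 2, despite SAEs being the "cleaner" space.
print(train(raw_probe, X), train(sae_probe, sae_features(X)))
```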
Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get another chance: like, what is the overall, like, what is Rakuten's usage or production usage?

Myra Deng [00:18:25]: Yeah. So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII, so that they don't route private user information to downstream providers. And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten but with other people, around how we can help with potentially training and customization use cases as well. Yeah.

Shawn Wang [00:19:03]: And for those who don't know, Rakuten is like, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.

Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of what it looks like to deploy things in practice, that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem: they were encountering things like synthetic-to-real transfer of methods. So they couldn't train probes, classifiers, things like that, on actual customer data of PII. So what they had to do is use synthetic data sets, and then hope that that transfers out of domain to real data sets. So we can evaluate performance on the real data sets, but not train on customer PII. So that, right off the bat, is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that had us pulling our hair out. And then also, for a lot of tasks, you might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem, to just sort of get like general results, where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, sort of speaking to what that looks like in practice, a lot of assumptions end up breaking. And that was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.

Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is a lot of these methods are very efficient, right? So you're just looking at a model's internals itself, compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self-detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.

Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency, really.
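[Ed.: a rough sketch of what token-level PII scrubbing with a probe can look like. The probe reads hidden states the host model computes anyway, so the marginal cost is one small matrix multiply; the function and variable names here are hypothetical, not Rakuten's or Goodfire's actual system.]

```python
import torch
import torch.nn as nn

d_model = 1024
# Trained offline (e.g. on synthetic PII, per the constraints above),
# then frozen and run at inference time.
pii_probe = nn.Linear(d_model, 1)

@torch.no_grad()
def scrub_pii(tokens: list[str], hidden_states: torch.Tensor,
              threshold: float = 0.5) -> list[str]:
    """Token-level classification: score each token's hidden state and
    mask the flagged ones, instead of rejecting a whole sentence."""
    scores = torch.sigmoid(pii_probe(hidden_states)).squeeze(-1)
    return [("[MASK]" if s > threshold else t)
            for t, s in zip(tokens, scores.tolist())]

# Example with dummy activations for a 4-token sequence,
# captured during the forward pass the model already runs:
h = torch.randn(4, d_model)  # [seq_len, d_model]
print(scrub_pii(["My", "phone", "is", "555-0100"], h))
```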
Shawn Wang [00:21:17]: Excellent. You have the steering demos lined up, so we'll just kind of see what you got. I don't actually know if this is like the latest, latest, or like an alpha thing.

Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for the technology, so you can see the steering in action. Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability-related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back at the office. Well, hopefully. It should be.

Shawn Wang: Um, that's too much to run on that Mac.

Mark Bissell: Yeah. I think it takes a full, like, H100 node. I think you can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang codebase that we've been working on. So I'm going to tell it: hey, this SGLang codebase is slow, I think there's a bug, can you try to figure it out? It's a big codebase, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.

Mark Bissell [00:23:33]: It's searching for any bugs. Feature ID 43205.

Shawn Wang [00:23:38]: Yeah.

Mark Bissell [00:23:38]: 20, 30, 40. So, let me, uh, this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing "this code base is massive, for real." We're going to start seeing Kimi transition as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.

Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools and stuff. It's purely sort of its demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one, you can make it more concise. Um, the types of programming languages it uses. But yeah, as we're seeing it come in, pretty good outputs.

Shawn Wang [00:24:43]: "Scheduler code is actually wild."

Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."
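[Ed.: the standard recipe behind this kind of demo is to add a scaled feature direction to the residual stream during the forward pass. A minimal sketch, with a small open model standing in for Kimi K2; in practice the direction would come from an SAE decoder rather than random init, and the layer choices are illustrative.]

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for Kimi K2
tok = AutoTokenizer.from_pretrained("gpt2")

d_model = model.config.hidden_size
# In practice: the decoder column of the chosen SAE feature (e.g. the
# "Gen Z slang" feature from the demo), unit-normalized.
direction = torch.nn.functional.normalize(torch.randn(d_model), dim=0)
scale = 8.0  # steering strength, tuned empirically (the "20, 30, 40" knob)

def steer(module, inputs, output):
    # Add the scaled direction to every token's residual-stream state.
    hidden = output[0] + scale * direction.to(output[0].dtype)
    return (hidden,) + output[1:]

# Hook one or more layers; the demo toggled the feature across several.
handles = [model.transformer.h[i].register_forward_hook(steer) for i in (6, 8)]

inputs = tok("This codebase is slow. Find the bug.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0]))

for h in handles:
    h.remove()  # un-steer: the edit lives in activations, not weights
```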
Vibhu Sapra [00:24:53]: What's the process of training an SAE on this? Or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, this like autonomous interp, something about how agents for interp are different than like coding agents. I don't know, while this is spewing up: how do we find feature 43205?

Mark Bissell [00:25:15]: Yeah. So in this case, um, our platform that we've been building out for a long time now supports all the sort of classic out-of-the-box interp techniques that you might want to have, like SAE training, probing, things of that kind. I'd say the techniques for like vanilla SAEs are pretty well established now, where you take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then, yeah, it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's TopK SAEs, Batch TopK SAEs, um, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them to actually understand that this is a Gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is: look at all of your input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang, you know, the kind of thing like, oh, I'm in this, I'm in this era. Um, and, um, so, you know, you could have a human go through all 43,000 concepts and label them, or you can have LLMs auto-label them at scale.
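[Ed.: a compressed sketch of that pipeline: a TopK SAE trained on gathered activations, plus the max-activating-examples trick for labeling. Dimensions, hyperparameters, and the random stand-in activations are all illustrative.]

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """One of the 'vanilla' variants Mark mentions: keep only the k
    largest feature activations per input, zero out the rest."""
    def __init__(self, d_model=1024, d_sae=65536, k=32):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)
        self.k = k

    def forward(self, x):
        f = torch.relu(self.enc(x))
        top = torch.topk(f, self.k, dim=-1)
        feats = torch.zeros_like(f).scatter(-1, top.indices, top.values)
        return self.dec(feats), feats

# Training: stream activations gathered from the target model through
# the SAE, minimizing reconstruction error (top-k enforces sparsity).
sae, acts = TopKSAE(), torch.randn(512, 1024)  # acts: one stand-in batch
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
recon, feats = sae(acts)
loss = (recon - acts).pow(2).mean()
loss.backward(); opt.step()

# Labeling: for one feature, pull the examples where it fires hardest
# and have a human (or an LLM judge) name the pattern they share.
feature_id = 43205  # the "Gen Z slang" feature from the demo
top_examples = feats[:, feature_id].topk(10).indices
```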
Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?

Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem, and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that; hallucination is harder. But we've seen that models internally have some awareness of like uncertainty, or some sort of like user-pleasing behavior, that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.

Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.

Mark Bissell [00:27:51]: Although, so part of what I like about that question is, there are SAE-based approaches that might like help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out is the one you care about.

Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.

Mark Bissell [00:28:36]: Of course. Right. Yeah. So there are, you know, sort of problems with like feature splitting and feature absorption. And then there's the off-target effects, right? Ideally, you would want to be very precise, where if you reduce the hallucination feature, suddenly maybe your model can't write creatively anymore. And maybe you don't want that; you want to still stop it from hallucinating facts and figures.

Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any other things that you want to highlight, or any other interesting features you want to show?

Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research, and yeah, other teams as well. But it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real-time steering of a trillion parameter model would have sounded...

Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.

Vibhu Sapra [00:29:38]: I think it's an interesting one, TBD on what the actual like production use case would be on that, like the real-time editing. It's like, that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like, people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter? Like, is it a parameter we just kind of leave as a default and don't use? So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting, when to do what?

Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher, just can't say enough amazing things about Ekdeep. He actually has a paper, as well as some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive neuroscience, Bayesian framework, but basically how you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even like get quantitative about the magnitude of steering you would need to do to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper.

Shawn Wang: Are you saying steering is less powerful than prompting?

Mark Bissell: More like you can almost write a formula that tells you how to convert between the two of them.
Myra Deng [00:31:20]: And so like formally equivalent, actually, in the limit. Right.

Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks. I don't know, have you seen the stuff where you can do like many-shot jailbreaking? You like flood the context with examples of the behavior. Anthropic put out that paper.

Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.

Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is you can like predict the number of examples that you will need to put in there in order to jailbreak the model, by doing steering experiments and using this sort of like equivalence mapping.

Shawn Wang: That's cool. That's really cool.

Mark Bissell: It's very neat. Yeah.

Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back-rationalize that this makes sense, because, you know, what context is, is basically just, you know, it updates the KV cache, kind of, and then every next-token inference is still like, you know, the sum of everything: all the weights plus all the context up to that point. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers, like you did. So it's like not exactly equivalent.

Mark Bissell [00:32:33]: Right, right. There's sort of, you need to get precise about, yeah, like how you sort of define steering and like how you're modeling the setup. But yeah, I've got the paper pulled up here. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow, Dan Wurgaft, who are doing fellowships at Goodfire, and Ekdeep's the final author there.

Myra Deng [00:32:59]: I think actually, to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like, imagine if you could adapt your model to be, you know, an expert legal reasoner, like in almost real time, like very quickly and efficiently, using human feedback, or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like, we have heard a lot of people actually interested in fine-tuning and RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we're looking into.

Shawn Wang [00:34:06]: Yeah. I never thought about that. So Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?

Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like, are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? Yeah. Just maybe one way of putting it.

Shawn Wang: I like that analogy.
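[Ed.: the pipes-versus-water distinction as a toy sketch: a rank-one LoRA edits a weight matrix itself, while steering perturbs the activations flowing through untouched weights. Shapes and data are purely illustrative.]

```python
import torch

d = 1024
W = torch.randn(d, d)   # a frozen base-model weight matrix ("the pipes")
x = torch.randn(d)      # an activation flowing through it ("the water")

# Rank-one LoRA: modify the pipes. The learned outer product A @ B^T
# is added alongside (or merged into) the weights themselves.
A, B = torch.randn(d, 1), torch.randn(d, 1)
lora_out = x @ (W + A @ B.T)   # parameter-space edit

# Activation steering: modify the water. Weights stay untouched; a
# feature direction is added to the activation at inference time.
v = torch.randn(d)             # steering vector
steer_out = (x + v) @ W        # activation-space edit
```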
Mark Bissell [00:34:44]: That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And just the fact that, like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that that was. Like, there's no intentionality really in...

Shawn Wang [00:35:06]: It's just data, right? The only thing in our control is what data we feed in.

Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: you know, he has a couple of young kids, and he talks about, like, what if I could only teach my kids how to be good people by giving them cookies, or like, you know, giving them a slap on the wrist if they do something wrong? Like, not telling them why it was wrong, or like what they should have done differently, or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, you know, it's sample inefficient. There's, you know, what do they say? It's like slurping supervision through a straw. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that is internalized. And, you know, steering is an inference-time way of sort of getting at that idea, but ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.

Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can watch that. There wasn't too much of a connect there, but it's still something, you know, something they want to push for down the line.

Mark Bissell [00:36:33]: It can be useful for all of the above. Like, there are certainly post-hoc use cases where it doesn't need to touch that.

Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like, I would say, if you're interested in getting into research, mech interp is one of the most approachable fields, right? A lot of this, train an SAE, train a probe, this stuff, like the budget for this, one, there's already a lot done. There's a lot of open source work. You guys have done some too.

Shawn Wang [00:37:04]: There's like notebooks from the Gemma team or Neel Nanda, like, this is how you do it. Just step through the notebook.

Vibhu Sapra [00:37:09]: Even if you're, like, not even technical with any of this, you can still make progress there. You can look at different activations. But, uh, if you do want to get into training this stuff, correct me if I'm wrong, it's like in the thousands of dollars, not even; it's not that high scale. And then same with, like, you know, applying it, doing it for post-training, all this stuff is fairly cheap on the scale of: okay, I want to get into like model training, I don't have compute for like, you know, pre-training stuff. So it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to do with, okay, I want a product, I want to solve this. Like, there's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for, like, what's open questions, what's open work that you'd either open collaboration on, or, like, you'd just like to see solved. Or just, you know, for people listening that want to get into mech interp, because people always talk about it: what are the things they should check out? Or, of course, you know, join you guys as well. I'm sure you're hiring.

Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think, to your point, it's been really, really inspiring to see, I think, a lot of young people getting interested in interpretability. Actually not just young people, also like scientists who have been, you know, experts in physics for many years, and in biology or things like this, um, transitioning into interp, because the barrier to entry is, you know, in some ways low, and there's a lot of information out there and ways to get started. There's this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like, exciting the field is, how fast it's moving, how quick it is to get started, and things like that.

Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open source mech interp Slack channel. People are always posting questions, and folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open problems paper is a really good one.

Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? ML Alignment Theory Scholars? It's like the...

Vibhu Sapra [00:39:40]: Normally summer internship style.

Myra Deng [00:39:42]: Yeah, but they've been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it's great for anyone who is transitioning into interpretability. There's a couple other fellows programs. We do one, as well as Anthropic. And so those are great places to get started if anyone is interested.

Mark Bissell [00:40:03]: Also, I think it's been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it does scale up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first ever mech interp track at AI Engineer Europe, because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet. It's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.

Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.

Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the interp room. One, like, old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered a paper last week, it's like two unknown authors, not many citations, but, you know, you can make a lot of meaningful work there. Yeah.

Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: interp for code. I think it's like an abnormally important field. The conspiracy theory from two years ago, when the first SAE work came out of Anthropic, was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn't that the dream? Like, you know, like, but basically, I guess, maybe, why is it funny? Like, if it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it's funny because we feel there's some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To, like, be a legal reasoner seems like a huge stretch. Yeah. And I don't know if that will get there this way. Yeah.

Myra Deng [00:42:36]: I think, um, I will say we are announcing something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we've run into again and again. Like, we don't want to be in a world where steering is only useful for like stylistic things. That's definitely not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...

Shawn Wang [00:43:07]: And is this an emergent property of scale as well?

Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and reduce noise across, you know, large amounts of data. But I also think we think that there's ways to do things much more effectively, um, even at scale. So, like, actually learning exactly what you want from the data, and not learning things that you don't want exhibited in the data. So we're not, like, anti-scale, but we are also realizing that scale alone is not going to get us there. It's not going to get us to the type of AI development that we want to be at in the future, as these models get more powerful and get deployed in all these sorts of, like, mission critical contexts. The current life cycle of training and deploying and evaluating models is, to us, like, deeply broken, and has opportunities to improve. So, um, more to come on that very, very soon.

Mark Bissell [00:44:02]: And I think that that's a use case, basically, or maybe just like a proof point, that these concepts do exist.
Like, if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained sort of peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter.

Myra Deng [00:44:30]: Yeah, exactly. There were like bad code features.

Vibhu Sapra [00:44:33]: I've got it pulled up. Yeah. Just coincidentally, as you guys are talking.

Shawn Wang [00:44:35]: This is like, this is exactly it.

Vibhu Sapra [00:44:38]: There's like specifically a code error feature that activates, and they show, you know, it's not typo detection. It's typos in code, it's not typical typos. And, you know, you can see it clearly activates where there's something wrong in code. And they have like malicious code, code error. They have a whole bunch of, you know, broken-down fine-grained sub-features.

Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just, you know, have a few different rollouts with all these things turned off and on and whatever. And then, you know, that's synthetic data you can kind of post-train on. Yeah.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is, just saying, you know. They do the real hard work.

Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was like...

Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? Like, yeah, you guys opened yours. DeepMind has opened a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There's a lot of resources that, you know, we can probably share for people that want to get involved.

Shawn Wang [00:45:41]: Yeah. And special shout out to, like, Neuronpedia as well. Yes. Like, yeah, amazing piece of work to visualize those things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys. We haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up, for AI for science, just because, like, it's such a huge investment category and also I'm like less qualified to do it, but we actually have bio PhDs to cover that, which is great. But I need to just kind of recap your work, maybe on the Evo 2 stuff, and then building forward.

Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation, I think another kind of interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem. And it's sort of like bidirectional communication is the goal there. So what we've been talking about, with intentional design of models and, you know, steering, but also more advanced techniques, is having humans impart our desires and control into models and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, but down the line, you know, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's the other direction of the interface. And some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models you would like to know whether they're focusing on the biologically relevant things you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then there are also the instances where they are superhuman, where maybe they understand elements of the human genome that we don't have names for, discoveries they've made that we don't know about. That's a big goal. And we're already seeing that: we're partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.
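The spurious-correlate check Mark describes is often prototyped with cheap linear probes: if a small classifier can read a confound (say, ancestry) straight out of the activations that feed a "biological" prediction, that's a flag worth chasing. The sketch below uses synthetic data and illustrative shapes; in practice `acts` would be hidden states cached from a genomics foundation model and the labels would come from sample metadata.

```python
# Minimal probing sketch for a spurious-correlate check. All data here is
# synthetic and all shapes are illustrative; this is the general recipe,
# not any particular lab's tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 512                        # examples x activation width (illustrative)
ancestry = rng.integers(0, 2, size=n)   # hypothetical confound label per sample

# Synthetic "activations": mostly noise, plus a direction correlated with
# the confound, standing in for cached hidden states from a real model.
confound_dir = rng.normal(size=d)
acts = rng.normal(size=(n, d)) + 1.5 * np.outer(ancestry, confound_dir)

X_tr, X_te, y_tr, y_te = train_test_split(acts, ancestry, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"probe AUC for confound: {auc:.3f}")  # near 1.0 => confound is linearly present
```

A complementary check is to train a second probe on the label you actually care about: if the confound is easier to decode than the biology, the model may well be leaning on the shortcut.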
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential in the research. And it's very interesting how it's basically the same as language models, just with a different underlying data set. The techniques are exactly the same; there's no change, basically.

Mark Bissell [00:48:59]: Yeah. And even in other domains, right? In robotics, a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down.

Vibhu Sapra [00:49:15]: We have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, all of that too. So there's a push from both sides. But one of the things about mech interp is that you're a little more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just for basic understanding, if we're trusting these systems to make claims, we want to know why and what's going on.

Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation of why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think that being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, actually is a really, really big unlock for scientific discovery. You've seen a lot of startups say that they're going to accelerate scientific discovery, and I feel like we actually are doing that through our interp techniques. And almost by accident: we got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.

Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about the domain.

Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning is everywhere, right? It's just a general insight. Probably applies to finance too, I think, which would be fun given your history. I don't know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.

Vibhu Sapra [00:51:40]: Awesome. And for those who should reach out: you're obviously the experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering stuff? On the research side more so: are there ideal design partners, customers, things like that?

Myra Deng [00:52:03]: Yeah, I can talk about maybe the non-life-sciences side, and then I'm curious to hear from you on life sciences. We're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, as we call it, pixel space. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword
We don't know if they understand the difference between a pod and a cast. The Becks have arrived on your podcast feed with a brand-new episode on the 2016 film Arrival, based on the 1998 story "Story of Your Life" by Ted Chiang. In this episode, they talk about having pets, how they met, and obviously the film and story in question. Enjoy! ko-fi.com/soonmajorpod linktr.ee/soonmajorpod Next Episode Homework: In the Heat of the Night (1967)
A mathematician discovers that the foundations of her discipline are inconsistent with reality. What should be a professional achievement drives her to the edge of the abyss. In "Division by Zero," Ted Chiang shows how the collapse of an idea can drag the soul into the darkness of existence. More info from Bibliotequeando. Learn more about your ad choices. Visit megaphone.fm/adchoices
Real Life We kick things off with a round of Real Life check-ins, because apparently none of us are allowed to simply exist quietly. Ben opens with Bedroom Talk with Ben Lawless, which is exactly as awkward, candid, and vaguely alarming as it sounds. No further clarification is offered, nor requested. Devon reports that snowboarding with his kids was actually great. No injuries, no disasters—just genuine fun on the mountain, which frankly feels suspicious but we'll allow it. He also shares that he's been practicing guitar for an hour a day, really locking in on technique. That means working through BPMs, tightening up tapping and sweeping, and grinding away at the Blackened solo like a man possessed. Progress is being made, fingers are suffering, and discipline is winning (for now). Steven talks about Hawaii, which lands somewhere between "kinda cool" and "why did we do this to ourselves." The travel was awful, the resort was pretty great, and Moana… apparently isn't Moana anymore? We don't resolve this, but we are confident Disney has a lot to answer for. Ben also brings in Blippo+, a surreal streaming service that feels like channel surfing through an alternate universe. If you're curious (or concerned), you can explore it directly at https://blippo.plus/ or read more context over at The A.V. Club: https://www.avclub.com/blippo-makes-art-out-of-channel-surfing. Future or Now In Future or Now, Ben highlights a sobering study out of Japan linking poor oral health in older adults to higher mortality rates and increased need for long-term care. The research, conducted by Osaka Metropolitan University and the Institute of Science Tokyo, suggests brushing and dental care might matter more than we'd like to admit. You can read the full breakdown via The Japan Times: https://www.japantimes.co.jp/news/2026/01/05/japan/science-health/elderly-dental-hygiene/ Devon follows up with This Week in Space, reacting to the news that the U.S. has effectively killed NASA's Mars Sample Return Mission. What happens now? Confusion, disappointment, and a lot of unanswered questions. The full story is covered here: https://www.iflscience.com/us-just-killed-nasas-mars-sample-return-mission-so-what-happens-now-82148 Book Club This week's Book Club pick is "The Janitor in Space" by Amber Sparks, a short story that sparked very different reactions around the table. Steven enjoyed it, Ben didn't care for it at all, and Devon—rather than choosing a side—asked ChatGPT to turn it into a song, which may be the most Devon response possible. You can read the story yourself here: https://americanshortfiction.org/janitor-space/ Looking ahead, next week's selection is Ted Chiang's "What's Expected of Us", originally published in Nature (July 7, 2005). We'll be digging into free will, determinism, and the uncomfortable feeling that the universe might already know what you're about to do. As always, thanks for listening, reading, and continuing to question whether brushing your teeth might actually save your life.
Science fiction heroes aren't usually humanities professors, but Arrival (2016) is the exception to that rule. Amy Adams stars as Dr. Louise Banks, who may be the only person on Earth who can figure out what a pair of mysterious aliens are trying to say. Today on AirSpace, Matt and Emily discuss the film, its source material (Ted Chiang's novella Story of Your Life), linguistics, non-linear time, extraterrestrials, explosions, geopolitical tension, oat milk, and other mysteries of the universe. The transcript for this episode is at s.si.edu/airspaces11e4 Subscribe to our monthly newsletter at s.si.edu/airspacenewsletter.AirSpace is made possible with the generous support of Lockheed Martin.
Abu and Obssa complete their read-through of Exhalation by Ted Chiang. They dive into the ninth short story in the collection, Anxiety Is the Dizziness of Freedom, and explore the horrifying and empowering cost of true free will. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 (00:00) Intro (02:38) Summary (07:17) Our Impressions (17:03) Free Will is Dizzying (21:22) Are Your Decisions Meaningless? (24:01) The Horrible Price of Free Will (27:08) Grappling with Infinite Possibilities (32:52) You Have the Power to Change (36:36) Final Ratings (39:24) What We're Reading Next Learn more about your ad choices. Visit megaphone.fm/adchoices
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the eighth short story in the collection, Omphalos, and explore the philosophy of existentialism. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 (00:00) Intro (02:56) Summary (08:49) Our Impressions (15:43) A Small Nitpick (17:59) What is Existentialism? (19:46) Jean-Paul Sartre and Simone de Beauvoir (21:17) Core Tenets of Existentialism (23:05) Critiques of Existentialism (25:40) Are We Existentialists? (29:34) The Absurd Part of Existentialism (33:31) What We're Reading Next Learn more about your ad choices. Visit megaphone.fm/adchoices
Leading writers and researchers will discuss and explain the issues that arise in writing with the entrance of large language models into this space. Are they useful for fiction and nonfiction writers, and in what ways? Can their use be considered ethical? About the Speakers Nina Beguš is a researcher at UC Berkeley working in artificial humanities, an interdisciplinary approach she designed to understand the cultural, ethical and philosophical dimensions of AI. Focusing on language and literature, her work foregrounds our imaginary around AI. She lives in the West Coast's only residential college, Bowles Hall, with her husband, three sons, and 188 students. James Yu is a speculative fiction writer and entrepreneur. He is the co-founder of Sudowrite, the AI assistant for creative writers. His writing explores how technology mediates our everyday experiences. He lives in San Francisco with his wife, two kids, and a growing number of AIs (none sentient yet.). Ted Chiang is an American science fiction writer. His work has won four Nebula awards, four Hugo awards, six Locus awards, and the PEN Malamud Award. His novella “Story of Your Life” was the basis of the film Arrival (2016). His most recent short story collection, Exhalation (Knopf, 2019), was listed as one of the Top Ten Books of 2019 by The New York Times and was included in former President Barack Obama's 2019 reading list. In 2023, he was named one of Time magazine's 100 Most Influential People in AI. A Technology & Society Member-led Forum program. Forums at the Club are organized and run by volunteer programmers who are members of The Commonwealth Club, and they cover a diverse range of topics. Learn more about our Forums. OrganizerGerald Anthony Harris Learn more about your ad choices. Visit megaphone.fm/adchoices
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the seventh short story in the collection, The Great Silence, and explore how close we are to communicating with animals. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
This month, we read stories about AI.• The Weird (ed. Ann & Jeff VanderMeer)• Dangerous Visions (ed. Harlan Ellison) • The Complete John Silence (by Algernon Blackwood) • Patreon (Free Bonus Episodes) • Email us at genrepodcast@gmail.com
Hello everyone!! We're talking about a great sci-fi film, going into language, history and time, as we discuss some of Ted Chiang's short story "Story of Your Life", but primarily the Denis Villeneuve film "Arrival"! Why are the aliens here? How can a changed understanding of reality help us confront the horrors of capitalism? How is time's linearity a simplistic way to confront capital's victories over past, present and future? Come along as we answer those questions and more, in our most Posadist episode yet! Enjoy! If you can and are interested in early episodes and our bonus content, soon to be plenty more, check out our Patreon! https://www.patreon.com/leftpage Also! If you're not there already, feel free to join our Discord! https://discord.gg/J2wgG3yrPN Intro Music: I Would Never Have To Know - Samurai Drive. From the Album: Samurai Drive ℗ 2020 Daniel Riederauer Outro Music: Don't Leave! · El-Funoun Palestinian Popular Dance Troupe. From the Album: Zareef ℗ 2006 El-Funoun Hosted on Acast. See acast.com/privacy for more information.
The year is coming to an end and, as you'd expect, artificial intelligence came up in conversation again. This time I invited Ana Freitas for a chat about AI, but with a premise: no empty hype and no end-of-the-world predictions. We wanted to understand, with our feet on the ground, what actually changes in people's lives now that these tools are everywhere. Ana brought one of those ideas that takes up permanent residence in your head: AI, at bottom, has the power to pull everyone toward the average. Someone who is bad at something can look better with the machine's help, and someone who is excellent, if they don't know how to use it well, can end up just... okay. We also talked about a deeper fear, not of the technology itself but of what capitalism will do with it, picking up on the conversation I had this year with Ted Chiang. And about how, maybe because of that, we only know how to imagine dystopian futures full of robots and algorithms, never utopias. We need to dream better. We need to fight for that future. It was one of those episodes that start out talking about tools and end in bar-table philosophy, complete with Black Mirror, Spore and simulations of the universe. In other words: very much my thing. At the end, we also talk about IA em curso (future-proof knowledge), a community we created to keep this conversation going in a practical, constant way, and explain why it isn't just a course. There's biweekly mentoring with me and Ana, a Telegram discussion group, annotated resources and plenty of good conversation for anyone who doesn't just want to understand AI but wants to learn to live with it. On top of that, we show what terrible salespeople we are, unable to play coach or sell magic formulas and certainties. That's our way; I hope to see you there. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit boanoiteinternet.com.br/subscribe
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the sixth short story in the collection, The Truth of Fact, the Truth of Feeling, and explore the ways technology has always helped us remember our lives. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode we dive into the fabulous 2016 film Arrival, directed by Denis Villeneuve and based on a short story by Ted Chiang. We discuss the contextual nature of language, seeing the big picture, education and learning, fractal reality, the importance of failure, the importance of excellence, foresight, choosing the hard path, and having an existential meta-awareness of death all the time. I am stoked because, for the first time in three years, David is back on the show! Thanks for listening!
Today I meet illustrator and graphic designer Peter Hoffmann at a drawing course in Hannover. Over the lunch break we talk about his creative work, why routines are so important (and what that has to do with David Lynch), his two drawing books, science fiction, and more. 00:00:00 Intro 00:01:18 The Find, Part 1 00:04:15 The Find, Part 2 00:10:10 Steady Shoutout 00:12:35 The Interview with Peter Hoffmann 00:54:13 Outro —— Shownotes: 1. My guest is Peter Hoffmann, https://www.instagram.com/peter_thesecond/ Peter's books: see here. Science-fiction literature: Peter's author recommendation is Ted Chiang (e.g. the short story behind the film "Arrival"). Roberta's author recommendations are Philip K. Dick and Margaret Atwood. 2. Support «Der kreative Flow» on Steady with a VIP membership and receive exclusive bonuses, become part of the Flow community & take part in community events, https://steady.de/derkreativeflow/ 3. YouTube channel, https://www.youtube.com/c/derkreativeflow 4. Send a voice message for the podcast, https://www.speakpipe.com/derkreativeflow 5. Flow-Letter: subscribe now 6. Books, https://robertabergmann.shop 7. Kreativ-Stammtisch (free for Steady supporters), book a ticket: https://shop.derkreativeflow.de/s/robertabergmann/kreativ-stammtisch-online 8. Flow blog, https://www.derkreativeflowblog.de 9. Shop for courses & tickets: https://shop.derkreativeflow.de 10. The book "Kreative Identität & Selbsterkenntnis" (Creative Identity & Self-Knowledge): buy here. Credits Podcast: Der kreative Flow, 2025. Idea, design, editing & host: Roberta Bergmann, https://www.robertabergmann.de Sounds & mix: Peter M. Glantz, https://www.glantz.info All info at: https://www.derkreativeflow.de
Lightbreakers by Aja Gabel is a genre-defying story about how far one would go to save the person they love. Aja joins us to talk about time travel, faith vs fear, screenwriting, contemporary art, Marfa and more with host Miwa Messer. This episode of Poured Over was hosted by Miwa Messer and mixed by Harry Liang. New episodes land Tuesdays and Thursdays (with occasional Saturdays) here and on your favorite podcast app. Featured Books (Episode): Lightbreakers by Aja Gabel The Ensemble by Aja Gabel Exhalation by Ted Chiang Stories of Your Life and Others by Ted Chiang
More scares in theaters with "Predator: Badlands," a horror film directed by Dan Trachtenberg, starring Elle Fanning and Dimitrius Schuster-Koloamatangi. We discuss it with critic and genre expert Emanuele Di Nicola. With our Boris Sollazzo we talk about "Un semplice incidente," directed by Jafar Panahi, with Vahid Mobasseri, Ebrahim Azizi and Mariam Afshari, and about "Anemone," directed by Ronan Day-Lewis, which marks Daniel Day-Lewis's return to the screen after a long absence, alongside Sean Bean and Samantha Morton. As co-artistic director of the Linea d'Ombra Festival, Boris also previews the most interesting screenings and events on the program in Salerno through November 15, 2025. Actress Claudia Gerini presents "Fuori la verità," a film directed by Davide Minella in which she stars alongside Claudio Amendola and Claudia Pandolfi. The Trieste Science+Fiction Festival has just ended, and we couldn't miss it. It was an opportunity to take stock of these film genres, with interviews with director Gabriele Mainetti (jury president), science-fiction writer Ted Chiang, and festival director Alan Jones.
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the fifth short story in the collection, Dacey's Patent Automatic Nanny, and explore the challenges of raising children in a technological world. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the fourth short story in the collection, The Lifecycle of Software Objects, and explore the realities of forming relationships with artificial intelligence. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 (00:00) Intro (03:01) Summary (10:52) Our Impressions (16:40) Abu's Favorite Moment (18:25) Weakest Story in the Collection (20:38) The Icky Truth About Marco (23:15) Can We Have Relationships with AI? (28:22) The Case for AI Relationships (35:13) The Case Against AI Relationships (42:41) There Are No Shortcuts to Relationships (46:47) Outro Learn more about your ad choices. Visit megaphone.fm/adchoices
There's scientifically plausible time travel, fantasy / sci fi time travel, 'traditional' time travel centered around real history, people trapped in time loops, time travel romance, and we even threw in a couple of great time travel kids books - something here for every reader to love!As we were editing the episode we realized we forgot an incredible, recent time travel book from the list that we'd meant to include - it's one we've mentioned in a previous episode. Drop us a line on discord if you think you know what we forgot (or if you've got a time travel book you love that you think should have been on the list)!Join the Hugonauts book club on discord to tell us about your favorite time travel booksOr you can watch the episode on YouTube if you prefer videoThis episode is sponsored by Maya: Seed Takes Root, which you can get here on kickstarterIf you want to jump around, here are the timestamps for all the books we talked about: 00:00 Intro 1:03 Sponsor - MAYA: Seed Takes Root 1:34 Fantastical / far future time travel 2:04 Night Watch by Terry Pratchett 3:15 The Dark Tower series by Stephen King 4:36 Fall of Hyperion by Dan Simmons 6:10 Scientifically plausible time travel 6:50 Tau Zero by Poul Anderson 9:20 Story of Your Life by Ted Chiang 10:38 The Forever War by Joe Haldeman 12:15 Children of Time by Adrian Tchaikovsky 13:47 Looping time travel stories 14:14 The 7 1/2 Deaths of Evelyn Hardcastle by Stuart Turton 14:44 All You Need is Kill by Hiroshi Sakurazaka 17:31 Great Time Travel Kids Books 20:25 Kindred by Octavia Butler 22:09 Lightning by Dean Koontz 23:48 11/22/63 by Stephen King 25:50 The First Fifteen Lives of Harry August by Claire North 29:18 The Rise and Fall of DODO by Neal Stephenson and Nicole Galland 31:40 Time and Again by Jack Finney 35:00 The Life of Chuck by Stephen King 36:30 Slaughterhouse Five by Kurt Vonnegut 40:43 The Time Traveler's Wife by Audrey Niffenegger 44:05 Our top 3 favorite time travel books
Generative AI is the source of a lot of angst. For students, for workers, for artists. We’ll talk to Bellevue-based science fiction writer Ted Chiang about how he thinks about it as a writer. We can only make Seattle Now because listeners support us. Tap here to make a gift and keep Seattle Now in your feed. Got questions about local news or story ideas to share? We want to hear from you! Email us at seattlenow@kuow.org, leave us a voicemail at (206) 616-6746 or leave us feedback online.See omnystudio.com/listener for privacy information.
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the third short story in the collection, What's Expected of Us, and explore the existential dread of not having free will. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
AI therapists and caregivers. Digital tutors and advisors and friends. Artificial lovers. Griefbots trained to imitate dead loved ones. Welcome to the bustling world of AI-powered chatbots. This was once the stuff of science fiction, but it's becoming just the stuff of everyday life. What will these systems do to our society, to our relationships, to our social skills and motivations? Are these bots destined to leave us hollowed out, socially stunted, screen-addicted, and wary of good-old-fashioned, in-the-flesh human interaction? Or could they actually be harnessed for good? My guest today is Dr. Henry Shevlin. Henry is a philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence (CFI) at Cambridge University. In a series of recent papers, Henry has been exploring this brave new world of "social AI" and its philosophical, ethical, and psychological dimensions. Here, Henry and I sketch the current landscape of social AI—from dedicated platforms like Replika and CharacterAI to the more subtly social uses of ChatGPT and Claude. We consider several tragic cases that have recently rocketed these kinds of services into public awareness. We talk about what's changed about AI systems—quite recently—that's now made them capable of sustained relationships. We linger on the possible risks of social AI and, perhaps less obviously, on the possible benefits. And we consider the prospects for regulation. Along the way, Henry and I also talk about his 81-year-old father, his teenage self, and, of course, the kids these days; we consider whether social AI, in its potential harms, is more like social media or more like violent video games; we talk about "deskilling" and its opposite, "upskilling"; and we of course take stock of a certain elephant in the room. Alright friends, this is a fun one. We've been wanting to explore this dawning age of social AI for some time. And we finally found, in Henry, the right person to do it with. Enjoy!

Notes
3:00 – The piece in The Guardian—'It's time to prepare for AI personhood'—by Jacy Reece Anthis.
5:00 – The Replika subreddit.
9:30 – News coverage of recent research on the bedside manner of AI systems.
10:30 – For a recent paper on AI by the philosopher Ophelia Deroy, see here.
11:30 – For some of Dr. Shevlin's recent writing about "social AI", see here and here.
13:30 – OpenAI's recent report, 'How People Use ChatGPT'.
16:30 – For examples of popular media coverage of recent (tragic) cases involving chatbots, see here, here, here, and here.
21:00 – The paper by Rose Guingrich and Michael Graziano on how users describe their relationships with chatbots.
24:00 – The precise quote by Mark Twain is: "Nothing so needs reforming as other people's habits."
25:30 – The classic paper on Mary's room by Frank Jackson.
27:00 – Dr. Shevlin has also worked on questions about animal minds (e.g., here), as well as a number of issues in AI beyond "social AI" (e.g., here, here).
30:00 – The classic essay by Isaiah Berlin on hedgehogs and foxes.
32:00 – The classic paper on ELIZA, introduced by Joseph Weizenbaum in 1966. A version of ELIZA that you can interact with. For work by Sherry Turkle, see here.
34:00 – Dr. Shevlin's recent paper about the "anthropomimetic turn" in contemporary AI.
41:00 – For recent work on whether current chatbots pass a version of the Turing test, see here.
45:00 – Ted Chiang's story, 'The Lifecycle of Software Objects,' was re-published as part of his collection of short fiction, Exhalation.
46:00 – For Dr. Shevlin's recent writing on machine consciousness, see here.
48:00 – For more on the possibility of consciousness in borderline cases (like AI systems), see our past episodes here and here.
52:00 – The study on whether people attribute consciousness to LLMs.
54:30 – A recent paper on griefbots by scholars at the University of Cambridge. A popular article about the phenomenon.
55:30 – A blogpost describing the so-called DigiDan experiment.
1:00:00 – Some of the potentially positive social qualities of AIs are discussed in this essay by Paul Bloom.
1:19:30 – For more on Iain Banks' Culture series, see here.
1:20:30 – A popular article on the phenomenon of hikikomori.

Recommendations
The Oxford Intersections: AI in Society collection
The new podcast, Our Lives with Bots

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Bluesky (@manymindspod.bsky.social).
Abu and Obssa continue their read-through of Exhalation by Ted Chiang. They dive into the second short story in the collection, the titular Exhalation, and explore why humans are unique in their pursuit of meaning. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
Send us a textWelcome to Read Before Midnight, your passage into the strange and the sinister. Season 2 continues with Episode 4 – The Forgotten Place, Part One by W.H. Maxwell. Step into the Allegheny dusk, where four friends descend into a valley hiding a village that was never meant to be found.
Take in every detail, don't let a moment go unnoticed. One day it may all be a memory... From master short-story sci-fi writer Ted Chiang, today we're exploring the inspiration behind the movie Arrival, 'Story of Your Life'. Sign up for 'BTMC: Protagonist Edition', where you get EXTENDED VERSIONS of the episodes to take you even deeper into the story with more scenes, more lessons, and more of everything that makes the show what it is, as well as access to all of the Character Analysis episodes. Sign-up link below: --------------------------- Get BTMC: PROTAGONIST EDITION: https://becomingmain.supercast.com/ -- GET THE FREE NEWSLETTER: "THE SCHOOL OF PROTAGONISM" Substack: https://substack.com/@schoolofprotagonism FOLLOW BTMC FOR MORE GREAT CONTENT: Instagram: https://instagram.com/becomingmain X: https://twitter.com/becomingmain
Abu and Obssa begin their read-through of Exhalation by Ted Chiang. They dive into the first short story in the collection, The Merchant and the Alchemist's Gate, and explore the power of storytelling to change our perspective on life. Get bonus content and helpful reading materials: https://www.patreon.com/scifibookclubpod Keep the conversation going in our free Discord: https://discord.gg/bVrhwWm7j4 Watch the video version of this episode: www.youtube.com/@loreparty Keep up with this season's reading schedule: https://tinyurl.com/sfbc-season3 Learn more about your ad choices. Visit megaphone.fm/adchoices
In the first week of August I got a big gift, and it wasn't even for Father's Day. I was invited by Pina to spend 20 minutes with the American writer Ted Chiang and ask him about technology, AI, art, and why we're afraid of the future. The next day, Chiang spoke to an audience of 80 people at Casa Manioca, and there I was once again, meeting one of my greatest idols in literature, part of a new generation of sci-fi and fantasy writers who don't just tell the story of the brilliant white man who must overcome obstacles to realize his genius, a generation that also includes names like N.K. Jemisin, Robin Sloan, R.F. Kuang, Nnedi Okorafor and many other great people. (Or not so new: Chiang was born in 1967.) The conversation was the push I needed to bring Boa Noite Internet back this half of the year, and you can hear it now in all its glory... in the English language, because Chiang doesn't speak Brazilian. Fear not! Below you'll find a translation of our conversation. And the podcast doesn't just include our 20 minutes of talk, but also other conversations we had off-mic during the two days of Chiangpalooza. We talked about creativity, technology, and what makes us human (spoiler: art). In this conversation, Chiang presents professor Anna Rogers's view of the different uses of AI for writing, as "nuisance" or as "thought." This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit boanoiteinternet.com.br/subscribe
Merriam-Webster's Word of the Day for August 14, 2025 is: immutable ih-MYOO-tuh-bul adjective Immutable is a formal adjective used to describe something that is unable to be changed. // It is hardly an immutable fact that cats and dogs are sworn enemies; over the years our golden retriever has grown both fond and protective of her tabby housemate. See the entry > Examples: “... by the 1800s, naturalists like Lamarck were questioning the assumption that species were immutable; they suggested that over time organisms actually grew more complex, with the human species as the pinnacle of the process. Darwin brought these speculations into public consciousness in 1859 with On the Origin of Species, and while he emphasized that evolution branches in many directions without any predetermined goal in mind, most people came to think of evolution as a linear progression.” — Ted Chiang, LitHub.com, 6 Mar. 2025 Did you know? Immutable may describe something that is incapable of change, but the word itself—like all words—is mutable, both capable of and prone to alteration. To put a finer point on it, if language were fixed, we wouldn't have immutable itself, which required a variety of mutations of the Latin verb mutare (“to change”) to reach our tongues (or pens, keyboards, or touchscreens—oh the many permutations of communication!). Other English words that can be traced back to mutare include mutate, transmute, and commute. Which reminds us—the mutability of language makes great food for thought during one's commute.
Abu and Leo bring their Dune obsession through the alchemist's portal to Ted Chiang's incredible short story collection, Exhalation, examining the common themes in Ted's writing alongside Frank's. You be good. We love you. This episode contains SPOILERS through God Emperor of Dune, and of course, for the nine stories featured in Exhalation. Get ad-free episodes and bonus content: https://www.patreon.com/GomJabbar Say thank you with a tip: http://buymeacoffee.com/gomjabbar Watch video versions of select episodes: https://www.youtube.com/@loreparty Get yourself some custom-designed Dune swag: https://www.gomjabbarshop.com Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, a few pages of the following books will be read:The Woman Who Borrowed Memories: Selected Stories by Tove Jansson, translated from the Swedish by Thomas Teal and Silvester MazzarellaPublic Library and Other Stories by Ali SmithExhalation: Stories by Ted Chiang
Containing Matters of Mapping MemoryTimestamps:Ted Chiang background, non-spoiler discussion (0:00)spoiler summary, spoiler discussion, "Arrival" discussion (17:21)Bibliography:Brady, Amy - "Barack Obama's 2019 Summer Reading List", Chicago Review of Books, https://chireviewofbooks.com/2019/08/20/barack-obamas-2019-summer-reading-list/Grant, Gavin J. - interview with Ted Chiang, indiebound https://web.archive.org/web/20170407032812/http://www.indiebound.org/author-interviews/chiangtedMcCarron, Meghan - "The Legendary Ted Chiang on Seeing His Stories Adapted and the Ever-Expanding Popularity of SF"https://electricliterature.com/the-legendary-ted-chiang-on-seeing-his-stories-adapted-and-the-ever-expanding-popularity-of-sf/
Kirk, Jason, and Maddy bring on special guest Ray Chase (Final Fantasy XV, Play Nice) to talk about his experiences as a voice actor and his new game, Date Everything! Ray talks about how he came up with the game, the seven-year saga of building it, and how it's so much more than a romance simulator.One More Thing:Kirk: The Eyre Affair (Jasper Fforde)Maddy: I quit Polygon lolJason: Marble Hall Murders (Anthony Horowitz)LINKS:Kirk's new collection “Music For Podcasting”: https://kirkhamilton.bandcamp.com/album/music-for-podcastingKindness Coins: https://arden.itch.io/kindness-coins“Why AI Isn't Going To Make Art” by Ted Chiang for The New Yorker: https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-artTriple Click LIVE in Portland, July 11: https://albertarosetheatre.com/event/triple-click-live/alberta-rose-theatre/portland-oregon/Support Triple Click: http://maximumfun.org/joinBuy Triple Click Merch: https://maxfunstore.com/search?q=triple+click&options%5Bprefix%5D=lastJoin the Triple Click Discord: http://discord.gg/tripleclickpodTriple Click Ethics Policy: https://maximumfun.org/triple-click-ethics-policy/
This week on Sinica, I chat with Stephen Platt, historian at UMass Amherst and author, most recently, of the book The Raider: The Untold Story of a Renegade Marine and the Birth of U.S. Special Forces in World War II. Like his previous works, Autumn in the Heavenly Kingdom and Imperial Twilight, it offers a compelling narrative history of an overlooked chapter through a deeply empathetic and well-researched examination of individual lives. Please make sure to listen to the excerpt from the audiobook at the end of this podcast.04:21 - Evans Carlson: A forgotten hero07:49 - The Real Carlson vs. the constructed Carlson10:04 - The book's origin12:20 - Carlson's ideological transformation16:50 - Carlson's religious beliefs and public perception20:04 - Emerson's influence on Carlson's thinking 23:46 - Inner conflicts: Soul-searching or regret?27:15 - Carlson's relationship with President Franklin D. Roosevelt30:39 - Gung Ho Meetings: meaning, practice, and legacy33:34 - Zhu De's influence on Carlson 40:28 - Carlson's relationships with Agnes Smedley and Edgar Snow47:49 - Hopes for U.S.-China alliance 51:57 - Carlson's death and his legacy 58:01 - Lessons from CarlsonPaying it Forward: Peter Thilly, Emily MokrosRecommendations: Stephen: 11.22.63 by Stephen King; Ted Chiang (author); Otoboke Beaver (band); Book of Mormon (musical)Kaiser: Wobbler (band); The Religion by Tim Willocks; Zappa (2020)See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
From Murderbot to Sense and Sensibility, what are our favourite adaptations from books that we love? Inspired by the recent Apple adaptation of Martha Wells's sci-fi novels The Murderbot Diaries, this episode is a celebration of the world of books to film. From the joy of seeing a book that we love brought to the big screen, to the pitfalls when things don't match up to our expectations, we're considering the hits and misses, and passing on our recommendations. You'll be hearing from pod regulars Laura Potter and Phil Chaffee, plus we meet Philippa Donovan, a literary scout to the film and TV world. Philippa founded her consultancy Smart Quill to bridge the gap between agents, publishers and authors around the world. She's giving us the inside track on the world of book to film. All that, plus a peek into the future and the upcoming projects we've earmarked as ones to watch. Interview Listen to the full interview with literary scout Philippa Donovan here [TO COME] Patreon Come and listen to the episodes ad-free over on Patreon, plus The Book Club Review Weekend, join our chat threads where you'll be able to swap book recommendations with Kate and other Book Club Review listeners and if you want to come and talk books with Kate in person at the higher tier you can join the pod's monthly book club. Head to Patreon.com/thebookclubreview for all the benefits and how to sign up. Booklist All Systems Red by Martha Wells (Book 1) Game of Thrones by George R. R. Martin Artificial Condition by Martha Wells (Book 2) Room by Emma Donoghue Normal People by Sally Rooney The Horse Whisperer by Nicholas Sparks The Bridges of Madison County by Robert James Waller The Notebook by Nicholas Sparks Exhalation by Ted Chiang (the film Arrival is based on Story of Your Life) Brokeback Mountain by Annie Proulx Friday Night Lights by H. G. Bissinger Rivals by Jilly Cooper The Girl on the Train by Paula Hawkins Call Me By Your Name by André Aciman Children of Men by P. D. James Sense and Sensibility by Jane Austen Pride and Prejudice by Jane Austen Hunting and Gathering by Anna Gavalda Barn Burning by Haruki Murakami Barn Burning by William Faulkner Jonathan Strange and Mr Norrell by Susanna Clarke Fleishman is in Trouble by Taffy Brodesser-Akner Hot Milk by Deborah Levy The Friend by Sigrid Nunez People We Meet on Vacation by Emily Henry The Salt Path by Raynor Winn Everything I Know About Love and Good Material by Dolly Alderton Universality by Natasha Brown Theory and Practice by Michelle de Kretser Transcript Head over to the episode page at thebookclubreview.co.uk for a full transcript
Real Life Things kicked off with stories from Friday night's bonfire, where the nature of reality was hotly debated between toasted marshmallows. That conversation somehow spiraled into a serious (and slightly absurd) discussion about Noodles and Soba—Ben's son's pet rats—and the potential benefits of getting female rats fixed. Apparently, doing so can add about a year to their lifespan by preventing reproductive cancers, but the surgery's cost is a tough sell when you're in what Ben called “debt paydown mode.” Devon floated the idea of unscrupulous “rat hustlers” faking the procedure, which—frankly—feels like a dark Netflix documentary waiting to happen. From there, it was a short hop to a conversation about whether rats lay eggs (they don't), Jurassic Park's “life finds a way,” and then straight into tearing apart Gremlins logic. What even is “midnight,” anyway? Local time? Greenwich Mean? Galactic zenith? And why are we trusting a kid instead of the old shopkeeper? Gremlins may now officially live in the “science fantasy/biological fiction” corner of the canon. Saturday brought gaming with their friend Greg. They played Relic Blade, where Devon managed to escort a yak to safety despite Steven's attempts at sabotage. Greg used a clever trick involving a D20 and gravity to determine movement direction, which frankly should be in the rulebook. They also played Marvel Dice Throne, where Devon's Wolverine got obliterated almost immediately thanks to poor positioning and cruel dice. Then came Living Well, a minimalist dice game with retro 70s-style art and some satisfying ability upgrades. Plans to play Arcs got shelved after a medical emergency—Nicole was hit hard by the heat and ended up needing CPR at the hospital (despite having a pulse and breathing, which… yeah, it was a weird night). She's recovering now. Future or Now TV-wise, the gang wrapped up Season 4 of Love, Death & Robots—with highlights including a talking cat, an occult bomber mission, and gang warfare against colossal babies. Over on Amazon Prime, they watched the Secret Level take on Pac-Man, which was surprisingly grim and humanoid-heavy. Ben and his son also dove into Scott Pilgrim territory, rewatching the movie and starting Scott Pilgrim Takes Off, which quickly turns into a clever alternate universe story that's fun, stylish, and charming enough to inspire a trip through the graphic novels. Ben gave a thumbs-up to the newest season of Black Mirror, calling one episode a bit conceptually broken but championing another as a "new Callister." Book Club In Book Club, the crew dug into “Liking What You See: A Documentary” by Ted Chiang, from Stories of Your Life and Others. Framed as a mockumentary, the story centers on Caliagnosia—a reversible condition that disables facial beauty perception. The ethical and social ramifications are explored through interviews and propaganda, making the story feel eerily real. It raises questions about freedom, superficiality, advertising, and the influence of unseen tech on our minds. Tamara's personal journey through switching Cali off and on again added a human element to the philosophical questions. Everyone agreed: it was a banger of a story. Next up for Book Club: the first three chapters of A Psalm for the Wild-Built by Becky Chambers. Get reading!
Ted Chiang proposes an unsettling idea: what if artificial intelligence grew up like a child, with care, time, and affection? In this episode I analyze The Lifecycle of Software Objects; it isn't just a story, it's a warning about the human bond with technologies that are already among us.
Real Life Roundup Let's address the elephant not in the room: Devon is dead. Well, not dead-dead. Just birthday-visit-family-IRL-dead. We pour one out for our absent co-host, and prepare for his resurrection next week. Meanwhile, Steven has been watching robots get wild. The Wild Robot, that is. The new animated flick has dropped (IMDB link), and Steven's verdict is in: heartwarming vibes, metal clanking emotions, and just enough kid-friendly existentialism to make you question whether your Roomba has feelings. Also, did you know Black Adam shows up in DC League of Super Pets? Steven does. And he's not okay about it. Then came Doom. And then came… more Doom. One minute Steven's a casual fan, next he's elbows-deep in lore breakdowns and watching two-hour YouTube essays on timeline chaos. Marines killing demons across dimensions? Say less. Just hand him the BFG and back away slowly. Oh—and he's forging now. He didn't elaborate. Just forging. Like, swords? Friendships? The future? Who knows. Steven contains multitudes. Ben, on the other hand, has been diving into his subconscious with dream journaling. The result? Vivid, borderline cinematic dreamscapes. Not terrifying at all. He's also been getting deep with the Waking Up app, based on the book by Sam Harris. (Here's the app link). Ben reports that it's good for mindfulness, bad for avoiding personal epiphanies. Use at your own risk. Future or Now Ben introduces us to Space to Bark, a bizarre, short dungeon crawler where you play as a first-person Dogman navigating an underground labyrinth. Created by ComputerJames, it features: Bark-based controls ([SPACE] to BARK!) Wobbly hand-drawn dog sprites Combat! Puzzles! Dogmen lore! Dogman95 isn't just a pup with a dream—he's a legend in training, guided by the sacred Dogmaiden. This is the kind of weird internet treasure we live for. Hat tip to Web Curios for digging this one up. Devon, once again, is astral projecting or off the grid. No one really knows. Steven had… nothing. Just an existential stare. Book Club (but not really) This week's book club has been canceled due to lack of effort. Blame Devon. Blame the Void. Blame our over-scheduled lives. Either way, we didn't read anything this week, and we're not sorry. Next week, however, we're diving into “Liking What You See: A Documentary” from Stories of Your Life and Others by Ted Chiang. It's a short story about beauty, perception, and what happens when you turn off the part of your brain that notices appearances. It's Chiang, so expect deep thoughts and possible feelings. That's it from us! Come back next week for more co-host resurrection, dream logic, robotic feelings, and maybe even a book. If you like what we do, bark into the void or support us on Patreon. Your choice.
Real Life Devon went full medieval this week with a trip to a Renaissance Fair—this one featuring permanent structures that actually looked “authentic” instead of slapped together by ye olde hot glue. There were swinging rides, wooden horses, and some legit jousting. Unfortunately, the real fantasy was thinking the kids would have fun. Big downer energy. Steven is gearing up for an Arizona trip but had to make a sudden detour into Best Buy territory after his TV gave up the ghost. On the plus side, Andor continues to be amazing and makes up for any consumer electronics woes. (It really is still that good.) Ben has seen Labyrinth (have you?), and he's here for the dream logic and David Bowie's entire vibe. Also thrown into the cinematic blender: The Island and Cliffhanger. We're now seeking out more films where geological or man-made features are basically the co-stars. Let us know if you have one. Oh, and Ben also saw the Slate all-electric pickup truck, which looked like something out of Black Mirror. Meanwhile, TVs just… work now? What a time to be alive. Future or Now Time for some spicy Star Wars takes. We got into it over which trilogy was better: the Prequels or the Sequels. Episode IX (The Rise of Skywalker) got roasted—Devon called it "the worst." Ben leaned sequel-side, arguing they're better than the prequels overall. The breakdown went something like: Prequels: bad films, good plots Sequels: good films, bad plots There were also complaints about Starkiller Base, which feels like someone said “What if Death Star, but more?” But then there's Andor, which everyone agrees is just pure excellence. So Star Wars can still be good when they let writers write. Our rankings for maximum judgment: Devon's list: The Phantom Menace, The Force Awakens, Rise of Skywalker, Attack of the Clones, The Last Jedi, Revenge of the Sith Ben's list: Rise of Skywalker, The Phantom Menace, Attack of the Clones, Revenge of the Sith, The Force Awakens, The Last Jedi Your move, Internet.
Returning guest Rachel Armstrong joins Joe to discuss the film Arrival. Starring Amy Adams as Louise Banks, Arrival is an adaptation of Ted Chiang's short story "Story of Your Life." The contemplative film explores issues of language, aliens, and …
Author Anna Sortino stops by the podcast to discuss Ted Chiang's 'Story of Your Life.' https://annasortino.com/ https://chireviewofbooks.com/ https://www.storystudiochicago.org/
"Incomprehensible guttural noises"

The Heptapods

Arrival is as difficult a movie to discuss in the limited space of our show notes as it is a truly great work of modern science fiction filmmaking. It's also virtually impossible to discuss without spoilers. Heck, we had trouble cramming our discussion of the central concepts of the film into a single episode.

One of director Denis Villeneuve's crowning cinematic achievements (and almost certainly the thing that made us all realize that he might be the only director who would be able to get Dune right on screen), Arrival is an alien invasion movie unlike any other, one in which the humans don't cope with our new and strange-looking neighbors with aggression, but rather by using science and reasoning to understand and communicate with them. Imagine that!

You might fancy yourself a wiseacre and suggest the very notion of science and reasoning "does not fly" given the state of the world these days, but let's set that cynicism aside for the moment and get at the heart of this week's topic. Because in order to understand the way these aliens (the heptapods, not to be confused with Hakeem's ongoing Planet of the Cephalopods pitch) communicate via bizarre and smoky glyphs, humans are able to change their perception of time itself. You've heard of "perception determines reality," so get ready for "language affects perception, which thus helps determine reality." And if that sounds confusing, don't worry, because you've got Dr. Hakeem Oluseyi and Tamara Krinsky to hold your smoky hand (limb?) and walk you through it (forwards, backwards, and perhaps both at once). All of this and more is explored on a special, extra-sized episode of Does it Fly?... https://youtu.be/K_Duabt4f1s?si=9MGhHmj22EatyFQ8

SUGGESTED VIEWING

You mean you haven't seen Arrival? And you're watching and/or listening to this show? What's wrong with you! Go watch one of the most beautiful sci-fi movies of the last 25 years and THEN come back and hang out with us.

FURTHER READING

Do you want to delve a little deeper into the facts, concepts, and stories Hakeem and Tamara referenced in today's episode? Of course you do!

Story of Your Life
Arrival is based on a short story by Ted Chiang, called "Story of Your Life," which won the 2000 Nebula Award for Best Novella. It's available in a collection of Chiang's short stories, Stories of Your Life and Others.

Relativity in Arrival
Also known as the Sapir-Whorf Hypothesis, based on the work of Edward Sapir and Benjamin Lee Whorf, but actually first stated as such by Harry Hoijer in 1954. To quote Hoijer (via the Stanford Encyclopedia of Philosophy, which has the most comprehensive explanation of this that we've been able to find): "language functions, not simply as a device for reporting experience, but also, and more significantly, as a way of defining experience for its speakers." Arrival takes that to the next level by showing how it could define how we experience time itself! It also incorporates elements of the Many-Worlds theory, which we discussed in our Back to the Future episode!

For extra credit, read up on Presentism, which postulates that only the current moment we live in is actual existence, vs. Eternalism, which states that our past AND future are equally real at all times. Then go take an Advil or something.

The End of Time
No, we're not talking about whatever horrors have you doomscrolling at the moment.
It's Julian Barbour's book The End of Time: The Next Revolution in Our Understanding of the Universe, which argues that time as we know/perceive it, isn't really a thing.Speaking of time being an illusion…PsilocybinWe aren't endorsing anything, but…WANT MORE FROM DOES IT FLY?Speaking of some of the greatest sci-fi movies of the 21st Century, we'd like to remind you that Children of Men also exists and we dug into the disturbing real world implications of that movie right here.Andor season 2 is currently reminding people how good Star Wars can be when it actually has a conscience, so we took a look at the Star Wars franchise's most powerful and iconic megaweapon, the Death Star in one of our best episodes ever! FOLLOW US!Stay in the loop! Follow DoesItFly? on YouTube and TikTok and let us know what you think! Subscribe to Does It Fly? Pod: https://www.youtube.com/@doesitflypod?sub_confirmation=1And don't forget to follow Roddenberry Entertainment:Instagram: @RoddenberryOfficial Facebook: RoddenberryBluesky: @roddenberrypod.bsky.socialFor Advertising Inquiries: doesitfly@roddenberry.comCheck out the official Does it Fly? playlist, too!
This collection of short stories runs the gamut from biblical fiction to sci-fi mockumentary to "short story that inspired a very successful film named Arrival." Recurring themes include Creation, Thought, and Perception. Pretty heavy stuff! But Chiang tackles it all with creativity and flair.

This episode is sponsored by Squarespace. Go to squarespace.com/overdue for 10% off your first purchase of a website or domain.

Our theme music was composed by Nick Lerangis.

Follow @overduepod on Instagram and Bluesky

Advertise on Overdue

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Producer Dan Levine recalls the making of 2016's iconic Arrival. From the moment he read Ted Chiang's Story of Your Life, Dan knew he had to find a way to bring it to the screen. Director Denis Villeneuve, an incredible talent but still relatively unknown at the time, was the perfect choice. Producers had to wait until Denis wrapped Sicario, but things moved quickly once Amy Adams came on board and suggested Jeremy Renner as her co-star. Production went smoothly, although the material presented endless challenges for Denis and the producers. The scariest moment came after the finished film was screened for the studio: they did not like it and wanted rewrites and a reshoot to change the ending. Dan knew the film worked and fought for the ending, and the film went on to garner 8 Oscar nominations. Hosted on Acast. See acast.com/privacy for more information.
Who doesn’t love a fresh take on a classic story? “The Merchant and the Alchemist’s Gate,” from Ted Chiang’s 2019 collection of short stories “Exhalation,” is one of our favorite time travel stories. Follow Jason: twitter.com/netw3rk Follow Rosie: IG & Letterboxd Follow X-Ray Vision on Instagram Join the X-Ray Vision Discord See omnystudio.com/listener for privacy information.
Today, we take you inside Ted Chiang's Public Lecture on AI and Art, cover Princeton's High-Definition Images of the Baby Universe, and finish out with a behind-the-scenes look at the ‘Prince's puzzle-making process with Jack Noymer.

Jack Noymer's Puzzle: https://crossword.dailyprincetonian.com/
Do you want to know the key to the Narnian universe? Today, on Mythmakers, Julia Golding and Jacob Rennaker take a quick tour around the seven heavens as they discuss C.S. Lewis's book The Discarded Image, as well as the Medieval model, Michael Ward's groundbreaking study Planet Narnia, and so much more. What other scientific models have inspired writers, and where would it be best to live within a Medieval universe? Join the conversation as we find out!

Among the books mentioned is Ted Chiang’s Stories of Your Life and Others, available at: https://www.panmacmillan.com/authors/ted-chiang/stories-of-your-life-and-others/9781035038596 as well as Cixin Liu’s The Three-Body Problem: https://torpublishinggroup.com/the-three-body-problem/

(00:05) CS Lewis and the Discarded Image
(16:51) CS Lewis and Science
(25:22) Planetary Imagery in Narnia
(37:07) Lewis
(53:30) Fantasy Reimaginings of Medieval Worlds
(58:41) Rethinking the Discarded Image

For more information on the Oxford Centre for Fantasy, our writing courses, and to check out our awesome social media content visit:
Website: https://centre4fantasy.com/website
Instagram: https://centre4fantasy.com/Instagram
Facebook: https://centre4fantasy.com/Facebook
TikTok: https://centre4fantasy.com/tiktok
It's a crossover episode of Finding Favorites and Upper Middle Brow: Leah Jones, Jesse Dukes, and Chris Bagg are talking about robot friends and enemies. We start with M3GAN and go on a winding conversation from there.

Subscribe to Upper Middle Brow and rate them 5 stars: https://uppermiddlebrow.com/

Recommendations from this show:
Ted Chiang's Exhalation
Alien: Romulus
Duncan Jones's Moon

Honorable Mentions:
Robocop
The Terminator / Terminator 2
Short Circuit
M3GAN
Robot Visions
Doctor Who

Follow Finding Favorites on Instagram at @FindingFavsPod and leave a 5-star rating on Apple Podcasts, GoodPods, or Spotify. Got a question or want to suggest a guest? Email Leah at FindingFavoritesPodcast@gmail.com

Support Finding Favorites by shopping for books by guests or recommended by guests on Bookshop.
John and Craig answer twenty listener questions on craft, career, and the future of the industry. Questions include: How do you correct well wishes you haven't earned? What kind of relationship should you have with the person who created your source material? How do you keep your reps invested? What's going on with that Stereophonic lawsuit? And are writers' retreats helpful or a total waste of time?

In our bonus segment for premium members, John and Craig celebrate the new D&D Player's Handbook by looking back through every edition since 1978. Like the handbook, it gets less dense as it goes.

Links:
Scriptnotes LIVE! at Austin Film Festival
Drew's Emmy certificate
“Why AI Isn't Going to Make Art” by Ted Chiang for The New Yorker
The Stereophonic lawsuit
Rachel Bloom's “Death, Let Me Do My Special” on Netflix
Warner Bros. Studios Burbank
Save Scarecrow Video in Seattle
Get a Scriptnotes T-shirt!
Check out the Inneresting Newsletter
Gift a Scriptnotes subscription or treat yourself to a premium subscription!
Craig Mazin on Threads and Instagram
John August on Threads, Instagram, Twitter and Mastodon
Outro by Nick Moore (send us yours!)

Scriptnotes is produced by Drew Marquardt and edited by Matthew Chilelli. Email us at ask@johnaugust.com

You can download the episode here.