"Bu aralar salgın var ya, herkes öyle" denilen şey 300.000 yıldır süregelen insanlık salgınıdır sevgili dostlarımız. Gezegenimizden tüm kozmosa yayılmakta olan bu amansız salgına birçok farklı tedavi önerisi, tarih boyunca çeşitli çabalarla masaya konmuştur. Kimi savaşı, kimi barışı, kimi sevgiyi, kimi sövgüyü, soruna çare olma isteğiyle çıkış noktası olarak belirlemiştir. Henüz tam olarak nihai bir sonuç elde edemesek de en azından bu illetle yaşamayı az çok öğrendik gibi. Siz yine de ihtiyatlı olun, bol bol C vitamini basın, öksürürüken, hapşırırken, esnerken, konuşurken, düşünürken, duş alırken ağzınızı kapatın. Diğer tüm deliklerimiz yaşamak için yeterlidir. Sevgiyle kalın & Hoşça kalın....... Recoreded @ Atölye5 Nişantaşı Session StudiosEdit: Erman "Yazık" ÇağlarMiks & Mastering: Göksel Elbüken
Text us your thoughts on the episode or the show!

In this episode of Ops Cast, we explore a side of operations leadership that rarely appears in roadmaps or system diagrams but determines whether teams thrive or burn out. Kimi Corrigan, Vice President of Marketing Operations at Huntress, joins Michael Hartmann on our latest Ops Cast episode. Kimi shares her perspective on servant leadership, psychological safety, and the emotional intelligence required to lead effectively inside fast-growing, complex organizations.

The conversation goes beyond tools and processes to focus on the human side of operations. Kimi discusses how to lead with empathy without lowering standards, how to navigate difficult conversations with honesty and accountability, and how to create sustainable team rhythms in environments that often default to constant firefighting.

They also examine how ops leaders can enter new organizations thoughtfully, read culture before pushing change, and decide where to invest their energy early. Kimi shares where AI can genuinely support leadership development, not as a replacement for judgment, but as a tool for reflection, communication, and clarity.

What you will learn:
• How to balance servant leadership with high performance expectations
• Why psychological safety is essential in ops teams
• How to lead through growth and organizational transition
• Ways to build sustainable team trust outside of crisis moments
• The non-technical skills that prepare operators for leadership roles
• Where AI can strengthen communication and self-awareness

If you are leading a Marketing Ops team or aspiring to step into leadership, this episode highlights the interpersonal skills that often matter more than technical mastery. Be sure to subscribe, rate, and review Ops Cast, and join the conversation at MarketingOps.com.

Episode Brought to You By MO Pros, The #1 Community for Marketing Operations Professionals.

We're an official media partner of B2BMX 2026 — the B2B Marketing Exchange — happening March 9-11 at the Omni La Costa Resort in Carlsbad, CA. It's practitioner-focused with 50+ breakout sessions, keynotes, and hands-on workshops covering AI in B2B, GTM strategy, and advanced ABM. Real networking, real takeaways. And because we're a media partner, you get 20% off an All-Access Pass with code B2BMAOP at checkout. Head to b2bmarketing.exchange to grab your spot.

MarketingOps.com is curating the GTM Ops Track at Demand & Expand (May 19-20, San Francisco) - the premier B2B marketing event featuring 600+ practitioners sharing real solutions to real problems. Use code MOPS20 for 20% off tickets, or get 35-50% off as a MarketingOps.com member. Learn more at demandandexpand.com.

Support the show
A look at Apple's new 600-euro MacBook, LLMs and their impact on work, Musk's and Bezos's space ambitions, and what's new in Android 17. Support me on Patreon. Find me on YouTube. Join the discussion on Discord.

Listener interactions: Lou and the low-cost Mac. Gremi, you're hard on me about Temu and... Lucy 2!? Fortunately... maybe not. Mika: Braga is leaving, Seedance 2 is a bloodbath. Will Smith approves. Alih is doing swimmingly, and so are the AIs.

Per Aspera: Claude Wars: the revenge of Kimi. Codex shifts into second gear with Cerebras. Ad culture: bad faith works. Gemini man: the ARC comes to an end. Alternative medicine: not a general practitioner yet, but maybe proctologists? The excessive digital habits of the French. It's an eXodus at X / Space X / xAI.

Ad Astra: The hare, the tortoise, the frog, and the panda. Crystal chronicles: very space-grade semiconductors. Aurora is as handsome as a truck, but in the US, electric vehicles are under strain. Android 17: I live for cinnamon.

Video games: Control, vampires, a giant chicken, some Kratos and some Saros - that was the State of Play.

Participants: With Cassim Montilla. Hosted by Guillaume Poggiaspalla
Hosts: DJ Fateh and Rəvan Bağırov
This episode, we talk about two monumental projects that were started in this reign. One was the historiographical project that likely led to the creation of the Kojiki and the Nihon Shoki. And then there was the start of the first permanent capital city: the Fujiwara Capital. Listen to the episode and find more on our website: https://sengokudaimyo.com/podcast/episode-143 Rough Transcript Welcome to Sengoku Daimyo's Chronicles of Japan. My name is Joshua and this is Episode 143: Temmu's Monumental Projects Ohoama sat astride his horse and looked out at the land in front of him. He could still see the image of the rice fields, now long fallow, spreading out on the plain. To the north, east, and west, he could see the mountains that would frame his vision. As his ministers started to rattle off information about the next steps of the plan, Ohoama began to smile. He thought of the reports his embassies to the Great Tang had brought back, about the great walled cities of the continent. In his mind's eye, Ohoama envisioned something similar, rising up on the plain in front of him. There would be an earth and stone wall, surrounding the great city. The gates would be grand, much like the temples, but on an even greater scale. Houses would be packed in tight, each within their own walled compounds. In the center painted red and white, with green accents, would be a palace to rival any other structure in the archipelago. The people would stream in, and the city would be bustling with traffic. This was a new center, from which the power of Yamato would be projected across the islands and even to the continent. Greetings everyone, and welcome back. This episode we are still focused on the reign of Ohoama, aka Temmu Tennou, between the years 672 and 686. Last episode we talked about the Four Great Temples—or the Four National Temples. Much of this episode was focused on the rise and spread of Buddhism as we see in the building of these national temples, but also on the changes that occurred as the relationship between Buddhism and the State evolved. This was part of Ohoama's work to build up the State into something beyond what it had been in the past—or perhaps into something comparable to what they believed it to have been in the past. After all, based on the size of the tomb mounds in the kofun period, it does seem that there was a peak of prosperity in the 5th century, around the time of Wakatakeru, aka Yuryaku Tennou, and then a decline, to the point that the lineage from Wohodo, aka Keitai Tennou, seemed to have come in during a time when they were rebuilding Yamato power and authority. This episode we are going to talk about two projects that Ohoama kicked off during his reign. He wouldn't see the completion of either one, since both took multiple decades to complete, but both focused on linking the past and the future. The first we'll talk about is a new attempt to gather historical documents and records—the last time that was done was in the time of Kashikiya Hime, over 50 years ago. That was during the height of Soga power. Since then a lot had changed, and presumably there were even more stories and records that had been written down. Plus the tide had changed. So they needed to update—and maybe even correct—the historical record. But beyond that, there was a greater goal: Ohoama and his court also needed to make sure that the past was something that they wanted to go back to, among other things. 
The other thing we are going to discuss is the start of a project to build a brand new capital city. And when we talk about a city, we really mean a city. This was a massive undertaking, likely unlike anything that we've seen so far. Sure, there had been monumental building projects, but this was something that was going to take a lot more work - how much more monumental could you get than a new city? And it would create a physical environment that would be the embodiment of the new centralization of power and authority, and the new state that Ohoama was building, with his administration—and Yamato—at the center. Let's start with the big ones. First and foremost, we have the entry from the 17th day of the 3rd month of 681. Ohoama gave a decree from the Daigokuden to commit to writing a Chronicle of the sovereigns and various matters of high antiquity. Bentley translates this as saying that they were to record and confirm the Teiki, which Aston translated as the Chronicle of the Sovereigns, and various accounts of ancient times. This task was given out to a slew of individuals, including the Royal Princes Kawashima and Osakabe; the Princes Hirose, Takeda, Kuwada, and Mino; as well as Kamitsukenu no Kimi no Michichi, Imbe no Muraji no Kobito, Adzumi no Muraji no Inashiki, Naniwa no Muraji no Ohogata, Nakatomi no Muraji no Ohoshima, and Heguri no Omi no Kobito. Ohoshima and Kobito were specifically chosen as the scribes for this effort. We aren't told what work was started at this time. Aston, in his translation of the Nihon Shoki, assumes that this is the start of the Kojiki. Bentley notes that this is the first in a variety of records about gathering the various records, including gathering records from the various families, and eventually even records from the various provinces. And I think we can see why. Legitimizing a new state and a new way of doing things often means ensuring that you have control of the narrative. Today, that often means doing what you can to control media and the stories that are in the national consciousness. In Ohoama's day, I'd argue that narrative was more about the various written sources, and how they were presented. After all, many of the rituals and evidence that we are looking at would rely on the past to understand the present. The various family records would not only tell of how those families came to be, but would have important information about what else was going on, and how that was presented could determine whether something was going to be seen as auspicious, or otherwise. Even without getting rid of those records, it would be important to have the official, State narrative conform to the Truth that the state was attempting to implement. Ultimately, there is no way to know, exactly, how everything happened. If the Nihon Shoki had a preface, it has been lost. The Kojiki, for its part, does have a preface, and it points to an origin in the reign of Ohoama—known as the sovereign of Kiyomihara. In there we are told that the sovereign had a complaint—that the Teiki and Honji, that is the chronicles of the sovereigns and the various other stories and legends, that had been handed down by various houses had come to differ from the truth. They said they had many falsehoods, which likely meant that they just didn't match the Truth that the State was trying to push. Thus they wanted to create a so-called "true" version to pass down. This task was given to 28-year-old Hieda no Are. It says they were intelligent and had an incredible memory. 
They studied all of the sources, and the work continued beyond the reign of Ohoama. Later, in 711 CE, during the reign of Abe, aka Genmei Tennou, Oho no Yasumaro was given the task of writing down everything that Hieda no Are had learned. The astute amongst you may have noticed that this mentions none of the individuals mentioned in the Nihon Shoki. Nor does the Nihon Shoki mention anything about Hieda no Are. So was this a separate effort, or all part of the same thing? Was Are using the materials collected by the project? As you may recall, we left the Kojiki behind some time ago, since it formally ends with the reign of Kashikiya hime, aka Suiko Tennou, but realistically it ended with Wohodo, aka Keitai Tennou—after that point there are just lists of the various heirs. As such, there is some speculation that this was originally built off of earlier histories, perhaps arranged during the Soga era. The general explanation for all of this is that Hieda no Are memorized the poems and stories, and then Yasumaro wrote them down. Furthermore, though the language in the Kojiki does not express a particular gender, in the Edo period there was a theory that Hieda no Are was a woman, which is still a popular theory. Compare all of that to the Nihon Shoki. Where the Kojiki was often light on details and ends with Suiko Tennou, the Nihon Shoki often includes different sources, specifically mentions some of them by name, and continues up through the year 697. Furthermore, textual analysis of the Nihon Shoki suggests that it was a team effort, with multiple Chroniclers, and likely multiple teams of Chroniclers. I have to admit, that sounds a lot more like the kind of thing that Ohoama was kicking off. We have an entry in the Shoku Nihongi, the work that follows the Nihon Shoki, that suggests 720 for the finished compilation of the Nihon Shoki. So did it take from 681 to 720 to put together? That is a really long project, with what were probably several generations of individuals working on it. Or should this be read in a broader sense? Was this a historiographical project, as Bentley calls it, but one that did not, immediately, know the form it would take? It isn't the first such project—we have histories of the royal lineage and other stories that were compiled previously—much of that attributed to Shotoku Taishi, but likely part of an earlier attempt by the court. In fact, given that the Kojiki and Sendai Hongi both functionally end around the time of Kashikiya hime, that is probably because the official histories covered those periods. Obviously, though, a lot had happened, and some of what was written might not fit the current narrative. And so we see a project to gather and compile various sources. While this project likely culminated in the projects of the Kojiki and the Nihon Shoki, I doubt that either work was necessarily part of the original vision. Rather, it looks like the original vision was to collect what they could and then figure things out. It would have been after they started pulling the accounts together, reading them, and noticing the discrepancies that they would have needed to then edit them in such a way that they could tell a cohesive story. That there are two separate compilations is definitely interesting. I do suspect that Oho no Yasumaro was working from the efforts of Hieda no Are, either writing down something that had been largely captured in memory or perhaps finishing a project that Are had never completed. 
The Nihon Shoki feels like it was a different set of teams, working together, but likely drawing from many of the same sources. And as to why we don't have the earlier sources? I once heard it said that for books to be forgotten they didn't need to be banned—they just needed to fall out of circulation and no longer be copied anymore. As new, presumably more detailed, works arose, it makes sense that older sources would not also be copied, as that information was presumably in the updated texts, and any information that wasn't brought over had been deemed counterfactual. Even the Nihon Shoki risked falling into oblivion; the smaller and more digestible Kojiki was often more sought after. The Kojiki generally presents a single story, and often uses characters phonetically, demonstrating how to read names and places. And it just has a more story-like narrative to it. The Nihon Shoki, comparatively, is dense, written in an old form of kanbun, often relying more on kanbun than on phonetic interpretations. It was modeled on continental works, but as such it was never going to be as easy to read. And so for a long time the Kojiki seems to have held pride of place for all but the most ardent scholars of history. Either way, I think that it is still fair to say that the record of 681 was key to the fact that we have this history, today, even if there was no way for Ohoama, at the time, to know just what form it would take. Another ambitious project that got started under Ohoama was the development of a new and permanent capital city. Up to this point we've talked about the various capitals of Yamato, but really it was more that we were talking about the palace compounds where the sovereign lived. From the Makimuku Palace, where either Mimaki Iribiko or possibly even Himiko herself once held sway, to the latest palace, that of Kiyomihara, the sovereigns of Yamato were known by their palaces. This is, in part, because for the longest time each successive sovereign would build a new palace after the previous sovereign passed away. There are various reasons why this may have been the case, often connected to insular concepts of spiritual pollution brought on by the death of an individual, but also the practical consideration that the buildings, from what we can tell, were largely made of untreated wood. That made them easier to erect, but also made them vulnerable to the elements, over time, and is probably one of the reasons that certain shrines, like the Shrine at Ise, similarly reconstitute themselves every 20 years or so. Furthermore, we talk about palaces, but we don't really talk about cities. There were certainly large settlements—even going back to the Wei chronicles we see the mention of some 70 thousand households in the area of Yamateg. It is likely that the Nara basin was filled with cultivated fields and many households. Princes and noble households had their own compounds—remember that both Soga no Umako and Prince Umayado had compounds large enough that they could build temples on the compounds and have enough left over for their own palatial residences, as well. However, these compounds were usually distributed in various areas, where those individuals presumably held some level of local control. It is unclear to me how exactly the early court functioned as far as housing individuals, and how often the court was "in session", as it were, with the noble houses. Presumably they had local accommodations and weren't constantly traveling back and forth to the palace all the time. 
We know that some houses sent individuals, men and women, to be palace attendants, even though they lived some distance away. This was also likely a constraint on the Yamato court's influence in the early days. We do see the sovereign traveling, and various "temporary" palaces being provided. I highly doubt that these were all built on the spot, and were likely conversions of existing residences, and similar lodging may have been available for elites when they traveled, though perhaps without such pomp and circumstance. What we don't really see in all of this, are anything resembling cities. Now, the term "city" doesn't exactly have a single definition, but as I'm using it, I would note that we don't see large, permanent settlements of significant size that demonstrate the kind of larger civil planning that we would expect of such a settlement. We certainly don't have cities in the way of the large settlements along the Yangzi and Yellow rivers. We talked some time back about the evolution of capital city layouts on the continent. We mentioned that the early theoretical plan for a capital city was based on a square plan, itself divided into 9 square districts, with the central district constituting the palace. This design works great on paper, but not so much in practice, especially with other considerations, such as the north-south orientation of most royal buildings. And then there are geographic considerations. In a place like Luoyang, this square concept was interrupted by the river and local topography. Meanwhile, in Chang'an, they were able to attain a much more regular rectangular appearance. Here, the court and the palace were placed in the center of the northernmost wall. As such, most of the city was laid out to the south of the palace. In each case, however, these were large, planned cities with a grid of streets that defined the neighborhoods. On each block were various private compounds, as well as the defined markets, temples, et cetera. The first possible attempt at anything like this may have been with the Toyosaki palace, in Naniwa. There is some consideration that, given the size of the palace, there may have been streets and avenues that were built alongside it, with the intention of having a similar city layout. If so, it isn't at all clear that it was ever implemented, and any evidence may have been destroyed by later construction on the site. Then we have the Ohotsu palace, but that doesn't seem to be at the same scale as the Toyosaki palace—though it is possible that, again, we are missing some key evidence. Nonetheless, the records don't really give us anything to suggest that these were large cities rather than just palaces. There is also the timeline. While both the Toyosaki palace and the Ohotsu palace took years to build, they did not take the time and amount of manpower that would be needed to create a true capital city. We can judge this based on what it took to build the new capital at Nihiki. This project gets kicked off in the 11th month of 676. We are told that there was an intent to make the capital at Nihiki, so all of the rice-fields and gardens within the precincts, public and private property alike, were left fallow and became totally overgrown. This likely took some time. The next time we see Nihiki is in the 3rd month of 682, when Prince Mino, a minister of the Household Department, and others, went there to examine the grounds. At that point they apparently made the final decision to build the capital there. 
Ohoama came out to visit later that same month. However, a year later, in the 12th month of 683, we are told that there was a decree for there to be multiple capitals and palaces in multiple sites, and they were going to make the Capital at Naniwa one of those places. And so public functionaries were to go figure out places for houses. So it wasn't just that they wanted to build one new, grand capital. It sounds like they were planning to build two or three, so not just the one at Nihiki. This is also where I have to wonder if the Toyosaki Palace was still being used as an administrative center, at the very least. Or was it repurposed, as we saw that the Asuka palaces had been when the court moved to Ohotsu? This is further emphasized a few months later, when Prince Hirose and Ohotomo Yasumaro, at the head of a group of clerks, officials, artisans, and yin yang diviners were sent around the Home Provinces to try and divine sites suitable for a capital. In addition, Prince Mino, Uneme no Oni no Tsukura, and others were sent to Shinano to see about setting up a capital there as well. Perhaps this was inspired by the relationship between the two Tang capitals of Chang'an and Luoyang. Or perhaps it was so that if one didn't work out another one might. Regardless, Nihiki seemed to be the primary target for this project, and in the third lunar month of 684 Ohoama visited the now barren grounds and decided on a place for the new palace. A month later, Prince Mino and others returned with a map of Shinano, but there is no indication of where they might want to build another capital. After that, we don't hear anything more of Shinano or of a site in the Home Provinces. We do hear one more thing about Naniwa, which we mentioned a couple of episodes back, and that is that in 686 there was a fire that burned down the palace at Naniwa, after which they seem to have abandoned that as a palace site. And so we are left with the area of Nihiki. This project would take until the very end of 694 before it was ready. In total, we are looking at a total of about 18 years—almost two decades, to build a new capital. Some of this may have been the time spent researching other sites, but there also would have been significant time taken to clear and level. This wasn't just fields—based on what we know, they were even taking down old kofun; we are later told about how they had to bury the bodies that were uncovered. There was also probably a pause of some kind during the mourning period when Ohoama passed away. And on top of it, this really was a big project. It wasn't just building the palace, it was the roads, the infrastructure, and then all of the other construction—the city gates, the various private compounds, and more. One can only imagine how much was being invested, especially if they were also looking at other sites and preparing them at the same time. I suspect that they eventually abandoned the other sites when they realized just how big a project it really was that they were undertaking. Today we know that capital as Fujiwara-kyo, based on the name of the royal palace that was built there, and remarkably, we know where it was. Excavations have revealed the site of the palace, and have given us an idea of the extent of the city: It was designed as a square, roughly 5.3 kilometers, or 10 ri, on each side. The square itself was interrupted by various terrain features, including the three holy mountains. 
Based on archaeological evidence, the street grid was the first thing they laid out, and from what we can tell they were using the ideal Confucian layout as first dictated in the Zhouli, or Rites of Zhou. This meant a square grid, with the palace in the center. Indeed, the palace was centered, due south of Mt. Miminashi, and you can still go and see the palace site, today. When they went to build the palace, they actually had to effectively erase, or bury, the roads they had laid out. They did the same thing for Yakushi-ji, or Yakushi-temple, when they built it as part of the city; one of the reasons we know it had to have been built after the roads were laid out. We will definitely talk about this more when we get to that point of the Chronicles, but for now, know that the Fujiwara palace itself, based on excavations of the site, was massive. The city itself would surpass both Heijo-kyo, at Nara, and Heian-kyo, in modern Kyoto. And the palace was like the Toyosaki Naniwa palace on steroids. It included all of the formal features of the Toyosaki Palace for running the government, but then enclosed that all in a larger compound with various buildings surrounding the court itself. Overall, the entire site is massive. This was meant as a capital to last for the ages. And yet, we have evidence that it was never completed. For one thing, there is no evidence that a wall was ever erected around it—perhaps there was just no need, as relations with the mainland had calmed down, greatly. But there is also evidence that parts of the palace, even, were not finished at the time that they abandoned it. Fujiwara-kyo would only be occupied for about 16 years before a new capital was built—Heijo-kyo, in Nara. There are various reasons as to why they abandoned what was clearly meant to be the first permanent capital city, and even with the move to a new city in Nara it would be clear that it was going to take the court a bit of time before they were ready to permanently settle down—at least a century or so. Based on all the evidence we have, and assuming this was the site of the eventual capital, Nihiki was the area of modern Kashihara just north of Asuka, between—and around—the mountains of Unebi, Miminashi, and Kagu. If these mountains are familiar, they popped up several times much earlier in the Chronicles--Mostly in the Age of the Gods and in the reign of the mythical Iware-biko, aka Jimmu Tennou. Yet these three mountains help to set out the boundaries of the capital city that was being built at this time. There is definitely some consideration that they were emphasized in the early parts of the Chronicles—the mythical sections, which were bolstering the story of Amaterasu and the Heavenly Grandchild, setting up the founding myths for the dynasty. Even though the Chronicles were not completed until well after the court had moved out, the Fujiwara capital is the climax of the Nihon Shoki, which ends in 697, three years into life at the new palace. And so we can assume that much of the early, critical editing of the Kojiki and Nihon Shoki were done with the idea that this would be the new capital, and so it was woven into the histories, and had it continued as the capital, the very landscape would have recalled the stories of the divine origins of the Royal family and the state of Yamato itself. This was the stage on which Ohoama's state was built. He, and his successors, didn't just change the future path of the Yamato government. 
They rearranged the physical and temporal environment, creating a world that centered them and their government. I suspect that Ohoama didn't originally consider that these wouldn't be finished during his reign. That said, he came to power in his 40s, only slightly younger than his brother, who had just died. He would live to be 56 years old—a respectable age for male sovereigns, around that time. From a quick glance, Naka no Oe was about 45 or 46 years old, while Karu lived to about 57 or 58. Tamura only made it to 48. The female sovereigns seem to have lasted longer, with Ohoama's mother surviving until she was 66 or 67 years old, and Kashikiya Hime made it to the ripe old age of 74. That said, it is quite likely that he thought he would make it longer. After all, look at all the merit he was accruing! Still, he passed away before he could see these projects fully accomplished. That would have to be left for the next reign—and even that wasn't enough. The Fujiwara Capital would only be occupied for a short time before being abandoned about two reigns later, and the histories as we know them wouldn't be complete for three more reigns. So given all of this, let's take another quick look at Ohoama himself and where he stands at this pivotal moment of Yamato history.When we look at how he is portrayed, Ohoama is generally lionized for the work he is said to have accomplished. I would argue that he is the last of three major figures to whom are attributed most of the changes that resulted in the sinification of the Yamato government. The first is prince Umayado, aka Shotoku Taishi, who is said to have written the 17 article constitution, the first rank system, and the introduction of Buddhism. To be fair, these things—which may not have been exactly as recorded in the Chronicles—were likely products of the court as a whole. Many people attribute more to Kashikiya Hime, aka Suiko Tennou, as well as Soga no Umako. Of course, Soga no Umako wasn't a sovereign, or even a member of the royal family, and Kashikiya Hime, aka Suiko Tennou, seems to have likewise been discounted, at least later, possibly due to the fact that she is thought to have come to power more as a compromise candidate than anything else—she was the wife of a previous sovereign and niece to Soga no Umako. Many modern scholars seem to focus more on the agency of Kashikiya Hime and suggest that she had more say than people tend to give her credit for. That said, Shotoku Taishi seems to have been the legendary figure that was just real enough to ascribe success to. That he died before he could assume the throne just meant that he didn't have too many problematic decisions of his own to apparently work around. The next major figure seems to be Naka no Oe, aka Tenji Tennou. Naka no Oe kicks off the period of Great Change, the Taika era, and is credited with a lot of the changes—though I can't help but notice that the formal sovereign, Naka no Oe's uncle, Karu, seems to have stuck with the new vision of the Toyosaki Palace and the administrative state while Naka no Oe and his mother moved back to the traditional capital. And when Naka no Oe moved the capital to Ohotsu, he once again built a palace more closely aligned to what we see in Asuka than the one in Naniwa, which brings some questions about how the new court was operating. But many of his reforms clearly were implemented, leveraging the new concepts of continental rulership to solidify the court's hegemony over the rest of the archipelago. 
Ohoama, as represented in the Chronicles, appears to be the culmination of these three. He is building on top of what his brother had implemented through the last three reigns. Some of what he did was consolidate what Naka no Oe had done, but there were also new creations, for which Ohoama is credited, even if most of the work was done outside of Ohoama's reign, but they were attributed to Ohoama, nonetheless. Much of this was started later in Ohoama's reign, and even today there seem to be some questions about who did what. Nonetheless, we can at least see how the Chroniclers were putting the story together. There are a lot of scholars that point to the fact that the bulk of the work of these projects would actually be laid out in the following reigns, and who suggest that individuals like the influential Uno no Sarara, who held the control of the government in Ohoama's final days, may have had a good deal more impact on how things turned out, ultimately. In fact, they might even have been more properly termed her projects—there are some that wonder if some of the attributions to Ohoama were meant to bolster the authority of later decrees, but I don't really see a need for that, and it seems that there is enough evidence to suggest that these projects were begun in this period. All of this makes it somewhat ironic that by the time the narrative was consolidated and published to the court, things were in a much different place—literally. The Fujiwara capital had been abandoned. The court, temples, and the aristocracy had picked up stakes and moved north. Fujiwara no Fuhito had come on the scene, and now his family was really taking off. This was not the same world that the Chronicles had been designed around. And yet, that is what was produced. Perhaps there is a reason that they ended where they did. From that point on, though, there were plenty of other projects to record what was happening. Attempts to control the narrative would need to do a lot more. We see things like the Sendai Kuji Hongi, with its alternative, and perhaps even subversive, focus on the Mononobe family. And then later works like the Kogoshui, recording for all time the grievances of the Imbe against their rivals—for all the good that it would do. With more people learning to write, it was no longer up to the State what did or did not get written down. But that has taken us well beyond the scope of this reign—and this episode, which we should probably be bringing to a close. There are still some things here and there that I want to discuss about this reign—so the next episode may be more of a miscellany of various records that we haven't otherwise covered, so far. Until then if you like what we are doing, please tell your friends and feel free to rate us wherever you listen to podcasts. If you feel the need to do more, and want to help us keep this going, we have information about how you can donate on Patreon or through our KoFi site, ko-fi.com/sengokudaimyo, or find the links over at our main website, SengokuDaimyo.com/Podcast, where we will have some more discussion on topics from this episode. Also, feel free to reach out to our Sengoku Daimyo Facebook page. You can also email us at the.sengoku.daimyo@gmail.com. Thank you, also, to Ellen for their work editing the podcast. And that's all for now. Thank you again, and I'll see you next episode on Sengoku Daimyo's Chronicles of Japan.
HTML All The Things - Web Development, Web Design, Small Business
The pace of AI model releases is becoming almost impossible to follow. In just two weeks we saw GPT-5.3-Codex, GPT-5.2 updates, Gemini 3 Deep Think upgrades, Claude Opus 4.6 with a 1M context window in beta, Qwen3-Coder-Next, GLM-5, MiniMax M2.5, Cursor Composer 1.5, and even Kimi 2.5 just outside the window. This isn't a quarterly product cycle anymore - it's a daily arms race. In this episode Matt and Mike break down what this acceleration means for developers, open source, frontier labs, and the broader industry. Are we witnessing healthy innovation, or unsustainable velocity? At what point does this stabilize - if it ever does? If you're trying to build, learn, or compete in AI right now… this conversation is for you. Show Notes: https://www.htmlallthethings.com/podcast/ai-competition-is-out-of-control
Hosts: DJ Fateh and Rəvan Bağırov
"I didn't use my own software this week because the OpenAI agents were better. And that's me retiring my own software." — Keith TeareSomething broke this week. Both Anthropic and OpenAI launched multi-agent systems—"agent swarms"—that don't just assist with tasks but replace custom-built software entirely. The market noticed: Adobe, Salesforce, Workday, and other legacy SaaS companies saw their stocks collapse in what some are calling a trillion-dollar selloff. Keith Teare joins Andrew Keen on Super Bowl weekend to unpack what may be the most consequential week in AI since ChatGPT launched.The conversation ranges from the Anthropic-OpenAI advertising spat (Dario Amodei's Super Bowl ad vs. Sam Altman's "online tantrum") to the deeper structural shifts: Microsoft and Amazon becoming utilities, Google betting $185 billion on an AI-first pivot, and Elon Musk merging SpaceX with xAI to put data centers in space. Along the way, Teare and Keen debate whether the AI race is a myth or a wacky race, whether venture capital is in crisis, and what happens to human labor when agents do the work.About the GuestKeith Teare is a British-American entrepreneur, investor, and technology analyst. He co-founded RealNames Corporation, a pioneering internet company, and later served as Executive Chairman of TechCrunch. He is the founder of That Was The Week and SignalRank, and publishes a widely-read weekly newsletter on technology, venture capital, and the business of innovation. He brings four decades of experience in Silicon Valley to his analysis of the AI revolution.Chapters:00:00 Super Bowl and the Anthropic ad The spat between Dario Amodei and Sam Altman01:09 "Fundamentally dishonest" Keith's take on the ad war and who's really Dick Dastardly05:47 Anthropic's breakout week Claude Opus 4.6 and the agent swarm launch06:48 OpenAI Codex Multiple agents collaborating on tasks in 10-15 minutes07:42 "It replaces software" Keith retires his own custom-built tools08:16 The trillion-dollar selloff Adobe, Salesforce, Workday, PayPal collapse11:02 Infrastructure vs. innovation Microsoft and Amazon become "utilities"11:45 Google's $185 billion bet Pivoting from hybrid to AI-first13:15 The SpaceX/xAI merger Musk's plan for space-based data centers15:18 The AI wacky race Kimi, OpenAI, Anthropic leapfrog Google17:03 Does AI make us smarter? Leverage tools, not intelligence18:53 AI growing up, CEOs not The adolescence of the industry21:06 US job openings hit five-year low The coming labor crisis22:44 The VC crisis Five funds sucking the air out of the room25:04 Palantir and Anduril The winners in defense AI25:42 Facebook as laggard Huge revenues, no AI momentum26:41 The Washington Post crisis "Boogeyman journalism" and partisan media29:23 Ads in AI Paid links vs. enshittification31:26 Spotify's innovation Physical book + audiobook bundle32:32 Startup of the week Cursor for CRM, $20M from Sequoia33:45 Om Malik on the end of software distribution From CDs to app stores to self-made35:41 Super Bowl prediction Seattle vs. 
New England36:02 Closing "That really was the week in tech"Links & ReferencesMentioned in this episode:That Was The Week newsletter by Keith TeareAnthropic's Super Bowl ad and ad-free pledge (CNBC)Sam Altman's response to Anthropic ads (TechCrunch)SpaceX acquires xAI in $1.25 trillion merger (CNBC)The Washington Post layoffs and crisis (Poynter)Om Malik on the evolution of software distributionOpenAI Codex app launch (OpenAI)About Keen On America Nobody asks more impertinent questions than the Anglo-American writer, filmmaker and SiliconValley entrepreneur Andrew Keen. In Keen On America , Andrew brings his sharp Transatlanticwit to the forces reshaping the United States — hosting daily interviews with leading thinkersand writers about American history, politics, technology, culture, and business. With nearly2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the mostprolific intellectual interview show in the history of podcasting.Website | Substack | YouTube
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:05:30 Understanding Errors
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:10:15 404 Not Found Explained
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:15:00 Troubleshooting Errors
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:20:45 Conclusion
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript
Shawn Wang [00:00:05]: So welcome to the Latent Space pod. 
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things. 
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. 
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. 
So, you know, one of the things that we've been looking at, or another common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like, you look at Qwen or, um, R1, and they have sort of this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can sort of see where that lives, yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, whenever you find a vector, a fun exercise is just to make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, lots of post-training tasks where you'd want to be able to do that, whether it's unlearning a certain behavior or some of the other kinds of cases where this comes up. Are you familiar with the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this double descent idea of having a model that is able to learn a generalizing solution; even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible ways to do that. Can interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you look at it. Okay. So I viewed double descent as a problem, because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But if you can actually interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly the domain of problems that we're looking to get at.Shawn Wang [00:11:15]: Yeah. To me, double descent is like the biggest thing in ML research where, if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything just leveling off.Vibhu Sapra [00:11:30]: I mean, also tangentially there's, okay, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic Fellows program, where basically you can have hidden biases in a model, and as you distill down, or, you know, as you train on distilled data, those biases always show up, even if you explicitly try not to train on them. So it's just another use case of: okay, if we can interpret what's happening in post-training, can we clear some of this? Can we even determine what's there? Because, yeah, it's just some worrying research that's out there that shows we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle.
Nobody knows what's going on. Right. Like, subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And there are mathematical explanations that you can get into, but, I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there is a sequence of numbers that is more owl-like than others. There should be.Mark Bissell [00:12:40]: According to certain models, right. It's interesting. I think it only applies to models that were initialized from the same seed. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a cheat code because there's not enough compute. But if you believe in, like, platonic representation, probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed, sort of. There's something that is path dependent from that seed that might cause certain overlaps in the latent space, and then doing this distillation pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there are a bunch of these open-ended questions, right? Like, you can't train in new stuff during the RL phase, right? RL only reorganizes weights, and you can only do stuff that's somewhat there in your base model. You're not learning new stuff; you're just reordering chains and stuff. But okay, my broader question is, when you guys work at an interp lab, how do you decide what to work on, and what's kind of the thought process? Because we can ramble for hours: okay, I want to know this, I want to know that. But how do you concretely, you know, what's the workflow? Okay, there are approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, okay, is this going anywhere? Do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like from the very beginning of the company we've thought about, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what the real world problems are, and then taking that back and trying to apply the current state of the art to those problems, and then seeing where they fall down, basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered some shortcomings in SAEs that we found a little bit surprising, and so we've gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models.
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine-tuning. And so we went back to the drawing board and were like, how do we make that not the case, and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined (actually Ekdeep and Atticus are our steering experts), has spent a lot of time trying to figure out what is the research that enables us to actually do this in a much more powerful, robust way. So yeah, the answer is: look at real world problems, try to translate that into a research agenda, and then hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click when you drop hints like "we found some problems with SAEs." Okay, what are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think, for instance, when we do things like trying to detect behaviors within models that are harmful, or behaviors that a user might not want to have in their model (so hallucinations, for instance, harmful intent, PII, all of these things), we first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs, training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes trained just on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. That doesn't mean I'm down on SAEs at all; I think there are many, many things they're useful for, but we have definitely run into cases where the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think wasn't it an SAE-based approach that actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases, where we've had good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get another chance: what is the overall, like, what is Rakuten's production usage?
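For readers who want to see the comparison Myra describes in code, here is a rough sketch: train one probe on raw activations and one on SAE feature activations, then compare held-out accuracy. The activations, labels, and the SAE encoder (a random stand-in) are synthetic placeholders invented for the example; in practice you would cache real model activations and load a trained SAE for the layer you care about.

```python
# Sketch of "raw-activation probe vs. SAE-feature probe". All data and the SAE
# weights are synthetic stand-ins so the example runs end to end.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d_model, d_sae, n = 512, 4096, 2000

acts = rng.normal(size=(n, d_model)).astype(np.float32)      # cached activations (stand-in)
labels = (acts[:, :8].sum(axis=1) > 0).astype(int)            # synthetic "harmful / PII" labels

W_enc = (rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)).astype(np.float32)
b_enc = np.zeros(d_sae, dtype=np.float32)

def sae_features(x):
    """ReLU SAE encoder: feature activations for each example."""
    return np.maximum(x @ W_enc + b_enc, 0.0)

Xtr, Xte, ytr, yte = train_test_split(acts, labels, test_size=0.25, random_state=0)
raw_probe = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
sae_probe = LogisticRegression(max_iter=2000).fit(sae_features(Xtr), ytr)

print("raw-activation probe:", raw_probe.score(Xte, yte))
print("SAE-feature probe:   ", sae_probe.score(sae_features(Xte), yte))
```

With a trained SAE and real labels, these two scores are exactly the comparison being discussed; which one wins is an empirical question, which is the point Myra is making.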
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten but with other people, around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of what it looks like to deploy things in practice that you don't always think about when you're doing research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem: they were encountering things like synthetic-to-real transfer of methods. They couldn't train probes, classifiers, things like that on actual customer data with PII, so what they had to do is use synthetic data sets and then hope that that transfers out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that, right off the bat, is a big challenge. You have multilingual requirements, so this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs and had us pulling our hair out. And then also, with a lot of tasks, you might make simplifying assumptions if you're treating it as the easiest version of the problem just to get general results, where maybe you say you're classifying a sentence: does this contain PII? But the need that Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, speaking to what that looks like in practice, a lot of assumptions end up breaking. And that was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is a lot of these methods are very efficient, right? You're just looking at the model's internals itself, compared to a separate guardrail, an LLM-as-a-judge, a separate model: one, you have to host it; two, there's a whole latency hit, and if you use a big model, you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency, really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up, so let's just kind of see what you got. I don't actually know if this is, like, the latest latest or, like, an alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. But this will give a sense for the technology, so you can see the steering in action.
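Picking up the token-level requirement Mark describes above: once you have a per-token probe, scrubbing is just thresholding each token's score. The sketch below is a toy illustration with random stand-in activations and synthetic labels; it is not Rakuten's pipeline, only the shape of the idea.

```python
# Toy token-level PII scrubbing: score each token with a probe and mask the
# ones flagged. The "activations" and training labels are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64
train_acts = rng.normal(size=(200, d))
train_labels = (train_acts[:, 0] > 0).astype(int)          # synthetic "is PII" labels
probe = LogisticRegression(max_iter=1000).fit(train_acts, train_labels)

def scrub_pii(tokens, token_acts, threshold=0.5):
    """Replace tokens whose per-token probe score exceeds the threshold."""
    scores = probe.predict_proba(token_acts)[:, 1]          # P(PII) for each token
    return [("[PII]" if s > threshold else t) for t, s in zip(tokens, scores)]

tokens = ["My", "number", "is", "555-1234"]
token_acts = rng.normal(size=(len(tokens), d))              # stand-in per-token activations
print(scrub_pii(tokens, token_acts))
```

The train-on-synthetic, evaluate-on-real split discussed above happens outside this function: the probe sees only synthetic PII, and real customer text is used purely for scoring.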
Honestly, I think the biggest thing this highlights is that as we've been growing as a company and taking on more and more ambitious versions of interpretability-related problems, a lot of that comes down to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that, in addition to the research challenges, there are engineering challenges that we're now tackling. Because for any of this to be useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI, which we've got running to point at our custom hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back at the office. Well, hopefully. It would be, um, too much to run on that Mac. Yeah, I think it takes a full, like, H100 node. I think you can run it on eight GPUs, eight H100s. So yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang codebase that we've been working on. So I'm going to tell it: hey, this SGLang codebase is slow, I think there's a bug, can you try to figure it out? It's a big codebase, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.Mark Bissell [00:23:33]: Searching for any bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: So let me, uh, this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then we're hopefully going to start seeing it do "this codebase is massive, for real." We're going to start seeing Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools and stuff. It's purely, sort of, its demeanor. And there are other features that we found for interesting things like concision; that's more of a practical one, you can make it more concise. Or the types of programming languages it uses. But yeah, as we're seeing it come in: pretty good outputs.Shawn Wang [00:24:43]: Scheduler code is actually wild.Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, autonomous interp, something about how agents for interp are different than, like, coding agents. I don't know, while this is spewing up, but how do we find feature 43205?
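Mechanically, the steering in this demo amounts to adding a scaled concept direction into the hidden state at one or more layers while the model generates; a negative coefficient pushes the other way, as joked about earlier with the CCP vector. The sketch below shows that mechanic on a small open model with a random, unlabeled direction; the real demo uses a labeled SAE feature (43205) inside Goodfire's hosted Kimi K2 stack, which is not reproduced here.

```python
# Bare-bones activation steering via a forward hook. The model ("gpt2"), layer
# index, coefficient, and the (random) direction are placeholders; a real setup
# would use a labeled feature direction from a trained SAE.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

layer_idx, alpha = 6, 8.0                      # alpha < 0 would steer the opposite way
direction = torch.randn(model.config.n_embd)
direction = direction / direction.norm()       # pretend this is a labeled concept vector

def steer(module, inputs, output):
    hidden = output[0]                         # GPT-2 blocks return a tuple; [0] is the hidden state
    return (hidden + alpha * direction.to(hidden.dtype),) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)
ids = tok("Tell me about this codebase:", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=False)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

Because the edit lives in the activations rather than the weights, it can be switched on, re-scaled, or removed mid-session, which is what makes the real-time part of the demo possible.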
Yeah.Mark Bissell [00:25:15]: So in this case, our platform, which we've been building out for a long time now, supports all the classic out-of-the-box interp techniques that you might want to have, like SAE training, probing, things of that kind. I'd say the techniques for vanilla SAEs are pretty well established now: you take the model that you're interpreting, run a whole bunch of data through it, gather activations, and then it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties; there are top-K SAEs, batch top-K SAEs, um, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them to actually understand that this is a Gen Z feature, that's where a lot of the magic happens. Yeah. And the most basic standard technique is: look at all of the input dataset examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough dataset through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang. Um, and, um, so, you know, you could have a human go through all 43,000 concepts, and...Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem and something that black box methods really struggle with. Whereas, like, Gen Z, you could always train a simple classifier to detect that; hallucination is harder. But we've seen that models internally have some awareness of uncertainty, or some sort of user-pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, part of what I like about that question is there are SAE-based approaches that might help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to sort of hoping that when you fragment the latent space, one of the vectors that pops out is the one you need.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah.
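As a rough, self-contained illustration of the pipeline Mark outlines (cache activations, train a sparse autoencoder, then label a feature by its top-activating examples), here is a compact top-K SAE sketch. The dimensions, random "activations", training length, and the feature ID are all made up for the example; a real run uses orders of magnitude more data and compute, and the labeling step usually goes to a human or an auto-interp agent.

```python
# Toy top-K SAE trained on stand-in "activations", followed by the most basic
# feature-labeling step: find the examples on which one feature fires hardest.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model=512, d_sae=4096, k=32):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model, bias=False)
        self.k = k

    def forward(self, x):
        pre = self.enc(x)
        topk = torch.topk(pre, self.k, dim=-1)           # keep only k features per example
        feats = torch.zeros_like(pre).scatter_(-1, topk.indices, torch.relu(topk.values))
        return self.dec(feats), feats

acts = torch.randn(10_000, 512)                           # stand-in for cached model activations
sae = TopKSAE()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for step in range(200):                                   # a real run trains far longer
    batch = acts[torch.randint(0, len(acts), (256,))]
    recon, _ = sae(batch)
    loss = ((recon - batch) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, all_feats = sae(acts)
feature_id = 123                                           # hypothetical, not the demo's 43205
top_examples = torch.topk(all_feats[:, feature_id], 10).indices
print(top_examples)  # inspect (or auto-label) these examples to name the feature
```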
So there are, you know, known problems like feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise, so that if you reduce the hallucination feature, you don't suddenly find that your model can't write creatively anymore. Maybe you don't like that, but you still want to stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I guess, just because your demo is done, any other things that you want to highlight, or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main point here that I think is exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research there, and yeah, other teams as well. But it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded far-fetched.Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case would be for that real-time editing; that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with, like, how does this work with or without prompting? How does this work with fine-tuning? There's a whole hype around continual learning, right? So there's just so much to see. Is this another parameter we just kind of leave at a default and don't use? So I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, when to do what.Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. He has a paper, along with some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to do to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so, like, formally equivalent actually, in the limit. Right.Mark Bissell [00:31:24]: So one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior.
And Anthropic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, you know, I can back-rationalize that this makes sense, because what context is, is basically just, you know, it updates the KV cache, kind of, and then every next-token inference is still, you know, a sum over everything all the way up, plus all the context to date. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like you did. So it's not exactly equivalent.Mark Bissell [00:32:33]: Right, right. You need to get precise about, yeah, how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief Dynamics Reveal the Dual Nature... yeah, the title is Belief Dynamics Reveal the Dual Nature of In-Context Learning, and it's an examination of in-context learning and activation steering. So Eric Bigelow and Dan Urgraft, who are doing fellowships at Goodfire; Ekdeep's the final author there.Myra Deng [00:32:59]: I think, actually, to your question of what is the production use case of steering: maybe just think one level beyond steering as it is today. Imagine if you could adapt your model to be, you know, an expert legal reasoner, in almost real time, very quickly and efficiently, using human feedback, or using your semantic understanding of what the model knows and where it knows that behavior. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what's the next interface for model customization and adaptation is a really interesting problem for us. We have heard from a lot of people who are actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker or other open-source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's something we're looking into.Shawn Wang [00:34:06]: Yeah. I hadn't thought about this: Tinker from Thinking Machines famously uses rank-1 LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space then. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? That's maybe one way to put it.Mark Bissell [00:34:44]: I like that analogy.
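The pipes-versus-water contrast can be written down in a few lines: a rank-1 LoRA-style edit changes the weight matrix itself, so it affects every future input, while steering leaves the weights alone and shifts the activation passing through at inference time. This is only a toy illustration of the distinction, with arbitrary shapes and values; it is not how Tinker or Goodfire implements either one.

```python
# "Pipes vs. water": a rank-1 weight edit vs. an activation-space shift.
import torch

d = 8
W = torch.randn(d, d)                  # a frozen weight matrix ("the pipes")
x = torch.randn(d)                     # an activation ("the water")

# Rank-1 LoRA-style edit: W' = W + b a^T, baked into the parameters.
a, b = torch.randn(d), torch.randn(d)
lora_out = (W + torch.outer(b, a)) @ x

# Steering: keep W, but add a concept direction to the activation at inference.
direction = torch.randn(d)
steer_out = W @ x + 0.5 * direction

print(lora_out, steer_out)
```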
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And I hope that we look back at how we're currently training models and post-training models and just think, what a primitive way of doing that. Like, there's no intentionality, really, in...Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: you know, he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist if they do something wrong, without telling them why it was wrong or what they should have done differently. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, you know, it's sample inefficient. What do they say? It's like slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that is internalized, and, you know, steering is an inference-time way of sort of getting at that idea. But ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can watch that. There wasn't too much of a connect there, but it's still something they want to push for down the line.Mark Bissell [00:36:33]: It can be useful for all of the above. Like, there are certainly post-hoc use cases where it doesn't need to touch that.Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, train an SAE, train a probe, this stuff: budget-wise, there's already a lot done, there's a lot of open-source work, and you guys have done some too.Shawn Wang [00:37:04]: There are notebooks from the Gemini team, from Neel Nanda, like, this is how you do it, just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're not even technical with any of this, you can still make progress there; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars, not even; it's not that high scale. And then same with applying it, doing it for post-training; all this stuff is fairly cheap on the scale of, okay, I want to get into model training but I don't have compute for pre-training stuff. So it's a very nice field to get into. And also there are a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this; there's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for, like, what are the open questions, what's open work that you'd either collaborate on or just like to see solved? And for people listening that want to get into mech interp, because people always talk about it, what are the things they should check out to start? Of course, you know, join you guys as well; I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from, was it Lee Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what the experts in the field think are the most important problems to be solved. I also think, to your point, it's been really, really inspiring to see a lot of young people getting interested in interpretability; actually, not just young people, but also scientists who have been experts in physics for many years, or in biology or things like this, transitioning into interp, because the barrier to entry is, you know, in some ways low, and there's a lot of information out there and ways to get started. So it's really cool to see. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how exciting the field is, how fast it's moving, how quick it is to get started, and things like that.Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open-source mech interp Slack channel. People are always posting questions, and folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. And actually a lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we do one, as does Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, I think interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it does scale up.Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Engineer Europe because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference track. Yeah, I'm so glad you added that. You know, it's still a little bit of a bet; it's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, just, you know, being in Europe, you see the interp room at, like, old-school conferences; I think they had a very tiny room till they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered a paper last week: two unknown authors, not many citations, but, you know, you can do a lot of meaningful work there. Yeah.Shawn Wang [00:41:39]: Yeah. One thing people haven't really mentioned yet is interp for code. I think it's an abnormally important area. The conspiracy theory from two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and then turn up the good code. And isn't that the dream? But, I guess, why is it funny? If it was realistic, it would not be funny; it would be like, no, actually, we should do this. But it's funny because we feel there are some limitations to what steering can do. And I think a lot of the public image of steering is the Gen Z stuff: oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch, and I don't know if it will get there this way. Yeah.Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But yeah, this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions that you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. Scale allows you to learn a lot of information and reduce noise across, you know, large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data and not learning things that you don't want exhibited in the data. So we're not anti-scale, but we are also realizing that scale alone is not going to get us to the type of AI development that we want to be at in the future, as these models get more powerful and get deployed in all these sorts of mission-critical contexts. The current life cycle of training, deploying, and evaluating is, to us, deeply broken and has opportunities to improve. So, um, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist.
Like, if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.Myra Deng [00:44:30]: There were, like, bad code features. I've got it pulled up.Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.Shawn Wang [00:44:35]: This is like, this is exactly it.Vibhu Sapra [00:44:38]: There's specifically a code error feature that activates, and they show, you know, it's not typo detection; it's typos in code, not typical typos. And you can see it clearly activates where there's something wrong in code. And they have, like, malicious code, code error; they have a whole bunch of broken-down, fine-grained sub-features. Yeah.Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just have a few different rollouts with all these things turned off and on and whatever, and then that's synthetic data you can kind of post-train on. Yeah.Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was, like...Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? Like, yeah, you guys opened yours. DeepMind has opened a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There are a lot of resources that, you know, we can probably share for people that want to get involved.Shawn Wang [00:45:41]: Yeah. And special shout-out to Neuronpedia as well. Yes. Like, yeah, an amazing piece of work to visualize those things.Myra Deng [00:45:49]: Yeah, exactly.Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science, just because it's such a huge investment category and also I'm less qualified to do it; we actually have bio PhDs to cover that, which is great. But I need to just kind of recap your work, maybe on the Evo 2 stuff, and then build forward.Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem. And bidirectional communication is sort of the goal there. So what we've been talking about with intentional design of models and, you know, steering, but also more advanced techniques, is having humans impart our desires and control into and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, but down the line, you know, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's sort of the other direction of that. And so some of our life science work to date has been getting at exactly that question. Some of it does look like debugging these various life sciences models: understanding if they're actually performing well on tasks, or if they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they are focusing on the biologically relevant things that you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they are understanding elements of the human genome that we don't have names for, or have made specific discoveries that we don't know about; getting at that is a big goal. And so we're already seeing that, right? We are partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. And in our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things that we're working on.Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out, because there's so much potential and research. And it's very interesting how it's basically the same as language models, just with a different underlying dataset. It's the same exact techniques; there's no change, basically.Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like robotics: I know a lot of the companies just use Gemma as the backbone, and then they make it into a VLA that takes these actions. It's transformers all the way down. So yeah.Vibhu Sapra [00:49:15]: Like, we have MedGemma now, right? This week, even, there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, and all that too. So there's a push from both sides. But one of the things about mech interp is that you're a little bit more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding; we're more risk-averse to something going wrong there. So even just from a basic understanding, if we're trusting these systems to make claims, we want to know why and what's going on.Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage or things like that. Like, say you're using a model for rare disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for science, for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques, and kind of almost by accident: I think we got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.Shawn Wang [00:50:49]: How did they even hear of you? A podcast.Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.Myra Deng [00:50:55]: Everyone can call us.Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much.Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills transfer everywhere, right? Yeah. And it's obviously just a general insight. Yeah. Probably to finance too, I think, which would be fun given our histories. I don't know if you have anything to say there.Mark Bissell [00:51:34]: Yeah, well, just across the sciences. Like, we've also done work on materials science. Yeah, it really runs the gamut.Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those that should reach out: you're obviously experts in this, but is there a call-out for people that you're looking to partner with, design partners, people to use your stuff, beyond just the general developer that wants to plug and play steering stuff? Like, on the research side more so, are there ideal design partners, customers, stuff like that?Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life-sciences, and then I'm curious to hear from you on the life sciences side. But we're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, like, pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural language interface to interact with, we think that interp can really help, and we're looking for a few partners in that space.Shawn Wang [00:52:43]: Just because you mentioned the keyword
The Tree Party by Kimi by 826 Valencia
Aytaç Sevim, who lives in Darmstadt, suddenly found himself at the center of social media after a short video he shot while running a gas station. He has nearly 400,000 followers on Instagram. These days he is touring Germany with a play and stand-up show called "Geldik, Gördük, Dönemedik". With a humorous touch, he takes on sometimes the news of the day in Germany, and sometimes what is said about Germany in Turkey. Aytaç Sevim answered the questions of our host Gökçe Göksu, and a pleasant conversation came out of it. By Gökçe Göksu and Eren Mahir Gençer.
Send us a textIn this high-octane episode of Sidecar Sync, Amith and Mallory cover an ambitious trio of AI developments with massive implications for associations. They dive into Kimi K2.5, a Chinese open-source model built for multimodal agent swarms that rival GPT-5.2 at a fraction of the cost. Then, they explore Claude's new domain-specific plugins for Cowork and what it means for associations when Big AI moves into vertical markets like legal and finance. Finally, they unpack Elon Musk's latest megamerger: SpaceX and xAI joining forces to launch AI data centers into orbit. Whether it's AI agents that run teams of themselves or compute infrastructure leaving Earth altogether, this episode challenges assumptions and encourages leaders to rethink what's possible.
The podcast might be down a member down this anime season, but the super brothers pick up the slack and cover 9 new anime shows for 2026. Also, Miles gets caught in a time loop as he races his anime horse girls. Ray goes for the extra credit this winter series in hopes to take home the crown. Video Games and Anime We Talked About this Episode: Like a Dragon: Infinite Wealth Umamusume: Pretty Derby Mayonaka Heart Tune Shiboyugi: Playing Death Games to Put Food on the Table Kirei ni Shitemoraemasu ka Toumei Otoko to Ningen Onna: Sonouchi Fuufu ni Naru Futari Hikuidori: Ushuu Boro Tobi-gumi Ikoku Nikki Tamon-kun Ima Docchi!? Yuusha-kei ni Shosu: Choubatsu Yuusha 9004-tai Keimu Kiroku Yuusha no Kuzu Seihantai na Kimi to Boku Uruwashi no Yoi no Tsuki Time Markers: Teaser: 0:00 Intro: 0:43 Miles's Week: 3:37 Ray's Week: 10:21 Wash It All Away 28:25 The Invisible Man and His Soon-to-Be Wife 34:42 Oedo Fire Slayer: The Legend of Phoenix 44:09 Journal with Witch 51:06 Tamon's B-Side 58:46 Sentenced to Be a Hero 1:06:10 Scum of the Brave 1:15:11 You and I Are Polar Opposites 1:20:42 In the Clear Moonlit Dusk 1:29:53 Winter Anime Rankings: 1:36:07 Outro: 1:47:25 Podcast Website: https://www.tacootaku.com/ Podcast Bluesky: https://bsky.app/profile/tacootakupod.bsky.social Podcast Instagram: https://www.instagram.com/taco_otaku_podcast/ Watch Ray's YouTube Channel here: https://www.youtube.com/c/SourdoughShonen Morning Phoenix (OP) composed and performed by Ivan C. Heroes! (ED) composed and performed by Ivan C. Check out Ivan's Music here: https://www.youtube.com/@ivancervantes9858
The deep realization that every passerby leads a life as vivid and complex as your own. Working in customer service. The lower the pay, the more they monitor you. Being on time was never really a talent of mine, but I gamed the system. Slow PCs. My trick for stretching my break a little longer. When the receiver wasn't put back on the hook properly. EARWITNESS. Zoë Kravitz (1988). Steven Soderbergh (1963). Kimi (2022). Detecting pension gaps. What a lousy job. A star of the department who won everyone over with his deep voice. Watch out for lonely people on the line. Be a bit more assertive. Sonder. https://www.thedictionaryofobscuresorrows.com/concept/sonder. The deep realization that every passerby leads a life as vivid and complex as your own. Chuck Palahniuk (1962). You discover something so private that it changes how you see that person. We all sit at home staring at a screen. That one colleague who was suddenly gone. His whole life wasn't real. Going through someone else's phone and seeing things you didn't know. When your friends keep something from you. When I got fired. Confused people on the street. Hands above the covers. Sleep tight. Become a member of the tomson darko club for 2.50 euros per month or 25 euros per year. Get access to the archive, the exclusive weekly update every Thursday, personal mail in your inbox, and other obscure extras. Go to www.petjeaf.com/tomsondarko.Support the show1) Receive an email from me every Wednesday evening about feelings no one talks about. 2) My shop full of books, posters, and tote bags3) Support me via petjeaf.com/tomsondarko and listen to exclusive episodes.
After a brief hiatus, Mark and Shashank dive into the whirlwind of AI developments from recent weeks. They explore Kimi 2.5's impressive open-source capabilities, Google's groundbreaking Project Genie world model, and AI solving previously unsolved mathematical problems. The conversation shifts to the Davos discussions between Demis Hassabis and Dario Amodei on AGI timelines, before taking a fascinating detour into space-based data centers. The episode culminates with an in-depth look at OpenClaw (formerly ClawdBot) and Moltbook—a Reddit-like social network for AI agents that's spawning everything from cryptocurrency to manifestos. The hosts grapple with both the exciting possibilities and unsettling implications of autonomous AI agents collaborating at scale.
This episode we are talking about the Four Great Temples--Asukadera, Daikandaiji (aka Kudara Odera), Kawaradera, and Yakushiji. Much of the information, outside of the Nihon Shoki itself, comes from Donald F. McCallum's book: "The Four Great Temples: Buddhist Archaeology, Architecture, and Icons of Seventh-Century Japan". For sources, photos, and more information, check out our blogpost at: https://sengokudaimyo.com/podcast/episode-142 Rough Transcript Welcome to Sengoku Daimyo's Chronicles of Japan. My name is Joshua and this is episode 142: The Four Great Temples Rising up into the sky, the bronze spire atop the pagoda seemed to touch the heavens. The beams, doors, and railings were all painted bright red, with white walls, and green painted bars on the windows. At each level, the eaves swept out, covered in dark ceramic tiles, with shining bronze plaques covering the ends of the roof beams. At each corner, a bronze bell hung, chiming in the breeze. This pattern continued upwards, tier after tier. Around the base of the pagoda, throngs of government officials dressed in their formal robes of office moved past, flowing through the temple's central gates. As they passed, they looked up at the impressive tower, the largest of its kind in all of Yamato. From somewhere, a deep bell chimed, and the crowds made their way towards the lecture hall. There, the monks were prepared, with sutras and voices at the ready. Facing a sacred image, they would read through their sutras in unison. Their voices would carry through the great empty space and reverberate through the crowds—those that could get close enough to hear, anyway. The chanting created a musical cacophony. In that sea of human voices, one could almost sense something more—something spiritual. A power, that one could almost believe could hold at bay just about any disaster that could befall a person—or even the state itself. Alright, so this episode we are still in the reign of Ohoama, aka Temmu Tennou. I know we've already seen how that ends, but there is still a lot to cover. But before I go too far, I'd like to first give a shout out to Lisa for helping to support the show on Ko-Fi. I can't tell you how much we appreciate it. If you would like to support us as well, we'll have more information at the end of this, and every episode. We've talked about how the reign of Ohoama was a time where the court reinforced, but also subtly adjusted, the laws of the Ritsuryo state. They seem to have equally courted the Kami, Buddhism, and even continental ideas of yin and yang. Today we are going to dive into Buddhism and the State. More specifically, I want to talk about something called the Yondaiji, the Four Great Temples, and look at how these government temples, also known as "kanji" or "Tsukasa no dera" came to be, what we know about them from archaeological research, and the role they played in the State. This is going to probably recap things from earlier episodes. I am also drawing a lot from a book by Donald F. McCallum called, appropriately, "The Four Great Temples", which goes into a lot more detail than I'll be able to get into, here, but I recommend it for those who are really interested in this subject. Up to this point, we've talked a little about the relationship that the court had with Buddhism. By the late 7th century, Buddhism had spread throughout the archipelago, and there were many temples likely created by local elites. 
Sensoji, in Asakusa, Tokyo, claims a founding of 628, though it may have actually been founded sometime just after 645. There are other temples around Japan, far from the Home Provinces, which likewise had similar claims to being founded in the early to late 7th century, and I question how much of a role the government had in each of them. In 673, there were two temple-related mentions of note in the Chronicles. In one of Ohoama's earliest edicts he orders the copying of the Issaikyou, the Buddhist canon, at Kawaradera. That same year, 673, Prince Mino and Ki no Omi no Katamaro—whom we discussed last episode—were sent to build Takechi temple, later known as Daikandaiji. I mention Daikandaiji specifically because while it was originally built as the Temple of Takechi, at some point it took on that other name—"Daikandaiji", aka Ohotsukasa no Ohodera—which Aston translates as the "Great Temple of the Great Palace", as it appears to have specifically been designated as the great temple of the government. In other words, it is one of a few National Temples. And this became particularly important in the year 680, which is the year we are told the government stopped administering—and, more importantly, stopped funding—all but a handful of so-called "national temples". At this point, as I've mentioned, Buddhism was widespread enough that there were enough adherents that could maintain their own local temples. Of course, local elites likely found some cachet in funding temples, and communities of believers in various areas would likewise have been asked to provide funds as well. So the court accordingly declared that going forward, the government would only administer 2 or 3 national temples. For all other temples, if they had been granted the proceeds of sustenance-fiefs, those would be limited, from the first year to the last, to 30 years in total. As I read it, that indicates that if they had received the fiefs 15 years ago, they would be allowed to hold onto them for another 15 years, after which point they would need to find alternative sources of funding. The early national temples appear to be Daikandaiji and Kawaradera. Finally, there is Yakushiji, which Ohoama began construction on in 680 for his queen, Uno no Sarara, when she was ill—and just hold on to that for now. Interestingly, Asukadera, or Houkouji, in many ways the original national temple, was not designated as such in the new reorganization, but it would continue to be administered by the government as a temple in a special arrangement. That's why the original count in the Nihon Shoki mentions "2 or 3" national temples instead of four. These four temples are mentioned in the Shoku Nihongi, the Chronicles following the Nihon Shoki, as the Four Great Temples, or Yondaiji. Although that work wasn't compiled and published until the end of the 8th century, the term Yondaiji appears in an entry for 702, about five years after the last entry in the Nihon Shoki, and over a decade before its publication. So at this point we're going to look at each of these "great" temples individually, plus a couple of other important ones, and what they tell us about the history of Buddhism, Buddhist temples, and the Yamato state at this point in Ohoama's reign. The first of these four temples, chronologically, is Asukadera. This is the temple originally built by the Soga, and the first major Buddhist temple built. Its layout shows three separate golden image halls, or kondou.
And here we should probably recap something about the general layout of a Buddhist temple, so we can understand what we are talking about. The most important buildings in a Buddhist temple at this time were the kondou, the golden image halls; the pagoda, or stupa; and the koudou, or lecture hall. The golden image halls held golden Buddhist images—Buddhas, Bodhisattvas, Arhats, and more. These rooms are often somewhat dark, and would have been lit mainly by candles, as well as the sun coming through—though even then the sun often is obscured by overhanging roofs and latticework. Sometimes the doors would have small openings so that the sun's rays strike in a particular way at different times. All of this presents an image of bright gleaming gold in the darkness—a metaphor for the teachings of the Buddha, but also an intentionally awe-inspiring display for those who came to view them and pray. The kondo were usually the first structures to be built for a temple, so if your temple had nothing else, it probably had an image hall. The next structure that one would probably build would be the stupa, or pagoda. A pagoda was a tower, in which were sometimes kept images, but more importantly, it would often hold some kind of relic. The idea of the stupa originated as a place to house relics—often bone fragments and teeth attributed to the Buddha, even if those were actually precious stones. Stupas were originally (and still, in many places) large mounds, but as Buddhism made its way over the Silk Road, these were replaced with multi-tiered towers. Pagodas are often 3 or 5 storeys, though the number of storeys can go up to 7 or 9 or as low as 1. Once again, in a world where most buildings, other than perhaps a specially made lookout tower, were only one or maybe two storeys in height, a three to five storey pagoda must have been something to behold, especially covered with tiled eaves, adorned with bronze bells, and brightly painted in the continental fashion. In Europe I would point to similar uses of gold and ostentatious ornamentation on the cathedrals of the day, and even in churches more generally, if on a smaller scale. This is meant to impress and thus lend authority to the institution. And of course, because that institution was so closely aligned to the State, it gave the State authority as well. We mentioned, previously, how the monumental structures of the kofun had given way to the Buddhist temples as a form of ritual display. The last of the three buildings I would mention is the lecture hall, or Koudou. This would also likely have Buddhist images, but it was more of a functional hall for conducting rituals, including recitation of sutras and presenting Buddhist teachings. The koudou was often at the back or north end of the temple complex. In early Buddhist temple layouts, it was common to have everything in a straight line, more or less, and to remain symmetrical. So there would be a main gate through which one would enter. In front of you, you probably saw the pagoda. Beyond the pagoda was a path, and then the kondou, or image hall, typically with a lantern in front, and behind that was the koudou, or lecture hall. This was all typically oriented on a north-south axis, such that one would enter through the southern gate and walk north towards the lecture hall. The north-south orientation is likely another feature from the continent, where the most important buildings were often south-facing, and thus in the north of the compound.
This was the same with the palace layout, and likely for similar reasons—not just cultural, but also practical. After all, the sun, in the northern hemisphere, remains slightly to the south, and so this would have provided the most light through the day. This layout was not strictly adhered to, however. For instance, if we look at Asukadera, you would enter through the southernmost gate and you were then met with another gate for an inner compound. This middle gate would lead you to a large courtyard, about 320 meters on a side, with a covered walkway, or gallery, along the entire circumference of the compound. Entering through the middle gate one would have first noticed the large pagoda and not one but three golden image halls. A path led to the pagoda, and then beyond from the pagoda to the central kondou. There is even a stone where a large bronze lantern was likely situated between the pagoda and the kondou. Based on archaeological evidence, it appears that there was originally just one image hall, directly north of the pagoda, but at a later date, they added two more kondou to the east and west of the pagoda. This has been compared to a temple layout found in Goguryeo, but given that these were likely later additions, and we know that Baekje artisans were involved, I suspect that is just later coincidence. Connecting the layout of the temples to continental examples has been a keen area of study for many scholars. The general theory is that temple layouts can help point to whether there was more of a Baekje, Silla, or Goguryeo influence during the construction of the temple, and what that might have meant for Yamato's international relations as well as various political factions in the court who may have leaned more towards one group or another. The last building at Asukadera, the koudou, or lecture hall, was directly north of the kondou, but you couldn't get there directly. The entire pagoda and image hall compound was separate from the lecture hall, which stood north and apart, though still on the temple grounds, which would have been surrounded by an outer wall. At this point, since we're talking about the layout of Asukadera and where it came from, I'm going to digress from the next of the four great temples and talk about two other early temples that are important for understanding Buddhist temple building at this time. So bear with me for this slight detour. The first of these is Shitennoji, the Temple of the Four Heavenly Kings, in modern Osaka. This temple is said to have been built in 593, and is attributed to Shotoku Taishi. Presumably he made a vow to do so during the war between the Soga and the Mononobe, which we discussed back in episode 91. As you may recall from that and earlier episodes, the Mononobe were considered to be against the idea of Buddhism, while the Soga were promoting it. Shitennouji was important, but doesn't show up in the Chronicles as much as other temples, and was all the way over in Naniwa. As such, I suspect that it was not considered a good candidate for "national" temple status at the time. Still, if we look at the original layout, Shitennoji is quite similar to what we see in Asukadera. Everything is on a north-south axis. You go through a middle gate to the inner compound. There you find a pagoda, and past that, a lantern and then the kondou. 
Unlike Asukadera, the koudou, or lecture hall, is incorporated into the back wall, such that the gallery continues from the middle gate around to either side, and then meets at the sides of the lecture hall. There are also east and west gates, as well as other buildings, but the main layout is pretty comparable. The second is another temple, which also lays claim to being founded by Prince Shotoku Taishi, and which was not included in the four great temples. This may have had to do with the fact that it wasn't in the Asuka valley, but also may have had to do with just the timing. That temple is the famous one known as Horyuji. Horyuji was founded on the site of the Ikaruga palace, said to have been the home of none other than Prince Umayado, aka Shotoku Taishi. As such, one imagines it was quite the prominent temple in its day. However, it was at a distance from the capital, and it also had the misfortune to have burned down in about 670, just before Ohoama ascended the throne, and it wasn't fully rebuilt until about 711, leaving a forty-year gap where the temple was not necessarily at the forefront of Buddhism. Still, like Shitennoji, it is interesting to look at the original layout for Horyuji and compare it to Asukadera. First off, you have the same north-south orientation, and you have the same separate, internal compound for the image hall and the pagoda. Unlike in Asukadera, however, the kondou and the pagoda, which both faced south, were on an east-west axis, flanking the central pathway. Entering through the middle gate, one would have seen a five storey pagoda on the left and the kondo on the right. The Koudou was outside the inner compound in the rear, along that central north-south axis. There is also evidence of two other buildings. One likely held a large bell—and possibly a drum—and the other was likely a sutra repository, where they could keep holy texts and various ritual implements. I will also note that, even though Horyuji burned down in 670 and was accordingly not that prominent during Ohoama's reign, it is absolutely worth visiting because substantial portions of those rebuilt buildings are still standing today. Indeed, both the Horyuji pagoda and kondou are among the oldest wooden buildings in the world. The central pillar of the pagoda was felled in 594 according to dendrochronological dating. The kondou was damaged by fire during a restoration in 1949, but about 15-20% of the original late 7th century building still remains. Going back to the Great Temples, the next of these to be built was Kudara Ohodera. Kudara here means "Baekje", but this appears to refer more to the temple's location near the Kudara river, rather than to the kingdom of Baekje. Kudara Ohodera is remarkable in a couple of different ways. First off, there is the fact that it is the first temple with a firm royal lineage—that is to say a temple that claims to have been founded by the sovereign. Asukadera was founded by Soga no Umako, the Prime Minister, and though Prince Umayado is said to have been the Crown Prince, nonetheless, he never reigned as sovereign, though he was considered the founder of both Shitennouji and Houryuuji. Kudara Ohodera, however, is said to have been founded at the behest of Tamura, aka Jomei Tennou, who reigned from 629-641. The temple appears to get its start in a record dated to 639, and by 645 it appears to be fully operational.
There is another tale of its founding—in the Daianji Engi, the history of Daianji, a successor temple to Kudara Ohodera, there is mention of a Kumagori Dojo, and many modern histories claim that this was the actual first temple, but there isn't much evidence. Donald McCallum, in his treatment of Kudara Ohodera's history in his book, "The Four Great Temples", suggests that the Kumagori Dojo story is likely a later legendary founding that got recorded, as there is scant evidence for it, and no mention of it in other records. On the actual founding of Kudara Ohodera, however, there does appear to be general agreement with the Nihon Shoki, despite some minor differences in the dates. The call to build Kudara Ohodera comes alongside Tamura's also building Kudara Palace. Kudara Ohodera was also built on a grand scale, and it is said to have had a nine-storey pagoda—almost double the size of a five-storey pagoda, which already towered over other buildings of the time. Despite all of this, for a long time it was unclear where Kudara Ohodera was actually situated. There were several sites proposed, but most recently archaeological research on Kibi Pond seems to have placed the temple there. Excavations on the southern side of the pond found remnants of the foundations of two buildings, arranged on an east-west axis. The western foundation would appear to be for a pagoda—but one much larger than any of the five storey pagodas we've seen elsewhere. And to the east was the foundation for what appears to be the kondo. This golden image hall, however, is likewise much larger than any other hall of this time. This arrangement would fit very well with a Houryuuji-like temple layout. There were also various other traces that were consistent with the early-to-mid-7th century, which would coincide with the 639-645 dates for Kudara Ohodera's construction. Subsequent excavations appear to have found quarters for the priests, as well as at least part of a gallery wall and one gate, situated due south of the kondo. There may have been another gate south of the pagoda. The koudou, the lecture hall, may have been in the area that was later excavated to create the pond, and therefore we may never have any hard evidence of its location, despite numerous attempts to dig trenches to find more of the temple buildings. This probably also means that, similar to Shitennouji, the lecture hall was incorporated into the enclosing gallery wall rather than being outside, because if it was outside, then it likely would have been farther north and we would probably have seen some trace. As it is, the lack of any trace suggests that it was inside or part of the enclosure with the pagoda and kondou. The large size of this archaeological site concurs with what we know about Kudara Ohodera, both in its description and in the fact that it is referred to as "Ohodera", or "Great Temple"—no other temple has really been given that name directly, though there are a few references to "Ohodera" that are ambiguous and might refer either to this temple or Asukadera. Still, if this temple, sometimes also called Kibi Pond Temple due to its location, is *not* Kudara Ohodera, then that just brings up more questions. How could there have been such a monumental Buddhist temple this close to Asuka and within the bounds of the later Fujiwara-kyo and yet nobody thinks to mention it? It doesn't appear to have been started and abandoned, as there were quite a few structures built.
So if this isn't Kudara Temple, then someone has some 'splaining to do. Indeed, McCallum notes that while there are some objections, the preponderance of evidence seems to lean greatly in favor of the Kibi Pond site for Kudara Ohodera. We still have yet to find the Kudara palace, however, so who knows. There are also questions about the construction as various architectural features are missing in ways that are not consistent with other sites. Some oddities, such as a seeming lack of rooftiles given the apparent size of the building, actually may be a point in favor of this being Kudara Ohodera, since we know that the temple was moved in 673 when Ohoama requested that they build the Takechi Ohodera, which appears to have been Kudara's successor temple. If they had reused the material from Kudara Ohodera to build, at least in part, Takechi Ohodera, that could explain why rooftiles and other such things are not present in the numbers expected at the Kibi Pond site. Takechi Ohodera is another bit of a mystery. I can't help but note that Takechi is the name given to Ohoama's son who was with him on the front lines of the Jinshin no Ran. We also see a "Takechi no Agata-nushi", who is noted as the governor of the district of Takechi. In all cases here it is spelled "Taka-ichi", or "high market", and it is not an uncommon name—we even find a Miwa no Kimi no Takechimaro. In the record of the Jinshin no Ran it is noted that the governor of Takechi was possessed by the kami of Takechi and of Musa. These were named as Kotoshironushi and Ikuikazuchi. They claimed that they had been the kami that escorted Ohoama to Fuwa and saw him safely there. As such, donations were made to their shrines. Musa is an area in modern Takaichi district, which includes the area of Asuka, and is part of Kashihara city. The Takaichi Agata Jinja—or the Takechi District Shrine—sits in the Shijo area of Kashihara city, north of Mt. Unebi. There are several proposed locations for Takechi Ohodera, but despite excavations, no clear temple features have been found. As such, there isn't anything to clearly point to one or the other. What we do know is that Takechi Ohodera underwent another transformation. According to the Daianji Engi, the Takechi Ohodera was renamed to Daikandaiji in 677. There is no specific mention of this in the Nihon Shoki, other than a note that Takechi Ohodera was also known as Daikandaiji and a reference, in 679, to "fixing the names". Personally, I can't help but wonder if this is a case of a nickname becoming the name-in-fact. As I mentioned earlier in the episode, Daikandaiji, which can also be read as "Oho-tsukasa no Oho-tera", can be translated into something like Great Government Official Great Temple or Great Temple of the Royal Court. We do know the location of this temple in later years, but this is probably not exactly where Takechi Ohodera was originally built. For one thing, it is suspicious that the temple lines up exactly with the later grid for Fujiwara-kyo, the later capital city that was built north of Asuka. We also are told by the Daianji Engi that a nine-storey pagoda and kondou were built between 697 and 707 CE. There are also notes about activities at the temple mentioned in the Shoku Nihongi for the same period. And yet there were also activities being held during that time which would not seem feasible if they were renovating in place. So likely the new construction was at a new site—possibly near the old site.
And at this later site, the rooftiles were from a later period, closer to the period of the later construction and not really matching with earlier construction dates. So what did this temple of many names – Kudara Ohodera, then Takechi Ohodera, then Daikandaiji – actually look like? We probably have a layout for the original temple and the later temple. If Kibi Pond Temple is the original Kudara Ohodera, the original temple had the kondou and the pagoda on the same east-west axis, and likely had the koudou north of that – very Horyuji-like. But based on the layout at the later temple site, we have something quite different. From the central gate, there is a path straight towards the Kondou, with the Koudou directly north of that, and the nine-storey pagoda in an odd, offset position, southeast of the kondou. This disrupts the symmetry even more than the Kudara Ohodera layout. There is some speculation that this asymmetry was temporary and that they planned to fill the other space but just never got around to it, but there is no indication that they had prepared for anything, either. Also odd is the fact that the koudou, the lecture hall, was the same size as the image hall, the kondou, and that was roughly the same size as the enormous hall at Toudaiji, which is really saying something. This really was a tremendous building, fitting for the main temple of the royal government. The third of the four great temples is Kawaradera, and this one is challenging to plot out chronologically as there isn't a lot of documentation. There is no exact date for the building of Kawaradera. There is a mention of it in 653, but the same entry in the Nihon Shoki also states that there are sources that claim it should be Yamadadera, instead. Based on other evidence, this actually seems more likely. Yamadadera is thought to have been the work of Soga no Kurayamada no Ishikawa no Maro, and it is where he eventually fled when accused of treason. It was founded in 641, according to the Joguki, the record of Prince Shotoku, but construction didn't actually start until 2 years later, and monks only began to occupy it in 648. The following year, however, construction halted as that is when Ishikawa no Maro fled there and committed suicide. Construction was resumed in 663, but still took time. Still, even in the middle of this very long DIY project, it makes sense that there might be some activities in 653, even if construction was paused. Later the temple would be completed, and seems to have had powerful backing. Uno no Sarara, Ohoama's queen, was a granddaughter of Ishikawa no Maro, and so likely had a connection to the temple, but it never attained the status of a national temple the way the others had. As far as its layout—it was similar to Shitennouji, with the pagoda, kondo, and koudou all in a line on the north-south axis. Kawaradera was another matter, though we aren't sure exactly when it was built. If we discount the 653 date as applying to Yamadadera instead, then the first date we really see anything at Kawara is Kawara Palace, built for Takara Hime—aka Saimei Tennou—who took up residence there when the Itabuki Palace burned. Later it would be used for her mogari—her temporary interment. The next mention of a temple at Kawara isn't until this reign, in 673, when Ohoama had the Buddhist canon, the Issaiko, copied, as I noted at the top of the episode. So it must have been established and built some time before 673.
Although we don't know when it was founded, we very clearly know where it was, as the foundation stones are still present, and quite clear—and unlike other Asuka-era temples, it would stay in Asuka, rather than being moved up to the new capital at Heijo-kyo. Given everything else and its apparent importance, the lack of information on when Kawaradera was established is quite odd. McCallum suggests that this could have been deliberate as a way to help delegitimize the temple in the 8th century, but also admits that it may have just been due to the general problems with early record keeping back in the day and there may not have been a good record of why and when the temple was founded. The rooftiles are similar to those used during the time that the court was at Ohotsu. I would also note that there is a connection between the foundation stones and a quarry up near Ohotsu at what is, today, Ishiyamadera. That still doesn't tell us when Kawaradera was founded, as that could have been any time, and doesn't necessarily mean that it was during the time the court was in Ohotsu. Regardless of what textual evidence does or does not exist, the archaeological evidence is pretty staggering. Even today you can go and see some of the exposed foundation stones. This was a massive temple. There was a south gate and then a middle gate just north of that. The main enclosure was divided into two courtyards. In the first, just beyond the middle gate, at the north end was the middle kondo, while in the courtyard itself, facing each other on an east-west axis, was a western kondou and the temple pagoda. Past the middle kondou was a larger courtyard, with the koudou, or lecture hall, in the north, with a bell tower or sutra hall in the southwest and southeast corners. The walls of the enclosure were made up of a covered gallery, and around the outside of the northern courtyard, containing the koudou, were smaller chambers believed to be the monks' quarters, something we don't necessarily see at all of the other sites. Despite being an important temple, and one of the Four Great Temples during the Asuka period, when the capital eventually moved to Heijo-kyo, in modern Nara, Kawaradera had the distinction of being the only one of the four that was not moved as well. All three of the other Great Temples had new compounds built in Heijo-kyo, and the temples were thus "transferred" to the new capital. Presumably that means that most of the monks and administration moved there, and those new temples took up the roles, duties, and responsibilities of the old temples. The temple complexes in Asuka were not necessarily destroyed or deconstructed, but instead were apparently left to their own devices, becoming reduced in status. Many of them fell into disrepair, and when disasters, such as fire, struck they were not rebuilt to the same extent as before, if at all. Kawaradera, however, appears to have not been transferred. It would eventually be replaced as one of the Four Great Temples by the temple of Koufukuji, which was specifically a temple for the Fujiwara family, who were having a bit of a moment in the Nara period. Some have speculated that Kawaradera was specifically left behind in Asuka for that reason—so that the Fujiwara family temple could sneak into the ranks of national temples. Or it may have been that Kawaradera had a particular connection to Takara Hime and the site of her interment. If it was a memorial temple to her, then perhaps it didn't seem appropriate to remove it from its physical location.
McCallum also suggests that it was so powerful in its position in Asuka that it preferred to stay and keep its stipend-fiefs, perhaps believing that even the move to Heijo-kyo would be just another short fad, as had been Ohotsu and Fujiwara-kyo. Of course, if so, they were sorely mistaken. And so Kawaradera would eventually fade from the picture, but during the time of Ohoama's reign, and into that of his immediate successors, it seems that it certainly held some sway. The fourth of the Four Great Temples was the temple of Yakushiji—the temple of the Medicine Buddha. This is the latest temple of the bunch. Its construction was ordered in the year 680 in response to Ohoama's queen, Uno no Sarara, falling ill. And so he vowed to build a temple for her—specifically a temple to Yakushi Nyorai, the Medicine Buddha, whom we discussed last episode. That said, there is considerable time between the order to construct a temple and getting enough of it built to actually be functional. I haven't really touched on this, except when I briefly discussed Yamadadera and how long that took to build, but all of these temples were massive works, much more complicated than the traditional palace buildings. For the most part, palace architecture could be built relatively quickly with the tools and labor available. This was a good thing, seeing as how, for many years, the sovereign had moved again and again, either because of the previous sovereign's death in the palace or just because they chose a new location for a palace. As such, one couldn't spend years building a new palace. So palace buildings were simply made with wooden posts, sunk into the ground, with thatched roofs. In a few examples we see attempts to use wooden boards or tiles, but they weren't complicated. A temple, on the other hand, was something different. Temples were largely wood, but they were massive in size and their roofs were covered in heavy ceramic tiles. All of that weight had to be properly distributed on a strong base—simple posts were not likely to work. Instead they were built on raised stone foundations. That's great for us looking at them, today, but at the time it would have been an inordinate amount of labor. Hence why a temple like Yamadadera took so long to build. So Yakushiji may have been founded in 680, but was likely not finished until much later, which is why we don't really see it in the records for Ohoama's reign and why the order for national temples probably only states that there were just two or three. However, it would become one of the four great temples, and is also notable because, in its transfer to Heijokyo, it largely retained its shape and layout, meaning that you can go to it, today, and still get some sense of what it may have been like back in the Asuka period. Granted, there are certainly differences, but there are enough similarities that it is likely worth a visit. Many of the other temples were significantly modified when they were rebuilt in the new capital in Nara. The layout for Yakushiji is a basic rectangular layout. North of the central gate there is not one, but two pagodas, on an east-west axis from each other, flanking the path to the kondo, roughly in the center. Finally the koudou at the north end, built into the roofed gallery. The modern Yakushiji, a UNESCO world heritage site, maintains one of the pagodas from 730. Other buildings have been lost and rebuilt over the years. Today, the covered gallery only goes around half of the compound. 
This temple would be important, but mostly in the period following the current reign. This period of the four Great Temples perhaps gives us some insight into the relationship between Buddhism and the State. Early on, Buddhism was the province largely of the Soga family, and Soga no Umako was apparently the most powerful figure of his day. He founded Asukadera, and early temples were founded by the Soga or their associates, including Prince Umayado. McCallum points out that the National Temples, however, were, with one exception, founded by sovereigns. Kudara Ohodera was the first, Kawaradera was likely founded for Takara Hime, and Yakushiji was founded for Queen Uno. The only one of the four that wasn't expressly founded on a sovereign's order was that of Asukadera, the temple by Soga no Umako. This may explain why it was both included and excluded as a national temple in the Chronicles. After all, there is no doubting its importance, but the narrative of a single, strong, royal house is somewhat impeded by the idea that one of those temples was founded by what was, for all of his power and authority, a private individual. Ultimately they didn't include it in the edict and yet still acknowledged it as one of the Great Temples. McCallum also points out that these four may not have been fixed quite so early on. For example, on the matter of Houryuuji—there is a bronze plaque that mentions an "Ikaruga no Ohodera", suggesting that the Ikaruga Temple—that is to say Houryuuji, founded on the estates of Prince Umayado—was at one time granted that title. Of course, there are questions as to the exact date of the inscription, and whether or not they meant "Ohodera" in the later sense of a national temple or simply in the sense that it was large; and the term may have meant something else, earlier on. The roster of official temples, the Tsukasa no Tera or Kanji, would grow over time, but that is something for a later period. It is worth noting, though, that the Chronicles at this point seem to distinguish between three types or levels of temples, based on other edicts that we see. There is also the matter of temple names. The first edict is from the 5th day of the 4th lunar month of 679, six years into Ohoama's reign. The declaration states that the court would consider the history of any temple with sustenance fiefs and add or remove them as appropriate. This suggests that there were temples with sustenance fiefs—that is, that had stipends based on lands whose official output went to their upkeep—and temples without such fiefs. The latter were likely more local temples, likely funded by local elites, possibly out of actual devotion, or an attempt to gain the power that Buddhism presumably brought, or possibly just in emulation of the central court, much as the peripheral elites had also constructed the keyhole-shaped kofun. Along with the adjustments of stipends, we are also told that the administration quote-unquote "fixed" the names of the temples. This again goes to the government's control of the temples and Buddhism. McCallum suggests that what is meant here is that they moved away from locative names to Buddhist names for the temple; up to this point, temple names appear to be about the location of the temple. So we have Asuka dera, or Asuka Temple, built in Asuka. Kudara Ohodera is Kudara Great Temple because it was by the Kudara river and the Kudara palace. When it was moved to Takechi, they changed the name to Takechi temple.
Kawaradera was at Kawara, while the temple we know as Houryuuji was known at the time as Ikaruga Temple—or possibly Ikaruga Great Temple. But later these temples would be known by their Buddhist names, so Asukadera is Houkouji. Kudara Ohodera becomes Daikandaiji—and in fact, it is after this point that we see Daikandaiji in the narrative. Ikaruga dera—though not one of the yondaiji, or four Great Temples—becomes Horyuuji. I'm not quite so sure about Kawaradera, but Yakushiji, which is founded after this decree, comes to us with a Buddhist name rather than just the name of a location. This change in name likely simplified, somewhat, the concept of moving, or transferring the temples. Rather than establishing a brand new temple with new administration and everything, they could build a new temple, but grant it the name and rights of the old temple. The old temple grounds could still be used and occupied—it was still *a* temple, but it was no longer *the* temple, at least for official purposes. It would be strange, however, to move the Asuka Temple up to the area of modern Nara city and still call it the Asuka Temple. The year after reassessing the stipends and fixing the names of the temples, we get the edict about the 2 or 3 national temples. And we've mostly discussed that, but here I would just point out that it does add a third distinction to the types of temples. So we have temples with no stipends, temples with stipends—but they would only last for 30 years total after which they were expected to find new sources of funding—and the national temples, which would presumably receive funding through the government in perpetuity—or until the court changed its mind. So why do we care about any of this? Obviously Buddhism has had a huge impact on Japanese culture. However, this isn't just about the religion as an idea, but about the institutions. These temples—especially these great temples—contained a fair amount of wealth. It wasn't just the golden images, or the elaborate amount of work and materials that went into the creation of the buildings. There were also the sustenance-fiefs that were paying for the upkeep. These temples were also being managed by formal government administrators. They also performed rituals that the court relied on. Association with these temples was no doubt important. Later we see princes and other members of high-status families taking high-ranking positions, and the temples ended up cultivating their own power. Over time, the power of various Buddhist institutions would grow, often challenging or even rivaling the power of the court itself. There are a few other items from this reign that we see related to these temples and Buddhism, more generally. In 677 we see a Buddhist festival at Asukadera, where the entire canon was apparently read out. The sovereign himself showed up and did obeisance to the Three Precious Things—an interesting bit of religious piety and humility. At the same time, he had all of the Princes and Ministers find one person each to renounce the world and become a monk or nun—both men and women were chosen, without apparent distinction. We are also assured that they all did so of their own volition, and weren't forced. In 679, we see a regulation on the clothing of priests and nuns, as well as the men and horses who accompanied them when they traveled. If priests are going around with a full-on noble retinue, well, that probably says something about the status of priests—at least the abbots and heads of these institutions.
680 – A fire breaks out at the nunnery at Tachibana temple. Tachibanadera is situated south of Kawaradera, and similar to that temple, it seems to have previously been the site of a royal palace and also isn't recorded as being founded in the Nihon Shoki—it appears fully formed in this record. Tachibanadera's own records seem to suggest that it was founded in 606, and claim a founding by Shotoku Taishi. It is also said to be the site of the palace where Shotoku Taishi was born to his mother, Princess Anahobe no Hashibito, consort of Tachibana no Toyohi, aka Yomei Tennou. Shotoku Taishi is also the subject of the primary image of Tachibana temple, today. Although Tachibanadera wasn't one of the Four Great Temples, it was likely connected to one—Kawaradera. Not only was it built on the same north-south axis as Kawaradera, but some of the tiles are similar to Kawaradera's founding tiles. The layout was similar to Yamada-dera or Shitennouji, with the pagoda, kondou, and koudou, all in a single north-south orientation. It is possible that Kawaradera was a monastery for male monks while Tachibanadera may have been the complementary nunnery for female initiates. 680 had a lot going on. In the 10th lunar month, the sovereign handed out alms to monks and nuns—silk and cloth. A month later, Ohoama vowed to build Yakushiji in hopes that it would help his wife, Queen Uno, who was unwell. He also granted a general amnesty, likely to just add further merit. Apparently it was successful, as she would go on to live for quite some time after that, even helping to take the reins of government when Ohoama himself fell ill. In 682, Princess Hidaka fell ill. 190 people, both men and women, were pardoned for capital or lesser crimes, in an attempt to make merit, and the following day we are told that over 140 people renounced the world at Daikandaiji—likely on the Princess's behalf. The year after that, 683, we see the sovereign making appointments to the official Buddhist offices of Soujou, Soudzu, and Risshi—Doctors of the Law. This was probably a somewhat regular occurrence, though this is the first time we see the Risshi, it seems. The mention here is apparently due to the admonition given that "Those who control the monks and nuns should act according to the law." Definitely seems to be something there—perhaps a reason as to why the Soujou and Soudzu were being appointed. But the Nihon Shoki doesn't give us a lot more to go on other than speculation. Later that same year, in the 7th lunar month, we see priests and nuns gathered at the palace for the first-ever ango, or retreat. An ango is where priests and nuns of different temples are brought together. The term refers to a practice said to come from the time of Shakyamuni, before there were temples. Shakyamuni's acolytes, who spent much of the year wandering, would return to one place during the rainy season. At that time they would listen to and discuss Shakyamuni's teachings. In some sects, this practice of coming together would be particularly important, and it was a mark of honor for how many retreats a monk might have attended over the years. In 685, the court promoted Buddhism with an edict requiring every household to maintain a Buddhist altar, with a statue of the Buddha and a copy of a sutra inside. It is unclear to me if this was just for merit-making or what, but it must have been somewhat lucrative for the various temples, who would have likely been the source for said sutras, and, at least peripherally, the statues as well.
Later that year, in the 4th lunar month, there was another ango at the palace. The month after that, Ohoama went to Asukadera and presented precious objects and worshipped. In the 8th lunar month Ohoama went to Joudouji – Aston claims this is Asukadera, also known as Houkouji—and the next day he visited Kawaradera and provided rice to the monks there. One month after that, Ohoama was feeling ill, so the court ordered Daikandaiji, Kawaradera, and Asukadera—the three Great Temples that were fully operational at that point—to chant sutras for his sake. In return they were granted various quantities of rice. Ohoama recovered for a time, but it was perhaps a precursor of what was to come. A month later a monk from Baekje and a lay monk were sent out to seek a medicinal herb known as white okera. Today, a similar compound is known in Chinese traditional medicine as Bái Zhú. A few months later Ohoama went to the medicinal herb garden of Shiranishiki, and a few weeks later he was presented with Bai Zhu, the boiled white okera. That same day, ritualists performed the Chikonsai, the "Calling of the Spirit". All of this seems to indicate the early onset of symptoms that may have been temporarily abated, but likely were part of the disease or illness that would eventually take his life. But we covered most of that last episode, and we are already dragging on longer than I expected, so I think I'm going to end it here. Coming up in the narrative, since I started to mention it, I'll probably take a look next at the founding of the new capital of Fujiwara kyo, and what that would mean, along with other initiatives that would outlive Ohoama. Until then if you like what we are doing, please tell your friends and feel free to rate us wherever you listen to podcasts. If you feel the need to do more, and want to help us keep this going, we have information about how you can donate on Patreon or through our KoFi site, ko-fi.com/sengokudaimyo, or find the links over at our main website, SengokuDaimyo.com/Podcast, where we will have some more discussion on topics from this episode. Also, feel free to reach out to our Sengoku Daimyo Facebook page. You can also email us at the.sengoku.daimyo@gmail.com. Thank you, also, to Ellen for their work editing the podcast. And that's all for now. Thank you again, and I'll see you next episode on Sengoku Daimyo's Chronicles of Japan.
ChatGPT Health vs Google MedGemma 1.5 - the Generative AI giants want to conquer the world of medicine. Will it soon become a real alternative to traditional healthcare? Another of the giants, Anthropic, is trying to give the technology a moral backbone by publishing a new Claude constitution that defines a strict hierarchy of values for the model. Meanwhile in China, Moonshot AI is touting its mastery of "Agent Swarm" - by orchestrating a "swarm" of agents, the company drastically speeds up complex programming tasks in KIMI K2.5. Also on the horizon is GLM-4.7, taking aim at the Western giants with premium-class performance at a fraction of the cost. We ask whether these changes amount to a real democratization of knowledge, or rather a risky game played with our most sensitive data. Comment, follow, and give us 5/5 - thanks!
Join Simtheory: https://simtheory.ai Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80 --- The hype train in 2026 knows only Moltbot (RIP Clawdbot). In this episode, we unpack the viral open-source AI assistant that's taken over the internet: what it actually does, why everyone's losing their minds, and whether it's worth the $750/day token bills some users are racking up. We dive deep into why locally-run skills and CLI tools are beating computer-use clicking, how smaller models like GPT-5 Mini are crushing it in agentic workflows, and why the real magic is in targeted context - not massive swarms. Plus: Kimi K2.5 drops as a near-Sonnet-level model at 1/10th the price, we debate whether SaaS is dead, and yes – there are TWO Kimi K2.5 diss tracks. One made by Opus pretending to be Kimi. It might just slap? CHAPTERS:0:00 Intro - Still Relevant Tour Update0:48 What is Moltbot? The Viral AI Assistant Explained3:57 Token Bill Shock: $750/Day and Anthropic Bans5:00 The Dream of Digital Coworkers on Mac Minis6:52 Why CLI Tools & Skills Beat Computer-Use Clicking10:57 Why This Way of Working Is Genuinely Exciting14:47 Smaller Models Crushing It: GPT-5 Mini & Targeted Context17:30 Wild Agentic Behavior: Chrome Tab Hijacking & Auto-Retries20:10 Security Architecture: Locked-Down Machines & Enterprise Use24:01 AI Building Its Own Tools On-The-Fly27:08 The Fear & Overwhelm of Rapid Progress29:10 2026: The Year of Agent Workers31:43 The Challenge of Directing AI Work (Everyone's a Manager Now)37:24 Skills Will Take Over: Why MCPs & Atlassian Can't Stop Us40:38 Real-World Use Cases: Doctors, Lawyers & Accountants46:28 Cost Solutions: Build Workflows Around Cheaper Models52:58 Kimi K2.5: Sonnet-Level Performance at 1/10th the Price1:00:55 The "1,500 Tool Calls" Claim: Marketing vs Reality1:05:23 The Kimi K2.5 Diss Tracks (Opus vs Kimi)1:08:08 Demo: Black Hole Simulator & Self-Trolling CRM1:12:55 Is SaaS Dead?1:14:30 BONUS: Full Kimi K2.5 Diss Tracks Thanks for listening. Like & Sub. Links below for the Still Relevant Tour signup and Simtheory. The future is open source, apparently. xoxo
You can watch this episode as a video on youtube: https://youtu.be/C2atVWsvkS0 To support the show/get bonus content: www.patreon.com/terriblelizards We've barely mentioned African dinosaurs (apart from you-know-what) over the years and have repeatedly failed to give much love to the early sauropodomorphs either (the 'prosauropods'). Happily, this month we're getting a great two-for-one deal by speaking to Kimi Chapelle who tells us all about her work on the incredibly well-represented, but not actually that well-studied Massospondylus. This species is known from dozens of complete skeletons but has attracted surprisingly little attention in the scientific literature and Kimi has been working to correct that with a whole series of projects on this animal and its relatives. There's plenty to discuss and more to come on these overlooked dinosaurs, so headphones on and enjoy. Please support the podcast and get access to bonus content: https://www.patreon.com/terriblelizards Kimi's website: Kimberley (Kimi) Chapelle | Renaissance School of Medicine at Stony Brook University https://renaissance.stonybrookmedicine.edu/anatomy/people/facultypage/chapelle A profile of her and her work from the Superscientists website: Dinova - Kimberley Chapelle — SuperScientists https://www.superscientists.org/superscientists/chapelle
The test days in Barcelona have begun, so we finally get to see the first moving images of the new cars. In this first episode of the season we dig into the test days, Red Bull's 'zeropods', the newest electronic overtaking aids, and we go with our gut for a moment on the title chances. With driver Ho-Pin Tung and NU.nl reporters Joost Nederpelt, Patrick Moeke and Bas Scharwachter. Questions? For questions or comments about De Boordradio you can always email us at podcast@nu.nl, or you can respond via NUjij or X. You can also subscribe to the De Boordradio podcast for free via Apple Podcasts, Spotify, or your favorite podcast app. Videos: Want to see the faces behind the voices of De Boordradio? You can, on TikTok, Instagram, and YouTube. The podcast is filmed, and short clips from each episode are posted on social media. Follow us there too! See omnystudio.com/listener for privacy information.
Hosts: DJ Fateh and Rəvan Bağırov
In the "corruption of the century" investigation into the İBB (Istanbul Metropolitan Municipality), there are three people with arrest warrants against them who managed to flee and have not been caught: Murat Gülibrahimoğlu, İbrahim Bülbüllü, and Emrah Bağdatlı. All of the other suspects were taken into custody and processed. Some are being tried in detention, some were released under judicial control, and some were conditionally released because they benefited from the effective remorse provisions.
Welcome to this fourteenth episode. Since it's the start of 2026, we look back on the year just ended. As always in the company of Dimu sama, we get together to chat about our highlights and disappointments of 2025, whether films/series/video games/reading and more, along with our personal year in review. And of course we also share what we're looking forward to this year. On the program: news, a focus on 100 Meters, the 2025 review, what's coming in 2026, some small talk, and that's about it... Happy listening. ON THE PROGRAM • [00:00] opening theme + summary NEWS • [02:12] The return of the AV-98 Ingram • [07:31] Internal update • [17:51] OUR THOUGHTS ON 100 METERS • Spoilers [22:24] / End of spoilers [26:46] • [28:00] 2025 REVIEW / ON TO 2026 WRAP-UP • [1:19:45] What do we do for 2026? ??? • [1:23:00] As usual, don't hesitate to share your opinions; all criticism is welcome, as long as it's respectful. Happy listening. See ya Space Cowboys. Aaaah and as I was saying, we're launching a FAQ, so if you have any questions, feel free to send them via BlueSky/our Discord/Instagram or by email: oyatacast@gmail.com. To follow us on social media: The Upcast.fr Discord (feel free to ask us for an invitation) Instagram: https://www.instagram.com/oyatacast BlueSky: @upcast.bsky.social Track excerpts: • Sono Mama no Kimi de Ite (Nito Yuko - Patlabor OP S1) • Patlabor EZY - Trailer • 100 Mètres - Trailer • Starts to Rain & Phantom Run (Hiroaki Tsutsumi - 100 Meters/Hyakuemu ost) • ReawakeR (LiSA feat. Felix of Stray Kids - Solo Leveling OP S2) • Express Yourself (Campfire feat Francci Richard - Win or Lose ost) • Little Nemo - Trailer • Scarlet - Trailer • Thank You For Playing (Yasumasa Kitagawa - PRAGMATA ost) Opening theme credits: Title: Sakuya2 Author: Peritune Source: https://soundcloud.com/sei_peridot License: https://creativecommons.org/licenses/by/3.0/deed.fr Download: https://www.auboutdufil.com And also with my little one's permission for the voice. ^^ End credits: Kaos Syoten · Osamu Totsuka Yoroiden Samurai Troopers "Sei Ran Hen" ℗ 1993 SUNRISE MUSIC INC. Released on: 1993-02-05 Music Publisher: SUNRISE Music INC.
** This podcast talks about suicide. If you are suicidal, please call/text/chat 988 ** My friend Gordon Laws joins us to talk about supporting Dave and Kimi Martin when they lost their transgender son Levi to suicide in Dec 2022. Gordon talks about Levi—a bright, capable, curious young man—and the difficult journey he walked having Swyer Syndrome and being transgender. Gordon talks about the valiant efforts of Dave and Kimi to support their son. Gordon talks about the immediate days after Levi died and his role to minister to the Martin family—including writing his obituary and eulogy. Gordon talks about ministering principles to support others in their time of crisis/need—principles that help us all do better. Gordon talks about how the Savior ministers to those on the margins and invites us to better understand, love, and support transgender people. Thank you, Gordon for being on the podcast. I learned so much from you. I encourage everyone to listen to and share this episode. Levi's Eulogy: https://www.instagram.com/p/CqI2JAKpZWJ/ Sanctuary Documentary: https://sanctuarydoc.com/ Levi's Obituary: https://www.d-mfh.com/obituary/Levi-Martin Episode 631 (David and Kimi Martin): https://soundcloud.com/user-818501778/episode-631-dave-and-kimi-martin-transgender-son
Lando has been crowned World Champion but what do we remember from the season? We hope you enjoy. Warning: this podcast occasionally contains strong language (which may be unsuitable for children), unusual humour (which may be unsuitable for adults), and the ramblings of 2 uninformed blokes who don't really have a clue what they are doing....
Welcome to Deckchairs & Dirty Air, a Patreon only production from Grand Prix Podcast and Rogue Two Media. This week Andy and Elton have found a rather large mountain and are set to carve four faces from Formula 1 into it to stand the test of time. The question is…… who? We hope you enjoy....
In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe's experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo's Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe's work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth.Check out this GPT we trained on the conversationTimestamps00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics40:00 – Power, safety, and why broad access to AI beats centralized control45:00 – Hallucinations, AlphaGo's Move 37, creativity, and logical consistency in AI50:00 – Provenance, epistemology, ontologies, and risks of closed-loop science55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the futureKey InsightsVibe coding fundamentally lowers the barrier to entry for technical creation by shifting the focus from syntax mastery to intent, structure, and iteration. Instead of learning code the traditional way and hitting constant friction, AI lets people learn by doing, correcting mistakes in real time, and gradually building mental models of how systems work, which changes who gets to participate in software creation.Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms because they reduce hidden state and unintended side effects. By making data flows explicit and preventing “spooky action at a distance,” immutable systems are easier for both humans and AI to reason about, debug, and extend, especially as code becomes increasingly machine-authored.AI is compressing the entire learning stack, from software to physical reality, enabling people to move fluidly between abstract knowledge and hands-on problem solving. Whether fixing hardware, setting up servers, or understanding networks, the combination of curiosity and AI assistance turns complex systems into navigable terrain rather than expert-only domains.Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops. 
What once required extreme dedication or specialist knowledge can now be done by small groups, meaning that relatively few motivated individuals can meaningfully change communication, resilience, and local autonomy without waiting for institutions to act.Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs. Their openness to alternative training methods, massive data ingestion, and open-weight strategies creates competitive pressure that limits monopolistic control by Western labs and gives users real leverage through choice.Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs, similar to AlphaGo's Move 37. If AI systems are overly constrained to consensus truth or authority-approved outputs, they risk losing the capacity for novel insight, suggesting that future progress depends on balancing correctness with exploratory freedom.The next phase of decentralization may begin with sovereign countries before sovereign individuals, as AI enables smaller nations to reason from first principles in areas like medicine, regulation, and science. Rather than a collapse into chaos, this points toward a more pluralistic world where power, knowledge, and decision-making are distributed across many competing systems instead of centralized authorities.
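The episode's point about immutability and "spooky action at a distance" is easy to see in a few lines of code. The sketch below is illustrative only and is not from the episode: it uses Python rather than the Elixir discussed on the show, and the Order record and discount helpers are hypothetical names chosen for the example. The mutable version changes state that other references silently observe, while the immutable version makes every change an explicit new value, which is the property that makes machine-written code easier to audit.

```python
# Minimal sketch, assuming a hypothetical order/discount example (not from the episode).
from dataclasses import dataclass, replace

def apply_discount_mutable(order: dict, pct: float) -> dict:
    # Mutates the caller's dict in place: any other code holding a
    # reference to `order` silently sees the change.
    order["total"] = order["total"] * (1 - pct)
    return order

@dataclass(frozen=True)
class Order:
    item: str
    total: float

def apply_discount(order: Order, pct: float) -> Order:
    # Returns a new Order; the original value is untouched, so there is
    # no hidden state for a reader (or a code-writing model) to track.
    return replace(order, total=order.total * (1 - pct))

if __name__ == "__main__":
    cart = {"item": "book", "total": 20.0}
    snapshot = cart                       # looks like a copy, but is the same object
    apply_discount_mutable(cart, 0.10)
    print(snapshot["total"])              # 18.0 -- changed "at a distance"

    order = Order("book", 20.0)
    discounted = apply_discount(order, 0.10)
    print(order.total, discounted.total)  # 20.0 18.0 -- both states stay visible
```

The functional version costs an extra allocation per update, but every piece of state a function can affect is visible in its signature, which is the trade-off the episode argues works in favor of AI-assisted codebases.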
This episode explores how intimacy, desire, and erotic states function as powerful tools for healing, nervous system regulation, and personal transformation. You'll learn why certain forms of conscious erotic play create genuine altered states of consciousness, not through substances or external compounds, but through the body's own internal chemistry. This conversation reframes sexuality as a core aspect of health, performance, and emotional resilience rather than something separate from personal development or spirituality. Watch this episode on YouTube for the full video experience: https://www.youtube.com/@DaveAspreyBPR Kimi Inch is a somatic therapist and educator with over 20 years of experience working at the intersection of intimacy, conscious kink, trauma healing, and embodied self awareness. Her work centers on helping individuals and couples access deep states of connection and healing through structured, consent based practices that engage breath, sensation, power dynamics, and presence. She is known for creating safe, grounded spaces where people can explore desire in ways that support lasting nervous system regulation and authentic self expression. Together, Dave Asprey and Kimi explore how erotic experiences trigger specific neurochemical responses including oxytocin, serotonin, endorphins, and adrenaline, and why this internal cocktail mirrors the effects of many psychedelic and somatic healing modalities. They discuss surrender, safety, attunement, and aftercare as essential components of integration, and why unexpressed desire often leaks into leadership, relationships, and performance in destructive ways. You'll Learn: • How erotic states create real altered states of consciousness through internal neurochemistry • Why intimacy can function as a powerful form of nervous system regulation and trauma healing • The difference between conscious desire and compulsive behavior • How surrender, safety, and attunement reshape emotional and relational patterns • Why suppressed desire leaks into work, leadership, and relationships • What makes healthy power dynamics healing rather than harmful • The role of aftercare and integration in long term transformation • How erotic intelligence connects to vitality, authenticity, and human performance Thank you to our sponsors! -EMR-Tek | https://www.emr-tek.com/DAVE and use code DAVE for 40% off. -Calroy | Head to https://calroy.com/dave for an exclusive discount. -Our Place | Head to https://fromourplace.com/ and use the code DAVE for 10% off your order. Dave Asprey is a four-time New York Times bestselling author, founder of Bulletproof Coffee, and the father of biohacking. With over 1,000 interviews and 1 million monthly listeners, The Human Upgrade brings you the knowledge to take control of your biology, extend your longevity, and optimize every system in your body and mind. Each episode delivers cutting-edge insights in health, performance, neuroscience, supplements, nutrition, biohacking, emotional intelligence, and conscious living. New episodes are released every Tuesday, Thursday, Friday, and Sunday (BONUS). Dave asks the questions no one else will and gives you real tools to become stronger, smarter, and more resilient. 
Keywords: conscious kink, erotic healing, intimacy and healing, altered states of consciousness, somatic therapy, nervous system safety, erotic intelligence, surrender and intimacy, power dynamics psychology, trauma and embodiment, aftercare and integration, neurochemistry of intimacy, oxytocin bonding, endorphins and pleasure, consent based intimacy, asking for what you want, suppressed desire, authenticity and intimacy, biohacking intimacy, Dave Asprey intimacy, Kimi Inch Biohack
Resources: • Kimi's Website: https://andmorepresents.com/ • Dave Asprey's Latest News | Go to https://daveasprey.com/ to join Inside Track today. • Danger Coffee: https://dangercoffee.com/discount/dave15 • My Daily Supplements: SuppGrade Labs (15% Off) • Favorite Blue Light Blocking Glasses: TrueDark (15% Off) • Dave Asprey's BEYOND Conference: https://beyondconference.com • Dave Asprey's New Book – Heavily Meditated: https://daveasprey.com/heavily-meditated • Upgrade Collective: https://www.ourupgradecollective.com • Upgrade Labs: https://upgradelabs.com • 40 Years of Zen: https://40yearsofzen.com
Timestamps: 0:00 - Trailer 1:25 - Altered States Through Kink 4:41 - Learning to Receive 12:17 - Play Parties Explained 19:31 - Life Force Energy 26:40 - Desire vs Compulsion 28:23 - The Kidnapping Session 35:53 - Aftercare and Integration 45:35 - Bedroom = Life 52:50 - CEOs and Surrender
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The 2025 season comes to an end with three drivers in contention to win the coveted trophy, but who will it be? Lando, it’s Lando, you knew that though. So what did we think? We hope you enjoy. Warning: this podcast occasionally contains strong language (which may be unsuitable for children), unusual humour (which may be...
On today's episode of HI Now Daily, we’re live at Windward Mall diving into all the holiday deals, and later, Kimié Miner joins us in the studio to chat about Christmas in Hawai‘i and her can’t-miss performances!
See omnystudio.com/listener for privacy information.
Do Lando and Oscar lack “playoff experience”? Probably. Is Ferrari okay? No. Do we need to leave Kimi alone? Uhhh absolutely. Qatar wasn't wild but that just means Abu Dhabi will be. Shoutout Williams for that podium and also RBR for finally announcing their 2026 line up. OH and Mick to IndyCar baby!!!
Hinch's Qatar hotel had an impressive water park. Both of the guys had travel stories to share before diving into the F1 race. A baffling pit decision from McLaren helped tighten the championship battle even more. And fan anger at Kimi Antonelli has reached an unacceptable level.
+++
Off Track is part of the SiriusXM Sports Podcast Network. If you enjoyed this episode and want to hear more, please give a 5-star rating and leave a review. Subscribe today wherever you stream your podcasts.
Want some Off Track swag? Check out our store!
Check out our website, www.askofftrack.com
Subscribe to our YouTube Channel.
Want some advice? Send your questions in for Ask Alex to AskOffTrack@gmail.com
Follow us on Twitter at @askofftrack. Or individually at @Hinchtown, @AlexanderRossi, and @TheTimDurham.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Welcome to December 2025 and an episode from 2021. This episode is packed with stories about breaking (tameshiwari). Remember that back in 2021, when this was recorded, Sensei Landyn was a kid. In the years between then and now, Sensei Landyn has done many breaks, including the required one with the nunchaku mentioned in this episode. We also mention quite a few episodes, including one on Kimi (focus). Here's the link: https://www.buzzsprout.com/477379/episodes/8501411
After this episode originally aired, we did discuss Master Kelljchian's saying "If you tell me you're stupid, I'll tell you you're stupid." Here ya go: https://www.buzzsprout.com/477379/episodes/9274638
And finally, we mention Sensei Baier in this episode. He has been on the show a couple of times to talk about Kurosawa movies. Here's one: https://www.buzzsprout.com/477379/episodes/14477612
Thanks for listening and if you have a minute and an extra couple of pennies, click the link below to support the show. We appreciate the support. Support the show
Thanks so much for listening and sharing the podcast with friends. Reach us all over the web. Facebook and Twitter are simply wildcatdojo. However, insta is wildcatdojo conversations. (There's a story there.)
On YouTube (where we are now airing some of our older episodes - complete with a slideshow that I tweak constantly): https://www.youtube.com/@wildcatdojo9869/podcasts
And for our webpage, where you can also find all the episodes and see some info about the dojo: http://wildcatdojo.com/025-6/podcast.html. And of course, we love it when you support our sponsor Honor Athletics. Here is their link: https://honor-athletics.com/
Thank you for listening.
The penultimate race of the season, where Lando ‘could’ have wrapped up the title, but Piastri, Verstappen and McLaren all had other ideas. So we head to Abu Dhabi for the showdown. Let's discuss Qatar first though, right? We hope you enjoy. Warning: this podcast occasionally contains strong language (which may be unsuitable for children), unusual...
Send us a text
From the first lap to 2.5 hours post-race (why'd it take so long, FIA?!), the Las Vegas Grand Prix has made it murkier than ever as to who our world champion will be. We're chatting about McLaren's blunder, slow FIA decisions, Max Verstappen being Max Verstappen, and lil Kimi dominating. Plus pink Cadillacs, Mickey Mouse, & more. Let's go!
Watch the episode
Kimi Antonelli watching F1 Academy race Carlos Sainz on Oscar Piastri's penalty in Brazil Ferrari CEO comments Carlos Sainz on Ferrari CEO comments Kimi Antonelli watches Jannik Sinner in Turin Carlos Sainz at the Raiders game Franco Colapinto vs Lance Stroll drama Oscar Piastri's accidental reshare Isack Hadjar Racing Bulls TikTok Toto Wolff says he's Team Carlisle Max Verstappen credits driving to his mom Oscar Piastri on calling his mom Max Verstappen on the season Lewis Hamilton vs Charles Leclerc food Alex Albon and Carlos Sainz eat burgers Valtteri Bottas officiates Las Vegas wedding George Russell helps Girl Scouts distribute cookies Oscar Piastri pre-race interview Cynthia Erivo ranks drivers singing Cynthia Erivo intro to the Las Vegas GP Louis Tomlinson and Lando Norris Logan Lerman at the race All the celebs at the GP Loose drain cover in FP2 Lewis Hamilton says it's his worst season Kimi Antonelli celebrating with Max Verstappen Terry Crews cooldown car Mercedes graphics banter
Find me outside the pod: Follow me @boxboxf1pod
Visit the website for more deets on me and the podcast
Share your thoughts/opinions/questions with me!!
It's a very eventful weekend in the Vegas GP! Meg and Spanners are back to take in all the events of Sin City, including a massive mistake that caused McLaren to be disqualified, Kimi possibly achieving his final form, and some runoff track madness that caused some safety concerns. (00:00) Intro (4:02) Big trouble for McLaren (12:56) Lando's first lap (24:21) Kimi goes for it (31:09) Rainy Vegas vibes (42:51) Runoff madness! (51:21) Big Aston Martin changes Host: Megan Schuster Guest: Spanners Ready Senior Producer: Steve Ahlman Learn more about your ad choices. Visit podcastchoices.com/adchoices
We head off to the glittering lights and atmosphere of Las Vegas where there’s rain in the desert, Ferrari are slowest, Lando is aggressive and McLaren are ‘Too Low’. We hope you enjoy. Warning: this podcast occasionally contains strong language (which may be unsuitable for children), unusual humour (which may be unsuitable for adults), and the...
In this week's EverythingF1 Podcast, we dive into a wild Las Vegas GP packed with drama, controversy, and championship-shaking fallout.
We break down McLaren's double disqualification, what it means for both drivers' mindsets, and how it detonates the championship picture. With Max Verstappen suddenly fully back in the title fight, we take a deeper look at Red Bull's internal dynamics and how the post-Horner era is shaping the team.
We also celebrate Kimi Antonelli's stunning charge from P17 to the podium — one of the standout drives of the season — before shifting gears into a more serious discussion on Lewis Hamilton's ongoing struggles at Ferrari, what's going wrong, and what it means for his future.
All that, plus our usual analysis, debates, and reactions from a chaotic night in Vegas.
Our 225th episode with a summary and discussion of last week's big AI news!
Recorded on 11/16/2025
Hosted by Andrey Kurenkov and co-hosted by Michelle Lee
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
New AI model releases include GPT-5.1 from OpenAI and Ernie 5.0 from Baidu, each with updated features and capabilities.
Self-driving technology advancements from Baidu's Apollo Go and Pony AI's IPO highlight significant progress in the automotive sector.
Startup funding updates include Inception raising $50M for diffusion models, while Cursor and Gamma secure significant valuations for coding and presentation tools respectively.
AI-generated content is gaining traction with songs topping charts and new marketplaces for AI-generated voices, indicating evolving trends in synthetic media.
Timestamps:
(00:01:19) News Preview
Tools & Apps
(00:02:13) OpenAI says the brand-new GPT-5.1 is ‘warmer' and has more ‘personality' options | The Verge
(00:04:51) Baidu Unveils ERNIE 5.0 and a Series of AI Applications at Baidu World 2025, Ramps Up Global Push
(00:07:00) ByteDance's Volcano Engine debuts coding agent at $1.3 promo price
(00:08:04) Google will let users call stores, browse products, and check out using AI | The Verge
(00:10:41) Fei-Fei Li's World Labs speeds up the world model race with Marble, its first commercial product | TechCrunch
(00:13:30) OpenAI says it's fixed ChatGPT's em dash problem | TechCrunch
Applications & Business
(00:16:01) Anthropic announces $50 billion data center plan | TechCrunch
(00:18:06) Baidu teases next-gen AI training, inference accelerators • The Register
(00:20:50) Meta chief AI scientist Yann LeCun plans to exit and launch own start-up
(00:24:41) Amazon Demands Perplexity Stop AI Tool From Making Purchases - Bloomberg
(00:27:32) AI PowerPoint-killer Gamma hits $2.1B valuation, $100M ARR, founder says | TechCrunch
(00:29:33) Inception raises $50 million to build diffusion models for code and text | TechCrunch
(00:31:14) Coding assistant Cursor raises $2.3B 5 months after its previous round | TechCrunch
(00:33:56) China's Baidu says it's running 250,000 robotaxi rides a week — same as Alphabet's Waymo
(00:35:26) Driverless Tech Firm Pony AI Raises $863 Million in HK Listing
Projects & Open Source
(00:36:30) Moonshot's Kimi K2 Thinking emerges as leading open source AI
Research & Advancements
(00:39:22) [2510.26787] Remote Labor Index: Measuring AI Automation of Remote Work
(00:45:21) OpenAI Researchers Train Weight Sparse Transformers to Expose Interpretable Circuits - MarkTechPost
(00:49:34) Kimi Linear: An Expressive, Efficient Attention Architecture
(00:53:33) Watch Google DeepMind's new AI agent learn to play video games | The Verge
(00:57:34) arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers
Policy & Safety
(00:59:35) Stability AI largely wins UK court battle against Getty Images over copyright and trademark | AP News
(01:01:48) Court rules that OpenAI violated German copyright law; orders it to pay damages | TechCrunch
(01:03:48) Microsoft's $15.2B UAE investment turns Gulf State into test case for US AI diplomacy | TechCrunch
Synthetic Media & Art
(01:06:39) An AI-Generated Country Song Is Topping A Billboard Chart, And That Should Infuriate Us All | Whiskey Riff
(01:10:59) Xania Monet is the first AI-powered artist to debut on a Billboard airplay chart, but she likely won't be the last | CNN
(01:13:34) ElevenLabs' new AI marketplace lets brands use famous voices for ads | The Verge
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from Ai2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering "open weights" and calling it a day, Ai2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It's an unusually transparent look into the inner machinery of a modern frontier-class model.
Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you've been trying to truly understand the "RL + reasoning" era of LLMs, this is the clearest explanation you'll hear.
We widen the lens to the global picture: why Meta's retreat from open source created a "vacuum of influence," how Chinese labs like Qwen, DeepSeek, and Moonshot (the maker of Kimi) surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.
Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single "AGI moment." If you care about the future of open models and the global AI landscape, this is an essential conversation.
Allen Institute for AI (AI2)
Website - https://allenai.org
X/Twitter - https://x.com/allen_ai
Nathan Lambert
Blog - https://www.interconnects.ai
LinkedIn - https://www.linkedin.com/in/natolambert/
X/Twitter - https://x.com/natolambert
Luca Soldaini
Blog - https://soldaini.net
LinkedIn - https://www.linkedin.com/in/soldni/
X/Twitter - https://x.com/soldni
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold Open
(00:39) – Welcome & today's big announcement
(01:18) – Introducing the Olmo 3 model family
(02:07) – What "base models" really are (and why they matter)
(05:51) – Dolma 3: the data behind Olmo 3
(08:06) – Performance vs Qwen, Gemma, DeepSeek
(10:28) – What true open source means (and why it's rare)
(12:51) – Intermediate checkpoints, transparency, and why AI2 publishes everything
(16:37) – Why Qwen is everywhere (including U.S. startups)
(18:31) – Why Chinese labs go open source (and why U.S. labs don't)
(20:28) – Inside ATOM: the U.S. response to China's model surge
(22:13) – The rise of "thinking models" and inference-time scaling
(35:58) – The full Olmo pipeline, explained simply
(46:52) – Pre-training: data, scale, and avoiding catastrophic spikes
(50:27) – Mid-training (tail patching) and avoiding test leakage
(52:06) – Why long-context training matters
(55:28) – SFT: building the foundation for reasoning
(1:04:53) – Preference tuning & why DPO still works
(1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain
(1:13:59) – Why RL is so technically brutal
(1:18:17) – Complexity tax vs AGI hype
(1:21:58) – How everyone can contribute to the future of AI
(1:27:26) – Closing thoughts
WHAT. A. RACE. Lando takes a controlling lead in the WDC. Max finishes P3 from the pitlane. Kimi & Liam put up epic defences. & Ferrari.... well Ferrari was there. This episode recaps our most exciting race of the season yet!
Follow us on Instagram @tracktalk.pod, we love you all
Ladies and Gentlemen, welcome back to Pitstop! The Brazilian Grand Prix is over, another PERFECT weekend for Lando Norris, and Verstappen has once again blown us all away! Let's go Kimi with his best finish in Formula 1 ever... But the big talking point: is Oscar Piastri being ROBBED by his own team? It's all very strange, the lack of support, the drop in performance! What do you think is really going wrong with Oscar in F1 right now? Lando is pulling away with the championship with only 3 races left! Who will show up in Vegas? Or will the great robbery of Oscar Piastri continue... Learn more about your ad choices. Visit podcastchoices.com/adchoices
NORRIS BEGINS HAMMERING THE FINAL NAILS IN THE CHAMPIONSHIP!...PIASTRI LOSING INTEREST…MAX DRIVER OF THE DAY AND...FERNANDO READY FOR LAS VEGAS. IN THIS WEEK'S NASIR HAMEED CORNER, WE KEEP IT SIMPLE WITH SOME DUKE OF DIJON AND NASIR BANTER!
It was a dominant performance from Lando Norris as he claimed his seventh victory of the year, following up on his victory in the sprint race with another 25 points on Sunday, extending his championship lead to 24 points over Oscar Piastri. Early race incidents would leave Oscar Piastri with a shock penalty and lead to the retirement of Charles Leclerc through no fault of his own. And in unexpected fashion, Max Verstappen would grab fans' attention by converting a pit-lane start all the way into a P3 finish, grabbing a podium on a day many fans expected his championship shot to slip away from him.
None of the top ten were able to get past each other in the initial portion of Lap 1 except Liam Lawson on George Russell, with Lewis Hamilton's Ferrari having the weakest start of any car on the grid, dropping four places to 17th. A loss of control from home favorite Gabriel Bortoleto in the Sauber came only halfway through the first lap, causing the 21-year-old to hit the barriers, bringing out a safety car and ending his race. The safety car was brought out for the third time in a row at the Brazilian Grand Prix, lasting for three laps and coming in on Lap 4.
There was more chaos immediately, as Charles Leclerc, Kimi Antonelli and Oscar Piastri went three abreast at Turn 1 after the Italian struggled to keep up with Lando Norris' pace following the restart. Piastri and Antonelli would collide, sending the Mercedes into Leclerc's Ferrari and causing the Monegasque racer to both lose a tire and incur suspension damage, ending his race prematurely. Unable to continue, Leclerc's Ferrari would pull over and bring out a Virtual Safety Car, with the McLarens of Norris and Piastri leading from the Mercedes of Antonelli and the Racing Bull of Isack Hadjar.
Laps 14 and 17 would see ten-second penalties applied to both Yuki Tsunoda and Oscar Piastri, with Tsunoda's given for an incident with Lance Stroll and Piastri's for the aforementioned crash after the safety car restart. Verstappen, who had taken an early pit stop to change from hard tires to mediums, found himself up to seventh by Lap 19 thanks to Hadjar and Pierre Gasly entering the pit lane. Seventh turned into fifth by Lap 21, the Dutchman having gained 15 places in the first third of the race and looking impressive as he sought to restore his championship ambitions.
LANDO: “It was an amazing race, and it's nice to win here in Brazil. It's an amazing track with amazing fans. This one was for one of my mentors, Gil, I hope he'd be very proud.
“It was a great win, but to be honest, seeing how quick the competition was today, it's clear we've still got work to do. I'll go back, see the team, congratulate them and see what we can do better. Looking ahead, I'll keep focusing on myself, keep my head down, ignore the noise and keep pushing.”
MAX: “From pitlane to podium, this weekend has completely turned around for me, something that I didn't think was possible. The start of the race was very hectic and I picked up a puncture early on from a load of debris on the track, which meant that I pretty much had to start the race again. The Team used the right strategy from start to finish, which allowed me to get through all of the traffic very efficiently. I definitely had to send it a few times to get past the other cars but I love doing that and ended up having an unexpectedly fun race. Overall it showed that we had really good pace today and that the grip was much better than the last couple of days. The atmosphere at Interlagos was amazing and it really spurred me on. I am so proud of the Team and would like to thank them for all of the hard work that they put into making the changes post-Quali last night. We kept pushing and took multiple risks this weekend because we never want to settle for second and we didn't give up. To start in the pitlane and finish P3 on the podium, only 10 seconds off P1, was incredible. Now all we can do is keep fighting hard over the final few races of the season and do the best that we possibly can whilst trying to find as much performance as we can extract from the car. A huge congratulations to Kimi as well, he drove amazingly well, which will have given his confidence a huge boost, which is great for any rookie!"
[Photo: Race winner Lando Norris (McLaren), second-placed Andrea Kimi Antonelli (Mercedes) and third-placed Max Verstappen (Red Bull Racing), joined by McLaren's Mark Norris, on the podium at Autodromo Jose Carlos Pace, Sao Paulo, November 09, 2025. Photo by Mark Thompson/Getty Images]
Alex Albon: It was a good race for the fans today but unfortunately for us it was a bit of a race to forget. We had good pace when we could show it. We've struggled with pace all weekend but seem to have recovered a little bit today. In the end, what took us out of contention for points was that I think we stayed out too long on the first stint and we never really recovered from there. In the last stint we were quick and were fighting our way back up the grid, and just missed out on a point at the end. It's frustrating that our rivals scored points today, but we will regroup and look forward to a better weekend in Las Vegas.
Carlos Sainz: Not the day I was hoping for. Once I got squeezed at Turn 1, I had considerable damage to the car and my race was compromised from there. We managed to stay in the hunt for points for most of the race, but after a slow first stop, combined with the damage, that was it unfortunately. Time to go back home and see what we can do at these types of circuits, as Qatar will also be a challenge. A few races to go, so we cannot relax. Let's keep going.