MOOBARKFLUFF! Click here to send us a comment or message about the show! Welcome to the season finale of BFFT. This is a long episode and lightly edited. We are joined by many furs who have been past guests on the show. Cheetaro gives us a movie review and TickTock reports on some furry news. Thank you all for listening for the past 5 years! Season 6 will launch very soon, so tune in for another confusing episode of BFFT. Moobarkfluff, everyfur! This podcast contains adult language and adult topics. It is rated M for Mature. Listener discretion is advised. Support the show. Thanks to all our listeners and to our staff: Bearly Normal, Rayne Raccoon, Taebyn, Cheetaro, TickTock, and Ziggy the Meme Weasel. You can send us a message on Telegram at BFFT Chat, or via email at: bearlyfurcasting@gmail.com
The Nintendo Entertainment Podcast is here! The hosts are gearing up for the Nintendo Switch 2 launch, and there's plenty to discuss about it! First, the trio discusses their gaming exploits! Todd reveals his latest adventures in Xenoblade Chronicles X: Definitive Edition and continues his undying hatred of Tatsu! Will, meanwhile, breaks down the new collaboration between Monster Hunter Wilds and Street Fighter! As for Scott, he's been busy playing games to review them, including Popucom and All In Abyss: Judge The Fake! Then, in the news, with the Nintendo Switch 2 coming in one week, the reports about it continue to crank up! There have been leaks, backhands to those who keep bashing the game-key card system, Pokémon Legends: Z-A has gotten a release date, and more! Finally, in the main event, the trio does some "planning" to see how they would lay out the Switch 2 calendar for 2025 to ensure that the console has the best game schedule possible for the months to come! So sit back, relax, and enjoy the Nintendo Entertainment Podcast!
Episode by Untimoteo. The anime The Way of the Househusband (La via del Grembiule - Lo Yakuza casalingo), on Netflix since 2021, is a comedy series (2 seasons, 15 episodes) based on the manga of the same name, written and drawn by Kōsuke Ōno starting in 2018. Each episode of The Way of the Househusband consists of mini-sketches that revolve, more or less, around the figure of Tatsu. He was once a notorious yakuza boss who decided to abandon his criminal enterprises to devote himself to housekeeping. A clever show, if a bit repetitive, capable of drawing more than one laugh from lovers of cinema and manga. "Animazione" is the Mondoserie podcast format dedicated to the different schools and expressions of the genre, from the East to the European and American scenes. Part of the project: https://www.mondoserie.it/ Subscribe to the podcast on your favorite platform or at: https://www.spreaker.com/show/mondoserie-podcast Connect with MONDOSERIE on social media: https://www.facebook.com/mondoserie https://www.instagram.com/mondoserie.it/ https://www.youtube.com/channel/UCwXpMjWOcPbFwdit0QJNnXQ https://www.linkedin.com/in/mondoserie/
Stocks are on quite the rollercoaster ride, swinging wildly every day this week amid uncertainty about President Trump's tariff plans and their economic fallout. On Wednesday, President Trump announced a 90-day pause on reciprocal tariffs except for China. We were joined by Tatsu Ikeda, an information technology consultant and engineer who has developed an expertise in the U.S. and global economy. Tatsu discussed the latest in the world of U.S. economics and finance and how the market continues to shake out. Listen to WBZ NewsRadio on the NEW iHeart Radio app and be sure to set WBZ NewsRadio as your #1 preset!
Phillip and Eric end 2024 by talking about the genre-bending manga series Dandadan by Yukinobu Tatsu! They discuss its quirky blend of sci-fi/fantasy shonen action with romantic comedy, its cast of nuanced characters that defy genre archetypes, and whether Phil's belief in ghosts or aliens is more plausible than Eric finding true love over the internet.
Texas history was made last night, when the star-studded Michelin ceremony in Houston crowned the state's best restaurants for the first time. Across the state, Michelin handed out 15 one-star awards. No restaurant in Texas received a two- or three-star award, Michelin's highest honors, reserved for the best of the best. In Dallas, just one Michelin star was given: a one-star honor to Deep Ellum omakase restaurant Tatsu. Following that, East Dallas restaurant Rye received a special award for Exceptional Cocktails. Seven Bib Gourmands were handed out to Dallas-Fort Worth eateries with reasonably priced menu items, and 20 restaurants in North Texas received a Recommended nod. In other news, Texas Gov. Greg Abbott was already busing migrants from the border to New York when he picked a new northern target in 2022 — Chicago, another destination chosen because it was run by Democrats. Southwest Airlines will offer buyouts to workers at 18 airports across the country as the Dallas-based carrier cuts back on flying planes due to "aircraft delivery delays" from Boeing. And Micah Parsons is coming under fire from national media analysts for his comments about head coach Mike McCarthy. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Today we talk about Tatsu Yamashiro, best known as Katana, on account of her cursed katana that (for most of her use of it) has the ghost of her dead husband stuck inside. Today's mentioned & relevant media: - Love Everlasting vol. 1. Thanks to Victoria Watkins for our icon! Support Capes and Japes by checking out our Patreon or donating to the tip jar. Find out more on the Capes and Japes website.
In the ninth Rádio Gambiarra, Gustavo Lopes and Carolina Gusmão discuss the games Taco Gato Kids, Tatsu, Infiltrado, Coups Disney, Space Aztecs, Dingo, Campos de Arle, and Cerebria: The Inside World. Cover art by Gustavo Lopes. Rádio Gambiarra is the new format for game episodes from Gambiarra Board Games. Instead of doing one episode per game, from now on we will make episodes grouping the games we played between one show and the next, which lets us include as many games as possible between releases: games chosen by our listeners, games already covered in the past, expansions, and even themed blocks, always focusing on our experience with the game. Want to buy games at a nice price and support Gambiarra Board Games? Go to https://bravojogos.com.br/ and use the coupon GAMBIARRANABRAVO. Check out photos of the games on our Instagram: instagram.com/gambiarraboardgames. Email for suggestions: contato@papodelouco.com papodelouco.com Support: Acessórios BG: https://www.acessoriosbg.com.br BGSP: https://boardgamessp.com.br/ Bravo Jogos: https://bravojogos.com.br/ Aroma de Madeira: https://www.aromademadeira.com.br Opening: Free Transition Music - Upbeat 80s Music - 'Euro Pop 80s' (Intro A - 4 seconds) Jay Man - OurMusicBox https://www.youtube.com/c/ourmusicbox Background tracks: Go Bossa Lounge Jazz Royalty Free Music / Free Instrumental Piano Music - Piano Sway - OurMusicBox / Relaxing Jazz Chill Cafe Music (Copyright Free) Free Background Music For Videos / Free Instrumental Music - Take It Slow - OurMusicBox
rWotD Episode 2706: Tamagawa Chōtatsu. Welcome to Random Wiki of the Day, your journey through Wikipedia's vast and varied content, one random article at a time. The random article for Monday, 30 September 2024 is Tamagawa Chōtatsu. Tamagawa Wōji Chōtatsu (玉川 王子 朝達, 23 March 1826 – 18 February 1862), also known by his Chinese-style name Shō Shin (尚 慎), was a prince of the Ryukyu Kingdom. Prince Tamagawa was the seventh son of King Shō Kō. He was also a half-brother of King Shō Iku, Prince Ōzato Chōkyō, and Prince Ie Chōchoku. Nakazato Chōkei (仲里 朝慶) had no heir and adopted him. After Chōei's death, he became the 14th head of the royal family Tamagawa Udun (玉川御殿) and inherited his family's hereditary fief, Kanegusuku magiri (兼城間切, a part of modern Itoman, Okinawa). After he took power, King Shō Tai dispatched a gratitude envoy to Edo, Japan in 1850. Prince Tamagawa and Nomura Chōgi (野村 朝宜, also known as Shō Genmo 向 元模) were appointed Envoy (正使, seishi) and Deputy Envoy (副使, fukushi) respectively. They sailed back the next year. Prince Tamagawa kept in close touch with the pro-Satsuma faction, including Makishi Chōchū, Onga Chōkō, and Oroku Ryōchū. It was said that they planned to depose King Shō Tai and install him instead. In 1859, Makishi, Onga, and Oroku were implicated in the Makishi-Onga Incident (牧志恩河事件) and arrested. Prince Ie was appointed as judge to interrogate them. Prince Ōzato suggested that Prince Tamagawa should be put into prison, but was dissuaded by the king's instructor, Tsuhako Seisei. Prince Tamagawa was banned from politics and placed under house arrest.
He died in the same year. This recording reflects the Wikipedia text as of 00:13 UTC on Monday, 30 September 2024. For the full current version of the article, see Tamagawa Chōtatsu on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm neural Arthur.
Want to hear all about the manga behind the hottest new anime of the Fall 2024 lineup? Well, look no further! Mat is joined by the hosts of The Undisputed Anime Podcast, ZaneTaichou AND TsuTsuDae, to talk about one of our favourite current manga titles, Dan Da Dan! Join us as we talk about the book's impact, how it blends incredible art and humour to tackle difficult subjects, not to mention getting into the weeds on how we view different creators and their creations... Thanks to Juliano Zucareli for our theme music! Find us on: X: Manga Tak Pod | Bluesky: Manga Tak Pod | Instagram: Manga Tak Pod
In this episode, we took a deep dive into the cultural significance and mythology surrounding dragons throughout Asia, guided by our guest Crystal (Twitter: @Shapeshift16, https://twitter.com/Shapeshift16). As a commission artist, TCG player, and dragon enthusiast, Crystal brought a wealth of knowledge and passion to our discussion, offering a unique perspective on how dragons are portrayed and perceived in different regions across Asia. We began with an overview of dragons across Asia, covering the varying depictions from China, Japan, Southeast Asia, South Asia, and West Asia. Each region has its distinct interpretation—dragons are seen as symbols of power and wisdom, intertwined with nature and the spiritual realm. We then transitioned into a conversation about the perception of dragons in Asian society, highlighting how different Asian cultures often regard dragons with reverence. Unlike Western depictions, where dragons are often destructive forces to be vanquished, Asian dragons are seen as protectors or symbols of good fortune and prosperity. This perception reflects a broader view within Asian society, where dragons embody strength, control over the elements, and deep spiritual significance. Our conversation then dove into the myths and legends of dragons across Asia. From the Lung of Chinese mythology to the Tatsu of Japan, dragons play a critical role in ancient stories, often depicted as guardians or creators. These legends are deeply embedded in the cultural fabric of each region, often serving as metaphors for larger natural or cosmic forces. Crystal enriched this conversation by drawing from her vast knowledge of the deep cultural connections Asian societies have with dragons, noting how these mythical creatures continue to influence modern media and folklore. In conclusion, the discussion highlighted that dragons are more than just mythical creatures; they are an enduring symbol of strength, wisdom, and spirituality in Asian cultures.
Dragons play a significant role across a wide variety of cultural expressions, from religious rites to folklore, from art to modern storytelling. Crystal's perspective brought to light how integral these creatures are to understanding Asia's history and how they've been interpreted and reinterpreted through centuries. We examined how dragons serve not only as protectors and creators in these cultures but as embodiments of elemental forces, societal values, and cosmic power. The enduring reverence for dragons in Asia reflects their profound cultural significance, making them more than just legendary beings—they are symbols that unify past, present, and future generations. This connection continues to influence modern depictions in art, film, and even global pop culture, demonstrating how dragons have transcended their mythical origins to become a part of the living, breathing cultural landscape of Asia today. Through this exploration, we were able to appreciate the depth of reverence and the multi-faceted role dragons play in shaping both the past and present, solidifying their place in Asian and global narratives for years to come.
Rachael and Ruth review two slice-of-life anime on Netflix, The Way of the Househusband and Aggretsuko. Tatsu is a former yakuza turned domestic god; Retsuko's a beleaguered accountant and secret heavy metal fiend. They both provide comedy gold! Also mentions the upcoming Helluva short.
Today's guest, Nicholas Carlini, a research scientist at DeepMind, argues that we should be focusing more on what AI can do for us individually, rather than trying to have an answer for everyone.

"How I Use AI" - A Pragmatic Approach

Carlini's blog post "How I Use AI" went viral for good reason. Instead of giving a personal opinion about AI's potential, he simply laid out how he, as a security researcher, uses AI tools in his daily work. He divided it into 12 sections:

* To make applications
* As a tutor
* To get started
* To simplify code
* For boring tasks
* To automate tasks
* As an API reference
* As a search engine
* To solve one-offs
* To teach me
* Solving solved problems
* To fix errors

Each section has specific examples, so we recommend going through it. It also includes all the prompts used; in the "make applications" case, it's 30,000 words total!

My personal takeaway is that the majority of the work AI can do successfully is what humans dislike doing: writing boilerplate code, looking up docs, taking repetitive actions, etc. These are usually boring tasks with little creativity but a lot of structure. This is the strongest argument for why LLMs, especially for code, are more beneficial to senior employees: if you can get the boring stuff out of the way, there's a lot more value you can generate. This is less and less true as you move toward entry-level jobs, which are mostly boring and repetitive tasks. Nicholas argues both sides ~21:34 in the pod.

A New Approach to LLM Benchmarks

We recently did a Benchmarks 201 episode, a follow-up to our original Benchmarks 101, and some of the issues have stayed the same. Notably, there's a big discrepancy between what benchmarks like MMLU test and what the models are used for. Carlini created his own domain-specific language for writing personalized LLM benchmarks.
The idea is simple but powerful:

* Take tasks you've actually needed AI for in the past.
* Turn them into benchmark tests.
* Use these to evaluate new models based on your specific needs.

It can represent very complex tasks, from a single code generation to drawing a US flag using C:

"Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")

"Write a C program that draws an american flag to stdout." >> LLMRun() >> CRun() >> VisionLLMRun("What flag is shown in this image?") >> (SubstringEvaluator("United States") | SubstringEvaluator("USA"))

This approach solves a few problems:

* It measures what's actually useful to you, not abstract capabilities.
* It's harder for model creators to "game" your specific benchmark, a problem that has plagued standardized tests.
* It gives you a concrete way to decide if a new model is worth switching to, similar to how developers might run benchmarks before adopting a new library or framework.

Carlini argues that if even a small percentage of AI users created personal benchmarks, we'd have a much better picture of model capabilities in practice.

AI Security

While much of the AI security discussion focuses on either jailbreaks or existential risks, Carlini's research targets the space in between. Some highlights from his recent work:

* LAION 400M data poisoning: By buying expired domains referenced in the dataset, Carlini's team could inject arbitrary images into models trained on LAION 400M. You can read the paper "Poisoning Web-Scale Training Datasets is Practical" for all the details. This is a great example of expanding the scope beyond the model itself and looking at the whole system and how it can become vulnerable.

* Stealing model weights: They demonstrated how to extract parts of production language models (like OpenAI's) through careful API queries.
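To make the benchmark DSL shown earlier concrete, here is a minimal toy reconstruction in Python. This is our own sketch of the `>>` pipeline idea, not Carlini's actual implementation: his harness queries live models, while here `LLMRun` is stubbed with a canned response so the example runs standalone.

```python
import contextlib
import io

class Stage:
    """One step in a benchmark pipeline."""
    def __rshift__(self, other):
        return Pipeline([self, other])
    def __rrshift__(self, prompt):
        # Allows a bare string on the left: "prompt" >> Stage()
        return Pipeline([Const(prompt), self])
    def run(self, value):
        raise NotImplementedError

class Pipeline(Stage):
    def __init__(self, stages):
        self.stages = stages
    def __rshift__(self, other):
        return Pipeline(self.stages + [other])
    def run(self, value=None):
        for stage in self.stages:
            value = stage.run(value)
        return value

class Const(Stage):
    def __init__(self, value):
        self.value = value
    def run(self, _):
        return self.value

class LLMRun(Stage):
    # Stub: a real harness would send the prompt to a live model here.
    def run(self, prompt):
        return 'print("hello world")'

class PythonRun(Stage):
    # Execute the generated code and capture its stdout.
    def run(self, code):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue()

class SubstringEvaluator(Stage):
    def __init__(self, needle):
        self.needle = needle
    def run(self, text):
        return self.needle in text

bench = ("Write hello world in python"
         >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world"))
print(bench.run())  # True with the stubbed LLM above
```

The `__rrshift__` hook is what lets a plain prompt string sit on the left of the first `>>`; everything downstream is just a fold over the stage list.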
This research, "Extracting Training Data from Large Language Models", shows that even black-box access can leak sensitive information.

* Extracting training data: In some cases, they found ways to make models regurgitate verbatim snippets from their training data. He and Milad Nasr wrote a paper on this as well: "Scalable Extraction of Training Data from (Production) Language Models". They also think this might be applicable to extracting RAG results from a generation.

These aren't just theoretical attacks. They've led to real changes in how companies like OpenAI design their APIs and handle data. If you really miss logit_bias and logit results by token, you can blame Nicholas :)

We had a ton of fun also chatting about things like Conway's Game of Life, how much data can fit in a piece of paper, and porting Doom to JavaScript. Enjoy!

Show Notes

* How I Use AI
* My Benchmark for LLMs
* Doom Javascript port
* Conway's Game of Life
* Tic-Tac-Toe in one printf statement
* International Obfuscated C Code Contest
* Cursor
* LAION 400M poisoning paper
* Man vs Machine at Black Hat
* Model Stealing from OpenAI
* Milad Nasr
* H.D. Moore
* Vijay Bolina
* Cosine.sh
* uuencode

Timestamps

* [00:00:00] Introductions
* [00:01:14] Why Nicholas writes
* [00:02:09] The Game of Life
* [00:05:07] "How I Use AI" blog post origin story
* [00:08:24] Do we need software engineering agents?
* [00:11:03] Using AI to kickstart a project
* [00:14:08] Ephemeral software
* [00:17:37] Using AI to accelerate research
* [00:21:34] Experts vs non-expert users as beneficiaries of AI
* [00:24:02] Research on generating less secure code with LLMs
* [00:27:22] Learning and explaining code with AI
* [00:30:12] AGI speculations?
* [00:32:50] Distributing content without social media
* [00:35:39] How much data do you think you can put on a single piece of paper?
* [00:37:37] Building personal AI benchmarks
* [00:43:04] Evolution of prompt engineering and its relevance
* [00:46:06] Model vs task benchmarking
* [00:52:14] Poisoning LAION 400M through expired domains
* [00:55:38] Stealing OpenAI models from their API
* [01:01:29] Data stealing and recovering training data from models
* [01:03:30] Finding motivation in your work

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we're in the in-person studio, which Alessio has gorgeously set up for us, with Nicholas Carlini. Welcome. Thank you. You're a research scientist at DeepMind. You work at the intersection of machine learning and computer security. You got your PhD from Berkeley in 2018, and also your BA from Berkeley as well. And mostly we're here to talk about your blogs, because you are so generous in just writing up what you know. Well, actually, why do you write?

Nicholas [00:00:41]: Because I like, I feel like it's fun to share what you've done. I don't like writing, sufficiently didn't like writing, I almost didn't do a PhD, because I knew how much writing was involved in writing papers.
I was terrible at writing when I was younger. I do like the remedial writing classes when I was in university, because I was really bad at it. So I don't actually enjoy, I still don't enjoy the act of writing. But I feel like it is useful to share what you're doing, and I like being able to talk about the things that I'm doing that I think are fun. And so I write because I think I want to have something to say, not because I enjoy the act of writing.Swyx [00:01:14]: But yeah. It's a tool for thought, as they often say. Is there any sort of backgrounds or thing that people should know about you as a person? Yeah.Nicholas [00:01:23]: So I tend to focus on, like you said, I do security work, I try to like attacking things and I want to do like high quality security research. And that's mostly what I spend my actual time trying to be productive members of society doing that. But then I get distracted by things, and I just like, you know, working on random fun projects. Like a Doom clone in JavaScript.Swyx [00:01:44]: Yes.Nicholas [00:01:45]: Like that. Or, you know, I've done a number of things that have absolutely no utility. But are fun things to have done. And so it's interesting to say, like, you should work on fun things that just are interesting, even if they're not useful in any real way. And so that's what I tend to put up there is after I have completed something I think is fun, or if I think it's sufficiently interesting, write something down there.Alessio [00:02:09]: Before we go into like AI, LLMs and whatnot, why are you obsessed with the game of life? So you built multiplexing circuits in the game of life, which is mind boggling. So where did that come from? And then how do you go from just clicking boxes on the UI web version to like building multiplexing circuits?Nicholas [00:02:29]: I like Turing completeness. The definition of Turing completeness is a computer that can run anything, essentially. 
And the game of life, Conway's game of life is a very simple cellular 2D automata where you have cells that are either on or off. And a cell becomes on if in the previous generation some configuration holds true and off otherwise. It turns out there's a proof that the game of life is Turing complete, that you can run any program in principle using Conway's game of life. I don't know. And so you can, therefore someone should. And so I wanted to do it. Some other people have done some similar things, but I got obsessed into like, if you're going to try and make it work, like we already know it's possible in theory. I want to try and like actually make something I can run on my computer, like a real computer I can run. And so yeah, I've been going on this rabbit hole of trying to make a CPU that I can run semi real time on the game of life. And I have been making some reasonable progress there. And yeah, but you know, Turing completeness is just like a very fun trap you can go down. A while ago, as part of a research paper, I was able to show that in C, if you call into printf, it's Turing complete. Like printf, you know, like, which like, you know, you can print numbers or whatever, right?Swyx [00:03:39]: Yeah, but there should be no like control flow stuff.Nicholas [00:03:42]: Because printf has a percent n specifier that lets you write an arbitrary amount of data to an arbitrary location. And the printf format specifier has an index into where it is in the loop that is in memory. So you can overwrite the location of where printf is currently indexing using percent n. So you can get loops, you can get conditionals, and you can get arbitrary data rates again. So we sort of have another Turing complete language using printf, which again, like this has essentially zero practical utility, but like, it's just, I feel like a lot of people get into programming because they enjoy the art of doing these things. 
And then they go work on developing some software application and lose all joy with the boys. And I want to still have joy in doing these things. And so on occasion, I try to stop doing productive, meaningful things and just like, what's a fun thing that we can do and try and make that happen.Alessio [00:04:39]: Awesome. So you've been kind of like a pioneer in the AI security space. You've done a lot of talks starting back in 2018. We'll kind of leave that to the end because I know the security part is, there's maybe a smaller audience, but it's a very intense audience. So I think that'll be fun. But everybody in our Discord started posting your how I use AI blog post and we were like, we should get Carlini on the podcast. And then you were so nice to just, yeah, and then I sent you an email and you're like, okay, I'll come.Swyx [00:05:07]: And I was like, oh, I thought that would be harder.Alessio [00:05:10]: I think there's, as you said in the blog posts, a lot of misunderstanding about what LLMs can actually be used for. What are they useful at? What are they not good at? And whether or not it's even worth arguing what they're not good at, because they're obviously not. So if you cannot count the R's in a word, they're like, it's just not what it does. So how painful was it to write such a long post, given that you just said that you don't like to write? Yeah. And then we can kind of run through the things, but maybe just talk about the motivation, why you thought it was important to do it.Nicholas [00:05:39]: Yeah. So I wanted to do this because I feel like most people who write about language models being good or bad, some underlying message of like, you know, they have their camp and their camp is like, AI is bad or AI is good or whatever. And they like, they spin whatever they're going to say according to their ideology. And they don't actually just look at what is true in the world. 
So I've read a lot of things where people say how amazing they are and how all programmers are going to be obsolete by 2024. And I've read a lot of things where people who say like, they can't do anything useful at all. And, you know, like, they're just like, it's only the people who've come off of, you know, blockchain crypto stuff and are here to like make another quick buck and move on. And I don't really agree with either of these. And I'm not someone who cares really one way or the other how these things go. And so I wanted to write something that just says like, look, like, let's sort of ground reality and what we can actually do with these things. Because my actual research is in like security and showing that these models have lots of problems. Like this is like my day to day job is saying like, we probably shouldn't be using these in lots of cases. I thought I could have a little bit of credibility of in saying, it is true. They have lots of problems. We maybe shouldn't be deploying them lots of situations. And still, they are also useful. And that is the like, the bit that I wanted to get across is to say, I'm not here to try and sell you on anything. I just think that they're useful for the kinds of work that I do. And hopefully, some people would listen. And it turned out that a lot more people liked it than I thought. But yeah, that was the motivation behind why I wanted to write this.Alessio [00:07:15]: So you had about a dozen sections of like how you actually use AI. Maybe we can just kind of run through them all. And then maybe the ones where you have extra commentary to add, we can... Sure.Nicholas [00:07:27]: Yeah, yeah. I didn't put as much thought into this as maybe was deserved. I probably spent, I don't know, definitely less than 10 hours putting this together.Swyx [00:07:38]: Wow.Alessio [00:07:39]: It took me close to that to do a podcast episode. So that's pretty impressive.Nicholas [00:07:43]: Yeah. I wrote it in one pass. 
I've gotten a number of emails of like, you got this editing thing wrong, you got this sort of other thing wrong. It's like, I haven't just haven't looked at it. I tend to try it. I feel like I still don't like writing. And so because of this, the way I tend to treat this is like, I will put it together into the best format that I can at a time, and then put it on the internet, and then never change it. And this is an aspect of like the research side of me is like, once a paper is published, like it is done as an artifact that exists in the world. I could forever edit the very first thing I ever put to make it the most perfect version of what it is, and I would do nothing else. And so I feel like I find it useful to be like, this is the artifact, I will spend some certain amount of hours on it, which is what I think it is worth. And then I will just...Swyx [00:08:22]: Yeah.Nicholas [00:08:23]: Timeboxing.Alessio [00:08:24]: Yeah. Stop. Yeah. Okay. We just recorded an episode with the founder of Cosine, which is like an AI software engineer colleague. You said it took you 30,000 words to get GPT-4 to build you the, can GPT-4 solve this kind of like app. Where are we in the spectrum where chat GPT is all you need to actually build something versus I need a full on agent that does everything for me?Nicholas [00:08:46]: Yeah. Okay. So this was an... So I built a web app last year sometime that was just like a fun demo where you can guess if you can predict whether or not GPT-4 at the time could solve a given task. This is, as far as web apps go, very straightforward. You need basic HTML, CSS, you have a little slider that moves, you have a button, sort of animate the text coming to the screen. The reason people are going here is not because they want to see my wonderful HTML, right? I used to know how to do modern HTML in 2007, 2008. I was very good at fighting with IE6 and these kinds of things. I knew how to do that. 
I have no longer had to build any web app stuff in the meantime, which means that I know how everything works, but I don't know any of the new... Flexbox is new to me. Flexbox is like 10 years old at this point, but it's just amazing being able to go to the model and just say, write me this thing and it will give me all of the boilerplate that I need to get going. Of course it's imperfect. It's not going to get you the right answer, and it doesn't do anything that's complicated right now, but it gets you to the point where the only remaining work that needs to be done is the interesting hard part for me, the actual novel part. Even the current models, I think, are entirely good enough at doing this kind of thing, that they're very useful. It may be the case that if you had something, like you were saying, a smarter agent that could debug problems by itself, that might be even more useful. Currently though, make a model into an agent by just copying and pasting error messages for the most part. That's what I do, is you run it and it gives you some code that doesn't work, and either I'll fix the code, or it will give me buggy code and I won't know how to fix it, and I'll just copy and paste the error message and say, it tells me this. What do I do? And it will just tell me how to fix it. You can't trust these things blindly, but I feel like most people on the internet already understand that things on the internet, you can't trust blindly. And so this is not like a big mental shift you have to go through to understand that it is possible to read something and find it useful, even if it is not completely perfect in its output.Swyx [00:10:54]: It's very human-like in that sense. It's the same ring of trust, I kind of think about it that way, if you had trust levels.Alessio [00:11:03]: And there's maybe a couple that tie together. 
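The copy-paste-the-error workflow Nicholas describes above can be sketched as a tiny loop. Everything here is hypothetical scaffolding: `ask_llm` is a stub standing in for a real model call, wired with canned responses (a deliberately buggy first answer, then a "fix" once the pasted traceback appears in the prompt) so the sketch runs on its own.

```python
import traceback

def ask_llm(prompt):
    # Hypothetical stub in place of a real model call: first answer is
    # buggy; once the prompt contains the pasted NameError traceback,
    # a corrected answer comes back.
    if "NameError" in prompt:
        return "result = sum([1, 2, 3])\nprint(result)"
    return "print(resul)"

def run_with_feedback(task, max_rounds=3):
    prompt = task
    for _ in range(max_rounds):
        code = ask_llm(prompt)
        try:
            exec(code, {})
            return code  # ran without raising; good enough for this sketch
        except Exception:
            # The workflow described above: paste the error back verbatim.
            prompt = task + "\nThat code failed with:\n" + traceback.format_exc()
    return None

print(run_with_feedback("Sum the list [1, 2, 3] and print it"))
```

The point is that the loop itself is trivial; all the intelligence is in the model reading its own error message, which is exactly what copy-pasting a traceback into a chat window does by hand.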
So there was like, to make applications, and then there's to get started, which is a similar you know, kickstart, maybe like a project that you know the LLM cannot solve. It's kind of how you think about it.Nicholas [00:11:15]: Yeah. So for getting started on things is one of the cases where I think it's really great for some of these things, where I sort of use it as a personalized, help me use this technology I've never used before. So for example, I had never used Docker before January. I know what Docker is. Lucky you. Yeah, like I'm a computer security person, like I sort of, I have read lots of papers on, you know, all the technology behind how these things work. You know, I know all the exploits on them, I've done some of these things, but I had never actually used Docker. But I wanted it to be able to, I could run the outputs of language model stuff in some controlled contained environment, which I know is the right application. So I just ask it like, I want to use Docker to do this thing, like, tell me how to run a Python program in a Docker container. And it like gives me a thing. I'm like, step back. You said Docker compose, I do not know what this word Docker compose is. Is this Docker? Help me. And like, you'll sort of tell me all of these things. And I'm sure there's this knowledge that's out there on the internet, like this is not some groundbreaking thing that I'm doing, but I just wanted it as a small piece of one thing I was working on. And I didn't want to learn Docker from first principles. Like I, at some point, if I need it, I can do that. Like I have the background that I can make that happen. But what I wanted to do was, was thing one. And it's very easy to get bogged down in the details of this other thing that helps you accomplish your end goal. And I just want to like, tell me enough about Docker so I can do this particular thing. And I can check that it's doing the safe thing. 
I sort of know enough about that from, you know, my other background. And so I can just have the model help teach me exactly the one thing I want to know and nothing more. I don't need to worry about other things that the writer of this thinks is important that actually isn't. Like I can just like stop the conversation and say, no, boring to me. Explain this detail. I don't understand. I think that's what that was very useful for me. It would have taken me, you know, several hours to figure out some things that take 10 minutes if you could just ask exactly the question you want the answer to.Alessio [00:13:05]: Have you had any issues with like newer tools? Have you felt any meaningful kind of like a cutoff day where like there's not enough data on the internet or? I'm sure that the answer to this is yes.Nicholas [00:13:16]: But I tend to just not use most of these things. Like I feel like this is like the significant way in which I use machine learning models is probably very different than most people is that I'm a researcher and I get to pick what tools that I use and most of the things that I work on are fairly small projects. And so I can, I can entirely see how someone who is in a big giant company where they have their own proprietary legacy code base of a hundred million lines of code or whatever and like you just might not be able to use things the same way that I do. I still think there are lots of use cases there that are entirely reasonable that are not the same ones that I've put down. But I wanted to talk about what I have personal experience in being able to say is useful. And I would like it very much if someone who is in one of these environments would be able to describe the ways in which they find current models useful to them. 
And not, you know, philosophize on what someone else might be able to find useful, but actually say like, here are real things that I have done that I found useful for me.Swyx [00:14:08]: Yeah, this is what I often do to encourage people to write more, to share their experiences, because they often fear being attacked on the internet. But you are the ultimate authority on how you use things, and these things are objectively true, so they cannot be debated. One thing that people are very excited about is the concept of ephemeral software, or like personal software. This use case in particular basically lowers the activation energy for creating software, which I like as a vision. I don't think I have taken as much advantage of it as I could. I feel guilty about that. But also, we're trending in that direction.Nicholas [00:14:47]: Yeah. No, I mean, I do think that this is a direction that is exciting to me. One of the things I wrote was that, like, a lot of the ways that I use these models are for one-off things that I just need to happen that I'm going to throw away in five minutes. And you can.Swyx [00:15:01]: Yeah, exactly.Nicholas [00:15:02]: Right. It's like the kind of thing where it would not have been worth it for me to have spent 45 minutes writing this, because I don't need the answer that badly. But if it will only take me five minutes, then I'll just figure it out, run the program and then get it right. And if it turns out that you ask the thing and it doesn't give you the right answer, well, I didn't actually need the answer that badly in the first place. Like either I can decide to dedicate the 45 minutes or I cannot, but like the cost of doing it is fairly low. You see what the model can do. 
And if it can't, then, okay. When you're using these models, if you're always getting the answer you want, it means you're not asking them hard enough questions.Swyx [00:15:35]: Say more.Nicholas [00:15:37]: Lots of people only use them for very small particular use cases and like it always does the thing that they want. Yeah.Swyx [00:15:43]: Like they use it like a search engine.Nicholas [00:15:44]: Yeah. Or like one particular case. And if you're finding that when you're using these, it's always giving you the answer that you want, then probably it has more capabilities than you're actually using. And so I oftentimes try, when I have something that I'm curious about, to just feed it into the model and be like, well, maybe it's just solved my problem for me. You know, most of the time it doesn't, but like on occasion, it's done things that would have taken me, you know, a couple hours, and it's been great and just like solved everything immediately. And if it doesn't, then it's usually easier to verify whether or not the answer is correct than it would have been to write it in the first place. And so you check, and you're like, well, that's just entirely misguided. Nothing here is right. It's just like, I'm not going to do this. I'm going to go write it myself or whatever.Alessio [00:16:21]: Even for non-tech, I had to fix my irrigation system. I had an old irrigation system. I didn't know how it worked or how to program it. I took a photo, I sent it to Claude and it's like, oh yeah, that's like the RT 900. This is exactly it. I was like, oh wow, you know a lot of stuff.Swyx [00:16:34]: Was it right?Alessio [00:16:35]: Yeah, it was right.Swyx [00:16:36]: It worked. Did you compare with OpenAI?Alessio [00:16:38]: No, I canceled my OpenAI subscription, so I'm a Claude boy. Do you have a way to think about this one-off software thing? 
One way I talk to people about it is like LLMs are kind of converging to like semantic serverless functions, you know, like you can say something and like it can run the function in a way and then that's it. It just kind of dies there. Do you have a mental model to just think about how long it should live for and like anything like that?Nicholas [00:17:02]: I don't think I have anything interesting to say here, no. I will take whatever tools are available in front of me and try and see if I can use them in meaningful ways. And if they're helpful, then great. If they're not, then fine. And like, you know, there are lots of people that I'm very excited about seeing all these people who are trying to make better applications that use these or all these kinds of things. And I think that's amazing. I would like to see more of it, but I do not spend my time thinking about how to make this any better.Alessio [00:17:27]: What's the most underrated thing in the list? I know there's like simplified code, solving boring tasks, or maybe is there something that you forgot to add that you want to throw in there?Nicholas [00:17:37]: I mean, so in the list, I only put things that people could look at and go, I understand how this solved my problem. I didn't want to put things where the model was very useful to me, but it would not be clear to someone else that it was actually useful. So for example, one of the things that I use it a lot for is debugging errors. But the errors that I have are very much not the errors that anyone else in the world will have. And in order to understand whether or not the solution was right, you just have to trust me on it. Because, you know, like I got my machine in a state that like CUDA was not talking to whatever some other thing, the versions were mismatched, something, something, something, and everything was broken. And like, I could figure it out with interaction with the model, and it gave it like told me the steps I needed to take. 
But at the end of the day, when you look at the conversation, you just have to trust me that it worked. And I didn't want to write things online that were, like, you have to trust me on what I'm saying. I want everything that I said to have evidence, like, here's the conversation, you can go and check whether or not this actually solved the task as I said the model did. Because a lot of people, I feel like, say, I used a model to solve this very complicated task. And what they mean is the model did 10% and I did the other 90% or something. I wanted everything to be verifiable. And so one of the biggest use cases for me, I didn't describe even at all, because it's not the kind of thing that other people could have verified by themselves. So that maybe is, like, one of the things that I wish I had said a little bit more about, and just stated the way that this is done, because I feel like this didn't come across quite as well. But yeah, of the things that I talked about, the thing that I think is most underrated is the ability of it to solve the uninteresting parts of problems for me right now. People always make this argument, and it's one of the biggest arguments that I don't understand why people make: the model can only do things that people have done before. Therefore, the model is not going to be helpful in doing new research or like discovering new things. And as someone whose day job is to do new things, like what is research? Research is doing something literally no one else in the world has ever done before. So this is what I do every single day: 90% of this is not doing something new, 90% of this is doing things a million people have done before, and then a little bit of something that was new. There's a reason why we say we stand on the shoulders of giants. It's true. Almost everything that I do is something that's been done many, many times before. And that is the piece that can be automated. 
Even if the thing that I'm doing as a whole is new, it is almost certainly the case that the small pieces that build up to it are not. And a number of people who use these models, I feel, expect that they can either solve the entire task or none of the task. But now I find myself very often, even when doing something very new and very hard, having models write the easy parts for me. And the reason I think this is so valuable, everyone who programs understands this: you're currently trying to solve some problem and then you get distracted. Whatever the case may be, someone comes and talks to you, you have to go look up something online, whatever it is. You lose a lot of time to that. And one kind of distraction we currently don't think about is: you're solving some hard problem and you realize you need a helper function that does X, where X is like, it's a known algorithm. Any person in the world could write it. You say, like, give me the algorithm: I have a sparse graph, I need to make it dense. You can do this by doing some matrix multiplies. It's like, this is a solved problem. I knew how to do this 15 years ago, but it distracts me from the problem I'm thinking about in my mind. I needed this done. And so instead of using my mental capacity on solving that problem and then coming back to the problem I was originally trying to solve, you can just ask the model, please solve this problem for me. It gives you the answer. You run it. You can check that it works very, very quickly. And now you go back to solving the problem without having lost all the mental state. And I feel like this is one of the things that's been very useful for me.Swyx [00:21:34]: And in terms of this concept of expert users versus non-expert users, floors versus ceilings, you had some strong opinion here that like, basically it actually is more beneficial for non-experts.Nicholas [00:21:46]: Yeah, I don't know. I think it could go either way. 
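The sparse-to-dense helper mentioned above is exactly the kind of known algorithm a model can hand you while you stay focused on the hard problem. A minimal sketch of one reading of that helper, converting a sparse edge-list graph to a dense adjacency matrix (the function name and representation choices are mine, not from the episode):

```python
def to_dense(n, edges):
    """Convert a sparse edge-list graph on n nodes into a dense
    n x n 0/1 adjacency matrix."""
    mat = [[0] * n for _ in range(n)]  # start with no edges
    for u, v in edges:
        mat[u][v] = 1                  # mark each directed edge
    return mat

# to_dense(3, [(0, 1), (1, 2)]) → [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
```

The point is not that this code is hard; it is that checking it takes seconds, so delegating it costs you none of the mental state you are holding for the real problem.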
Let me give you the argument for both of these. Yes. So I can only speak on the expert user's behalf, because I've been doing computers for a long time. And so yeah, the cases where it's useful for me are exactly these cases where I can check the output. I know that anything the model could do, I could have done, and I could have done better. I can check every single thing that the model is doing and make sure it's correct in every way. And so I can only speak for myself and say, definitely it's been useful for me. But I also see a world in which this could be very useful for the kinds of people who do not have this knowledge, with caveats, because I'm not one of these people. I don't have this direct experience. But one of the big ways that I can see this is for things that you can check fairly easily: someone who could never have asked for or written a program themselves to do a certain task could just ask for the program that does the thing. And you know, some of the times it won't get it right. But some of the times it will, and they'll be able to have the thing in front of them that they just couldn't have had before. And we see a lot of people trying to do applications for this, like integrating language models into spreadsheets. Spreadsheets run the world. And there are some people who know how to do all the complicated spreadsheet equations and various things, and other people who don't, who just use the spreadsheet program but just manually do all of the things one by one by one by one. And this is a case where you could have a model that could try and give you a solution. 
And as long as the person is rigorous in testing that the solution does actually the correct thing, and this is the part that I'm worried about most, you know, I think, depending on these systems in ways that we shouldn't. Like, this is what my research says; my research is entirely on this. Like, you probably shouldn't trust these models to do things in adversarial situations. Like, I understand this very deeply. And so I think that it's possible for people who don't have this knowledge to make use of these tools in some ways, but I'm worried that it might end up in a world where people just blindly trust them, deploy them in situations that they probably shouldn't, and then someone like me gets to come along and just break everything because everything is terrible. And so I am very, very worried about that being the case, but I think if done carefully it is possible that these could be very useful.Swyx [00:23:54]: Yeah, there is some research out there that shows that when people use LLMs to generate code, they do generate less secure code.Nicholas [00:24:02]: Yeah, Dan Boneh has a nice paper on this. There are a bunch of papers that touch on exactly this.Swyx [00:24:07]: My slight issue is, you know, is there an agenda here?Nicholas [00:24:10]: I mean, okay, yeah, Dan Boneh, at least the one they have, like, I fully trust everything that sort of.Swyx [00:24:15]: Sorry, I don't know who Dan is.Nicholas [00:24:17]: He's a professor at Stanford. Yeah, he and some students have some things on this. Yeah, there's a number. I agree that a lot of the stuff feels like people have an agenda behind it. There are some that don't, and I trust them to have done the right thing. 
I also think, even on this though, we have to be careful, because whenever someone says X is true about language models, you should always append the suffix "for current models," because I'll be the first to admit I was one of the people who was very much of the opinion that these language models are fun toys and are going to have absolutely no practical utility. If you had asked me this, let's say, in 2020, I still would have said the same thing. After I had seen GPT-2, I had written a couple of papers studying GPT-2 very carefully. I still would have told you these things are toys. And when I first read the RLHF paper and the instruction tuning paper, I was like, nope, this is this thing that these weird AI people are doing. They're trying to make some analogies to people that make no sense. It's just like, I don't even care to read it. I saw what it was about and just didn't even look at it. I was obviously wrong. These things can be useful. And I feel like a lot of people had the same mentality that I did and decided not to change their mind. And I feel like this is the thing that I want people to be careful about. I want them to at least know what is true about the world so that they can then see that maybe they should reconsider some of the opinions that they had from four or five years ago that may just not be true about today's models.Swyx [00:25:47]: Specifically because you brought up spreadsheets, I want to share my personal experience, because I think Google has done a really good job that people don't know about, which is if you use Google Sheets, Gemini is integrated inside of Google Sheets and it helps you write formulas. Great.Nicholas [00:26:00]: That's news to me.Swyx [00:26:01]: Right? Maybe they don't do a good job. Unless you watch Google I/O, there was no other opportunity to learn that Gemini is now in your Google Sheets. And so I just don't write formulas manually anymore. I just prompt Gemini to do it for me. 
And it does it.Nicholas [00:26:15]: One of the problems that these machine learning models have is a discoverability problem. I think this will be figured out. I mean, it's the same problem that you have with any assistant. You're given a blank box and you're like, what do I do with it? I think this is great. More of these things, it would be good for them to exist. I want them to exist in ways that we can actually make sure that they're done correctly. I don't want to just have them be pushed into more and more things just blindly. I feel like lots of people, there are far too many X plus AI, where X is like arbitrary thing in the world that has nothing to do with it and could not be benefited at all. And they're just doing it because they want to use the word. And I don't want that to happen.Swyx [00:26:58]: You don't want an AI fridge?Nicholas [00:27:00]: No. Yes. I do not want my fridge on the internet.Swyx [00:27:03]: I do not want... Okay.Nicholas [00:27:05]: Anyway, let's not go down that rabbit hole. I understand why some of that happens, because people want to sell things or whatever. But I feel like a lot of people see that and then they write off everything as a result of it. And I just want to say, there are allowed to be people who are trying to do things that don't make any sense. Just ignore them. Do the things that make sense.Alessio [00:27:22]: Another chunk of use cases was learning. So both explaining code, being an API reference, all of these different things. Any suggestions on how to go at it? I feel like one thing is generate code and then explain to me. One way is just tell me about this technology. Another thing is like, hey, I read this online, kind of help me understand it. Any best practices on getting the most out of it?Swyx [00:27:47]: Yeah.Nicholas [00:27:47]: I don't know if I have best practices. 
I have how I use them.Swyx [00:27:51]: Yeah.Nicholas [00:27:51]: I find it very useful for cases where I understand the underlying ideas, but I have never used them in this way before. I know what I'm looking for, but I just don't know how to get there. And so yeah, as an API reference is a great example. The tool everyone always picks on is like FFmpeg. No one in the world knows the command line arguments to do what they want. They're like, make the thing faster. I want lower bitrate, like dash V. Once you tell me what the answer is, I can check. This is one of these things where it's great for these kinds of things. Or in other cases, things where I don't really care that the answer is 100% correct. So for example, I do a lot of security work. Most of security work is reading some code you've never seen before and finding out which pieces of the code are actually important. Because, you know, most of the program doesn't actually have anything to do with security. It has, you know, the display piece or the other piece or whatever. And like, you just want to ignore all of that. So one very fun use of models is to just have it describe all the functions, and just skim it and be like, wait, which ones look like approximately the right things to look at? Because otherwise, what are you going to do? You're going to have to read them all manually. And when you're reading them manually, you're going to skim the function anyway, and not just figure out what's going on perfectly. Like you already know that when you're going to read these things, what you're going to try and do is figure out roughly what's going on. Then you'll delve into the details. This is a great way of just doing that, but faster, because it will abstract most of what's going on.Swyx [00:29:21]: Right.Nicholas [00:29:21]: It's going to be wrong some of the time. 
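The FFmpeg incantation described above, the one you ask a model for and then check, looks roughly like this. A hedged sketch, built as an argument list so it can be inspected before running; `-b:v` is FFmpeg's video-bitrate flag, but the exact flags you want depend on your codec and are not spelled out in the episode:

```python
def lower_bitrate_cmd(src, dst, bitrate="1M"):
    """Build an ffmpeg command that re-encodes `src` at a lower
    video bitrate (`-b:v`) and writes the result to `dst`."""
    return ["ffmpeg", "-i", src, "-b:v", bitrate, dst]

# To run it (requires ffmpeg installed):
# import subprocess; subprocess.run(lower_bitrate_cmd("in.mp4", "out.mp4"))
```

This fits the pattern being described: you could not have recalled the flag, but once the model names it, verifying it against the man page or just running it takes a moment.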
I don't care.Swyx [00:29:23]: I would have been wrong too.Nicholas [00:29:24]: And as long as you treat it this way, I think it's great. And so like one of the particular use cases I have in the thing is decompiling binaries, where oftentimes people will release a binary. They won't give you the source code. And you want to figure out how to attack it. And so one thing you could do is you could try and run some kind of decompiler. It turns out for the thing that I wanted, none existed. And so I spent too many hours doing it by hand before I finally thought, why am I doing this? I should just check if the model could do it for me. And it turns out that it can. And it can turn the compiled code, which is impossible for any human to understand, into Python code that is entirely reasonable to understand. And it doesn't run. It has a bunch of problems. But it's so much nicer that it's immediately a win for me. I can just figure out approximately where I should be looking, and then spend all of my time doing that by hand. And again, you get a big win there.Swyx [00:30:12]: So I fully agree with all those use cases, especially for you as a security researcher and having to dive into multiple things. I imagine that's super helpful. I do think we want to move to your other blog post. But you ended your post with a little bit of a teaser about your next post and your speculations. What are you thinking about?Nicholas [00:30:34]: So I want to write something. And I will do that at some point when I have time, maybe after I'm done writing my current papers for ICLR or something, where I want to talk about some thoughts I have for where language models are going in the near-term future. 
The reason why I want to talk about this is because, again, I feel like the discussion tends to be people who are either very much AGI by 2027, orSwyx [00:30:55]: always five years away, or are going to make statements of the form,Nicholas [00:31:00]: you know, LLMs are the wrong path, and we should be abandoning this, and we should be doing something else instead. And again, I feel like people tend to look at this and see these two polarizing options and go, well, those obviously are both very far extremes. Like, how do I actually, like, what's a more nuanced take here? And so I have some opinions about this that I want to put down, just saying, you know, I have wide margins of error. I think you should too. If you would say there's a 0% chance that, you know, the models will get very, very good in the next five years, you're probably wrong. If you're going to say there's a 100% chance that they will in the next five years, then you're also probably wrong. And like, to be fair, most of the people, if you read behind the headlines, actually say something like this. But it's very hard to get clicks on the internet for "some things may be good in the future." Everyone wants, like, you know, one of the extremes: nothing is going to be good, this is entirely wrong, or it's going to be amazing. You know, like, they want to see this. I want people who have negative reactions to these kinds of extreme views to at least be able to tell them, there is something real here. It may not solve all of our problems, but it's probably going to get better. I don't know by how much. And that's basically what I want to say. And then at some point, I'll talk about the safety and security things as a result of this. Because the way in which security intersects with these things depends a lot on exactly how people use these tools. 
You know, if it turns out to be the case that these models get to be truly amazing and can solve, you know, tasks completely autonomously, that's a very different security world to be living in than if there's always a human in the loop. And the types of security questions I would want to ask would be very different. And so I think, you know, in some very large part, understanding what the future will look like a couple of years ahead of time is helpful for figuring out which problems, as a security person, I want to solve now. You mentioned getting clicks on the internet,Alessio [00:32:50]: but you don't even have, like, an X account or anything. How do you get people to read your stuff? What's your distribution strategy? Because this post was popping up everywhere. And then people on Twitter were like, Nicholas Carlini wrote this. Like, what's his handle? It's like, he doesn't have one. It's like, how did you find it? What's the story?Nicholas [00:33:07]: So I have an RSS feed and an email list. And that's it. I don't like most social media things. On principle, I feel like they have some harms. As a person, I have a problem when people say things that are wrong on the internet. And I would get nothing done if I had a Twitter. I would spend all of my time correcting people and getting into fights. And so I feel like it is just useful for me for this not to be an option. I tend to just post things online. Yeah, it's a very good question. I don't know how people find it. I feel like for some things that I write, other people think it resonates with them. And then they put it on Twitter. And...Swyx [00:33:43]: Hacker News as well.Nicholas [00:33:44]: Sure, yeah. I am... Because my day job is doing research, I get no value from having this be picked up. There's no whatever. I don't need to be someone who has to have this other thing to give talks. And so I feel like I can just say what I want to say. And if people find it useful, then they'll share it widely. 
You know, this one went pretty wide. I wrote a thing, whatever, sometime late last year, about how to recover data off of an Apple ProFile drive from the 1980s. This probably got, I think, like 1000x fewer views than this one. But I don't care. Like, that's not why I'm doing this. Like, this is the benefit of having a thing that I actually care about, which is my research. I would care much more if that didn't get seen. This is like a thing that I write because I have some thoughts that I just want to put down.Swyx [00:34:32]: Yeah. I think it's the long-form thoughtfulness and authenticity that is sadly lacking sometimes in modern discourse that makes it attractive. And I think now you have a little bit of a brand of, you are an independent thinker, writer, person, that people are tuned in to pay attention to whatever is coming next.Nicholas [00:34:52]: Yeah, I mean, this kind of worries me a little bit. I don't like whenever I have a popular thing and then I write another thing which is entirely unrelated. Like, I don't, I don't...Swyx [00:35:01]: You should actually just throw people off right now.Nicholas [00:35:02]: Exactly. I'm trying to figure out, like, I need to put something else online. So, like, the last two or three things I've done in a row have been, like, actually, like, things that people should care about.Swyx [00:35:10]: Yes.Nicholas [00:35:11]: So, I have a couple of things. I'm trying to figure out which one do I put online to just, like, cull the list of people who have subscribed to my email, and so, like, tell them, like, no, what you're here for is not informed, well-thought-through takes. Like, what you're here for is whatever I want to talk about. And if you're not up for that, then, like, you know, go away. 
Like, this is not what I want out of my personal website.Swyx [00:35:27]: So, like, here's, like, top 10 enemies or something.Alessio [00:35:30]: What's the next project you're going to work on that is completely unrelated to LLM research? Or what games do you want to port into the browser next?Swyx [00:35:39]: Okay. Yeah.Nicholas [00:35:39]: So, maybe.Swyx [00:35:41]: Okay.Nicholas [00:35:41]: Here's a fun question. How much data do you think you can put on a single piece of paper?Swyx [00:35:47]: I mean, you can think about bits and atoms. Yeah.Nicholas [00:35:49]: No, like, normal printer. Like, I gave you an office printer. How much data can you put on a piece of paper?Alessio [00:35:54]: Can you encode it? So, like, you know, base64 or whatever. Yeah, whatever you want.Nicholas [00:35:59]: Like, you get a normal off-the-shelf printer, off-the-shelf scanner. How much data?Swyx [00:36:03]: I'll just throw out there. Like, 10 megabytes. That's enormous. I know.Nicholas [00:36:07]: Yeah, that's a lot.Swyx [00:36:10]: Really small fonts. That's my question.Nicholas [00:36:12]: So, I have a thing. It does about a megabyte.Swyx [00:36:14]: Yeah, okay.Nicholas [00:36:14]: There you go. I was off by an order of magnitude.Swyx [00:36:16]: Yeah, okay.Nicholas [00:36:16]: So, in particular, it's about 1.44 megabytes. A floppy disk.Swyx [00:36:21]: Yeah, exactly.Nicholas [00:36:21]: So, this is supposed to be the title at some point. It's a floppy disk.Swyx [00:36:24]: A paper is a floppy disk. Yeah.Nicholas [00:36:25]: So, this is a little hard because, you know. So, you can do the math and you get 8.5 by 11. You can print at 300 by 300 DPI. And this gives you 2 megabytes. And so, every single pixel, you need to be able to recover at, like, 90-plus percent. Like, 95 percent. Like, 99-point-something percent accuracy. In order to be able to actually decode this off the paper. This is one of the things that I'm considering. 
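The back-of-the-envelope math here is easy to check. A hedged sketch of the raw pixel budget, assuming one bit per printed dot, which a real scanner won't reliably give you back; that recovery-accuracy problem and the error-correction overhead it forces are what pull the practical figure down to floppy-disk territory:

```python
# Raw dot budget for one side of US Letter paper at 300 DPI.
width_in, height_in, dpi = 8.5, 11, 300
dots = int(width_in * height_in * dpi * dpi)  # total printable dots
raw_bytes = dots // 8                          # at one bit per dot

# dots == 8_415_000 and raw_bytes == 1_051_875: roughly 1 MB per side,
# in the ~2 MB ballpark double-sided, before error-correcting codes
# eat into it. (The double-sided reading is my interpolation.)
```

Against that raw ceiling, landing at about 1.44 MB after error correction is close to the theoretical budget.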
I need to get a couple more things working for this. Where, you know, again, I'm running into some random problems. But this is probably, this will be one thing that I'm going to talk about. There's this contest called the International Obfuscated C Code Contest, which is amazing. People try and write the most obfuscated C code that they can. Which is great. And I have a submission for that whenever they open up the next one. And I'll write about that submission. I have a very fun gate-level emulation of an old CPU that runs, like, fully precisely. And it's a fun kind of thing. Yeah.Swyx [00:37:20]: Interesting. Your comment about the piece of paper reminds me of when I was in college. And you would have like one cheat sheet that you could write. So, you have a formula, a theoretical limit for bits per inch. And, you know, that's how much I would squeeze in, really, really small. Yeah, definitely.Nicholas [00:37:36]: Okay.Swyx [00:37:37]: We are also going to talk about your benchmarking. Because you released your own benchmark that got some attention, thanks to some friends on the internet. What's the story behind your own benchmark? Do you not trust the open source benchmarks? What's going on there?Nicholas [00:37:51]: Okay. Benchmarks tell you how well the model solves the task the benchmark is designed to solve. For a long time, models were not useful. And so, the benchmark that you tracked was just something someone came up with, because you need to track something. All of deep learning exists because people tried to make models classify digits and classify images into a thousand classes. There is no one in the world who cares specifically about the problem of distinguishing between 300 breeds of dog in an image that's 224 by 224 pixels. And yet, like, this is what drove a lot of progress. And people did this not because they cared about this problem, but because they wanted to measure progress in some way. And a lot of benchmarks are of this flavor. 
You want to construct a task that is hard, and we will measure progress on this benchmark, not because we care about the problem per se, but because we know that progress on this is in some way correlated with making better models. And this is fine when you don't want to actually use the models that you have. But when you want to actually make use of them, it's important to find benchmarks that track with whether or not they're useful to you. And the thing that I was finding is that there would be model after model after model that was being released that would find some benchmark that they could claim state-of-the-art on and then say, therefore, ours is the best. And that wouldn't be helpful to me to know whether or not I should then switch to it. So the argument that I tried to lay out in this post is that more people should make benchmarks that are tailored to them. And so what I did is I wrote a domain-specific language that anyone can write for and say, you can take tasks that you have wanted models to solve for you, and you can put them into your benchmark that's the thing that you care about. And then when a new model comes out, you benchmark the model on the things that you care about. And you know that you care about them because you've actually asked for those answers before. And if the model scores well, then you know that for the kinds of things that you have asked models for in the past, it can solve these things well for you. This has been useful for me because when another model comes out, I can run it. I can see, does this solve the kinds of things that I care about? And sometimes the answer is yes, and sometimes the answer is no. And then I can decide whether or not I want to use that model or not. I don't want to say that existing benchmarks are not useful. They're very good at measuring the thing that they're designed to measure. But in many cases, what that's designed to measure is not actually the thing that I want to use it for. 
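The shape of such a personal benchmark can be sketched in a few lines. To be clear, this is a hypothetical toy, not the actual domain-specific language being described here: each task pairs a prompt with a programmatic check, and a model, stubbed out below in place of a real API call, is scored on the fraction of checks that pass.

```python
def run_benchmark(model, tasks):
    """Score `model` (a prompt -> answer callable) on (prompt, check) pairs,
    returning the fraction of tasks whose check passes."""
    passed = sum(1 for prompt, check in tasks if check(model(prompt)))
    return passed / len(tasks)

# A stub standing in for a real model API call (illustrative only):
def stub_model(prompt):
    return "4" if "2+2" in prompt else "no idea"

tasks = [
    ("What is 2+2? Reply with just the number.", lambda out: out.strip() == "4"),
    ("Name a prime under 10.", lambda out: out.strip() in {"2", "3", "5", "7"}),
]
# run_benchmark(stub_model, tasks) → 0.5 (the stub only answers the first task)
```

Because the checks come from questions you have actually asked models before, a new model's score tells you directly whether it handles your kind of work, which is the whole argument of this section.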
And I expect that the way that I want to use it is different from the way that you want to use it. And I would just like more people to have these things out there in the world. And the final reason for this is: it is very easy, if you want to make a model good at some benchmark, to make it good at that benchmark. You can find the distribution of data that you need and train the model to be good on that distribution of data, and then you have your model that can solve this benchmark well. And by having a benchmark that is not very popular, you can be relatively certain that no one has tried to optimize their model for your benchmark.Swyx [00:40:40]: And I would like this to be-Nicholas [00:40:40]: So publishing your benchmark is a little bit-Swyx [00:40:43]: Okay, sure.Nicholas [00:40:43]: Contextualized. So my hope in doing this was not that people would use mine as theirs. My hope in doing this was that- You should make yours. Yes, you should make your benchmark. And if, for example, there were even a very small fraction of people, 0.1% of people, who made a benchmark that was useful for them, this would still be hundreds of new benchmarks. I might not want to make one myself, but I might know that the kinds of work I do are a little bit like this person's, a little bit like that person's. I'll go check how it is on their benchmarks, and I'll get a good sense of what's going on. Because the alternative is people just do this vibes-based evaluation thing, where you interact with the model five times and see if it worked on your toy questions. But five questions is a very low-bit signal about whether or not it works for this thing. And if you can just automate running 100 questions for you, it's a much better evaluation. So that's why I did this.Swyx [00:41:37]: Yeah, I like the idea of going through your chat history and actually pulling out real-life examples. 
I regret to say that I don't think my chat history is used as much these days, because I'm using Cursor, the AI-native IDE. So your examples are all coding related. And the immediate question is, now that you've written the How I Use AI post, which is a little bit broader, are you able to translate all these things to evals? Are some things unevaluable?Nicholas [00:42:03]: Right. A number of things that I do are harder to evaluate. So this is the problem with a benchmark: you need some way to check whether or not the output was correct. And so all of the kinds of things that I can put into the benchmark are the kinds of things that you can check. You can check more things than you might have thought would be possible if you do a little bit of work on the back end. So for example, for all of the code that I have the model write, it runs the code and sees whether the answer is the correct answer. Or in some cases, it runs the code, feeds the output to another language model, and the language model judges whether the output was correct. And again, is using a language model to judge here perfect? No. But like, what's the alternative? The alternative is to not do it. And what I care about is just: is this thing broadly useful for the kinds of questions that I have? And so as long as the accuracy is better than roughly random, like, I'm okay with this. I've inspected the outputs of these, and they're almost always correct. If you ask the model to judge these things in the right way, they're very good at being able to tell this. And so, yeah, I think this is probably a useful thing for people to do.Alessio [00:43:04]: You complain about prompting and being lazy, and how you do not want to tip your model and you do not want to murder a kitten just to get the right answer. How do you see the evolution of prompt engineering? Even like 18 months ago, maybe, you know, it was kind of like really hot and people wanted to like build companies around it. 
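The two grading strategies just described, running the model's code and checking the output, or handing the output to a second model to judge, can be sketched roughly like this. The function names are assumptions, and the `judge` argument is a placeholder for a real language-model call:

```python
import subprocess
import sys
import tempfile
import textwrap

def check_by_running(code: str, expected_stdout: str) -> bool:
    """Grade a code answer by executing it and comparing its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(textwrap.dedent(code))
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.stdout.strip() == expected_stdout.strip()

def check_by_judge(question: str, answer: str, judge) -> bool:
    """Grade a free-form answer by asking a second model for a yes/no verdict.
    `judge` stands in for a real LLM call (an assumption in this sketch)."""
    verdict = judge(f"Question: {question}\nAnswer: {answer}\n"
                    "Is this answer correct? Reply yes or no.")
    return verdict.strip().lower().startswith("yes")

print(check_by_running("print(sum(range(10)))", "45"))
```

As the conversation notes, a model judge is imperfect; the bet is only that it is well above chance on simple verdicts.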
Today, it's like the models are getting good. Do you think it's going to be less and less relevant going forward? Or what's the minimum valuable prompt? Yeah, I don't know.Nicholas [00:43:29]: I feel like a big part of making an agent is just like a fancy prompt that, you know, calls back to the model again. I have no opinion. It seems like maybe it turns out that this is really important. Maybe it turns out that this isn't. I guess the only comment I was making here is just to say, oftentimes when I use a model and I find it's not useful, I talk to people who help make it. The answer they usually give me is, you're using it wrong. Which reminds me very much of that "you're holding it wrong" from the iPhone kind of thing, right? Like, you know, I don't care that I'm holding it wrong. I'm holding it that way. If the thing is not working with me, then it's not useful for me. It may be the case that there exists a way to ask the model such that it gives me the answer that's correct, but that's not the way I'm doing it. If I have to spend so much time thinking about how I want to frame the question that it would have been faster for me to just get the answer myself, it didn't save me any time. And so oftentimes, you know, what I do is I just dump in whatever current thought I have, in whatever ill-formed way it is, and I expect the answer to be correct. And if the answer is not correct, in some sense, maybe the model was right to give me the wrong answer. I may have asked the wrong question, but I want the right answer still. And so I just want to sort of get this as a thing. And maybe the way to fix this is you have some default prompt that always goes into all the models, or you do something clever like this. It would be great if someone had a way to package this up and make a thing. I think that's entirely reasonable. 
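The "default prompt that always goes into all the models" idea can be as small as a fixed preamble prepended to whatever ill-formed question you dump in. The preamble wording and function name below are assumptions for illustration, not a recommended or tested prompt:

```python
# Sketch of a fixed "default prompt" wrapper; the preamble text is an
# illustrative assumption.
DEFAULT_PREAMBLE = (
    "Answer the user's question even if it is tersely or sloppily phrased. "
    "If it is ambiguous, answer the most likely intended question.\n\n"
)

def lazy_prompt(raw_question: str) -> str:
    """Prepend the fixed preamble so the user never hand-crafts prompts."""
    return DEFAULT_PREAMBLE + raw_question.strip()

print(lazy_prompt("integral of x^2 from 0 to 1??"))
```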
Maybe it turns out that as models get better, you don't need to prompt them as much in this way. I just want to use the things that are in front of me.Alessio [00:44:55]: Do you think that's like a limitation of just how models work? Like, you know, at the end of the day, you're using the prompt to kind of steer it in the latent space. Do you think there's a way to actually not make the prompt really relevant and have the model figure it out? Or like, what's the... I mean, you could fine-tune itNicholas [00:45:10]: into the model, for example, that like it's supposed to... I mean, it seems like some models have done this; for example, many recent models. If you ask them a question, like computing an integral of this thing, they'll say, let's think through this step by step. And then they'll go through the step-by-step answer. I didn't tell it to. Two years ago, I would have had to prompt it: think step by step on solving the following thing. Now you ask the question and the model says, here's how I'm going to do it, I'm going to take the following approach, and then it sort of self-prompts itself.Swyx [00:45:34]: Is this the right way?Nicholas [00:45:35]: Seems reasonable. Maybe you don't have to do it. I don't know. This is for the people whose job is to make these things better. And yeah, I just want to use these things. Yeah.Swyx [00:45:43]: For listeners, that would be Orca and Agent Instruct. It's the SOTA on this stuff. Great. Yeah.Alessio [00:45:49]: What about few-shot? Is that included in the lazy prompting? Like, do you do few-shot prompting? Do you collect some examples when you want to put them in? Or...Nicholas [00:45:57]: I don't, because usually when I want the answer, I just want to get the answer. Brutal.Swyx [00:46:03]: This is hard mode. 
Yeah, exactly.Nicholas [00:46:04]: But this is fine.Swyx [00:46:06]: I want to be clear.Nicholas [00:46:06]: There's a difference between testing the ultimate capability level of the model and testing the thing that I'm doing with it. What I'm doing is not exercising its full capability level, because there are almost certainly better ways to ask the questions and really see how good the model is. And if you're evaluating a model for being state of the art, this is ultimately what matters. And so I'm entirely fine with people doing fancy prompting to show me what the true capability level could be, because it's really useful to know what the ultimate level of the model could be. But I think it's also important just to have available to you how good the model is if you don't do fancy things.Swyx [00:46:39]: Yeah, I would say that here's a divergence between how models are marketed these days versus how people use them, which is when they test MMLU, they'll do like five shots, 25 shots, 50 shots. And no one's providing 50 examples. I completely agree.Nicholas [00:46:54]: You know, for these numbers, the problem is everyone wants to get state of the art on the benchmark. And so you find the way that you can ask the model the questions so that you get state of the art on the benchmark. And it's good. It's legitimately good to know the model can do this thing if only you try hard enough. Because it means that if I have some task that I want solved, I know what the capability level is, and I could get there if I was willing to work hard enough. And the question then is, should I work harder and figure out how to ask the model the question? Or do I just do the thing myself? And for me, I have programmed for many, many, many years. It's often just faster for me to do the thing than to figure out the incantation to ask the model. 
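For contrast, the n-shot prompting mentioned for MMLU is typically assembled by prepending k worked examples before the real question. The exact MMLU template differs from this; the layout and names below are an illustrative assumption:

```python
# Sketch of generic n-shot prompt assembly; not the actual MMLU template.
def build_few_shot_prompt(examples: list[tuple[str, str]],
                          question: str, k: int = 5) -> str:
    """Concatenate k question/answer exemplars, then the unanswered question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples[:k]]
    parts.append(f"Q: {question}\nA:")  # model completes after the final "A:"
    return "\n\n".join(parts)

examples = [("2+2?", "4"), ("Capital of France?", "Paris"),
            ("3*3?", "9"), ("H2O is commonly called?", "water"),
            ("5-1?", "4")]
print(build_few_shot_prompt(examples, "Capital of Japan?", k=2))
```

The point made above is that nobody supplies 25 or 50 exemplars in everyday lazy use, so a score obtained this way can overstate the model's usefulness for you.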
But I can imagine someone who has never programmed before might be fine writing five paragraphs in English describing exactly the thing that they want and having the model build it for them, if the alternative is to not have it at all. But again, this goes to all these questions of how are they going to validate? Should they be trusting the output? These kinds of things.Swyx [00:47:49]: One problem with your eval paradigm, and most eval paradigms, I'm not picking on you, is that we're actually training these things for chat, for interactive back and forth. And you actually reveal much more information, in the same way that asking 20 questions reveals more information, in sort of a tree-search branching sort of way. This is also, by the way, the problem with LMSYS Arena, where the vast majority of prompts are single question, single answer, eval, done. But actually the way that we use chat things, even in the stuff that you posted in your How I Use AI post, you have maybe 20 turns of back and forth. How do you eval that?Nicholas [00:48:25]: Yeah. Okay. Very good question. This is the thing that I think many people should be doing more of. I would like more multi-turn evals. I might be writing a paper on this at some point if I get around to it. A couple of the evals in the benchmark thing I have are already multi-turn. I mentioned 20 questions; I have a 20-questions eval there just for fun. But I have a couple others that are like, I just tell the model, here's my git thing, figure out how to cherry-pick off this other branch and move it over there. And so what I do is I basically build a tiny little agent-y thing. I just ask the model how to do it. I run the thing on Linux. This is what I want Docker for. I spin up a Docker container. I run whatever command the model told me to run. I feed the output back into the model. I repeat this many rounds. 
And then I check at the very end: does the git commit history show that it was correctly cherry-picked in?
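The multi-turn loop described here, asking the model for a command, executing it, feeding the output back, then grading the final repository state, can be sketched as below. The command runner is injectable so the same loop can target a Docker container via `docker exec`; the docker wiring, the prompt wording, and the success marker are all illustrative assumptions, not the actual harness:

```python
import subprocess

def docker_runner(container: str):
    """Return a function that runs one shell command inside `container`."""
    def run(cmd: str) -> str:
        r = subprocess.run(["docker", "exec", container, "sh", "-c", cmd],
                           capture_output=True, text=True, timeout=60)
        return r.stdout + r.stderr
    return run

def multi_turn_eval(model, run_command, goal: str,
                    success_marker: str, rounds: int = 5) -> bool:
    """Drive a model/environment loop, then grade the end state."""
    transcript = f"Goal: {goal}\nReply with exactly one shell command.\n"
    for _ in range(rounds):
        command = model(transcript)
        output = run_command(command)
        transcript += f"\n$ {command}\n{output}"  # feed the output back in
    # Final check: did the desired commit land in the history?
    return success_marker in run_command("git log --oneline")

# Toy run with fakes standing in for the model and the container shell:
fake_model = lambda transcript: "git cherry-pick abc123"
fake_shell = lambda cmd: ("abc123 the wanted commit" if "log" in cmd else "ok")
print(multi_turn_eval(fake_model, fake_shell, "cherry-pick abc123", "abc123"))
```

In a real run, `run_command` would be `docker_runner("my-container")` so each of the model's commands executes in an isolated environment, matching the Docker setup described above.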
It's the last episode of 2023, and our 100th episode! But despite that, we keep on moving through the period, hitting a bunch of smaller stories from the Nihon Shoki about this period. We talk about Zentoku no Omi, the temple commissioner of Hokoji, as well as the trouble they went through to get the Asukadera Daibutsu in place to begin with. We have the first instance of the Dazai--as in the Dazaifu of Kyushu--as well as the first instance of the holiday that would eventually become Children's Day, Kodomo no Hi. There are various immigrants, bringing painting, handmills, and even a new kind of musical dance theater known as gigaku. And that's just some of what we'll cover. For more, check out our website at https://sengokudaimyo.com/podcast/episode-100 Rough Transcript Welcome to Sengoku Daimyo's Chronicles of Japan. My name is Joshua, and this is episode 100: Sacred Tetris and Other Tidbits First off: woohoo! One hundred episodes! Thank you to everyone who has been listening and following along on this journey so far. When I started this I had no idea how long I would be able to keep up with it, but I appreciate everyone who has encouraged me along the way. This all started in September of 2019, and we are now four years in and we have a ways to go. While I'm thanking people, I'd also like to give a big thank you to my wife, Ellen, who has been helping me behind the scenes. She's the one who typically helps read through what I'm going to say and helps edit out a lot of things, and provides reminders of things that I sometimes forget. She really helps to keep me on track, and I always appreciate the time she puts into helping to edit the scripts and the questions she asks. Now, we are still talking about the 6th and early 7th centuries during the reign of Kashikiya Hime, aka Suiko Tenno. 
We've talked about a lot of different aspects of this period—about the conflicts over Nimna on the peninsula, about the rise of the Sui dynasty on the continent, and the importation of various continental goods, including animals, immigrants, and knowledge. That knowledge included new ideas about governance as well as religious practices such as Buddhism—and possibly other religious practices as well, as many of the stories that we saw in the Age of the Gods may have analogs on the continent and may just as easily have been coming over with the current crop of immigrants, though it is hard to say for certain. At the heart of these changes are three individuals. Obviously there is Kashikiya Hime, on the throne through a rather intricate and bloody series of events. Then there is Soga no Umako, her maternal uncle, who has been helping to keep the Soga family on top. And of course, the subject of our last couple episodes, Prince Umayado, aka Shotoku Taishi. He, of course, is credited with the very founding of the Japanese state through the 17 article constitution and the promulgation of Buddhism. This episode, I'd like to tackle some of the little things. Some of the stories that maybe didn't make it into other episodes up to this point. For this, we'll mostly look at it in a chronological fashion, more or less. As you may recall, Kashikiya Hime came to the throne in about 593, ruling in the palace of Toyoura. This was around the time that the pagoda was erected at Houkouji temple—and about the time that we are told that Shitennouji temple was erected as well. Kashikiya Hime made Umayado the Crown Prince, despite having a son of her own, as we'd mentioned previously, and then, in 594, she told Umayado and Umako to start to promulgate Buddhism, kicking off a temple building craze that would sweep the nation—or at least the areas ruled by the elites of Yamato. 
By 596, Houkouji was finished and, in a detail I don't think we touched on when talking about Asukadera back in episode 97, they appointed as commissioner one Zentoku no Omi—or possibly Zentoko, in one reading I found. This is a curious name, since “Zentoku” comes across as a decidedly Buddhist name, and they really liked to use the character “Zen”, it feels like, at this time. In fact, it is the same name that the nun, the daughter of Ohotomo no Sadehiko no Muraji, took, though the narrative is very clear about gender in both instances, despite them having the exact same Buddhist names. This name isn't exactly unique, however, and it is also the name recorded for the Silla ruler, Queen Seondeok, whose name uses the same two characters, so it is possible that at this time it was a popular name—or perhaps people just weren't in the mood to get too creative, yet. However, what is particularly interesting to me, is that the name “Zentoku” is then followed by the kabane of “Omi”. As you may recall from Episode XX, a kabane is a level of rank, but associated with an entire family or lineage group rather than an individual. So while there are times where we have seen “personal name” + “kabane” in the past, there is usually a surname somewhere in there. In this case, we aren't told the surname, but we know it because we are given the name of Zentoku's father: we are told that he was the son of none other than the “Oho-omi”, the Great Omi, aka Soga no Umako. So, in summary, one of Soga no Umako's sons took the tonsure and became a monk. I bring this little tidbit up because there is something that seems very odd to me and, at the same time, very aristocratic, about taking vows, retiring from the world, and yet still being known by your family's title of rank. Often monks are depicted as outside of the civil rank and status system—though there were certainly ranks and titles within the priesthood. 
I wonder if it read as strange to the 8th century readers, looking back on this period. It certainly seems to illustrate quite clearly how Buddhism at this point was a tool of the elite families, and not a grass-roots movement among the common people. This also further strengthens the idea that Houkouji was the temple of the Soga—and specifically Soga no Umako. Sure, as a Soga descendant, Prince Umayado may have had some hand in it, but in the end it was the head of the Soga family who was running the show, and so he appoints one of his own sons as the chief commissioner of the temple. They aren't even trying to hide the connection. In fact, having one of his sons “retire” and start making merit through Buddhist practice was probably a great PR move, overall. We don't hear much more from Zentoku after this point, and we really know very little about him. We do know something about the Soga family, and we know that Soga no Umako has at least one other son. While we've yet to see him in the narrative—children in the Nihon Shoki are often meant to be neither seen nor heard, it would seem—Umako's other son is known to us as Soga no Emishi. Based on when we believe Soga no Emishi was born, however, he would have been a child, still, when all this was happening, and so Zentoku may have actually been his father's eldest son, taking the reins at Houkouji temple, likely setting him up to claim a role of spiritual leadership in the new religion of Buddhism. Compare this to what we see later, and also in other places, such as Europe, where it is often the second son that is sent into religious life, while the eldest son—the heir—is kept at hand to succeed the father in case anything happens. On the other hand, I am unsure if the monks of this time had any sort of celibacy that was expected of them, and I suspect that even as the temple commissioner, the tera no Tsukasa, Zentoku was keeping his hand in. 
After all, the Soga family head appears to have been staying near the temple as well, so it isn't like they were packing him off to the high mountains. Moving on, in 601 we are told that Kashikiya Hime was in a temporary palace at a place called Miminashi, when heavy rains came and flooded the palace site. This seems to be referring to flooding of Toyoura palace, which was, we believe, next to the Asuka river. I wonder, then, if that wasn't the impetus for, two years later, in 603, moving the palace to Woharida, and leaving the old palace buildings to become a nunnery. That Woharida palace is not thought to have been very far away—traditionally just a little ways north or possibly across the river. In 604, with the court operating out of the new Woharida palace, we see the institution of more continental style traditions. It includes the idea of bowing when you entered or left the palace grounds—going so far as to get on your hands and knees for the bow. Even today, it is customary to bow when entering a room—particularly a traditional room like in a dojo or similar—and it is also customary to bow when passing through a torii gate, entering into a sacred space. Of course, that is often just a standing bow from the waist, and not a full bow from a seated position. In 605, with more continental culture being imported, we see it affecting fashion. In fact, in this year we are told that Prince Umayado commanded all the ministers to wear the “hirami”. The kanji simply translates to “pleats”, but in clothing terms this refers to a pleated skirt or apron. We see examples of this in courtly clothing going back to at least the Han dynasty, if not earlier, typically tied high above the waist and falling all the way down so that only the tips of the shoes are poking out from underneath. We have a bit more on this in the historical clothing section of the Sengoku Daimyo website, sengokudaimyo.com. 
I wonder if these wrapped skirts aren't some of what we see in the embroidered Tenjukoku mandala of Chuuguuji. Court women would continue to wear some kind of pleated skirt-like garment, which would become the mo, though for men they would largely abandon the fashion, except for some very specific ritual outfits. That said, there is still an outfit used for some imperial ceremonies. It is red, with many continental and what some might consider Taoist symbols, such as dragons, the sun and moon, etc. That continuation of tradition gives us some idea of what this was and what it may have looked like back in the day. It is also very neat that we are starting to get specific pieces of potentially identifiable clothing information, even if it is only for the court nobles. The year following that, 606, we get the giant Buddha image being installed at Houkouji, aka Asukadera. Or at least, we think that is the one they are talking about, as we can't be one hundred percent certain. However, it is traditionally thought to be one and the same. The copper and gold image was commissioned a year prior, along with an embroidered image as well, but when they went to install it they ran into a slight problem: The statue was too large to fit through the doors of the kondo, the golden image hall. No doubt that caused some embarrassment—it is like ordering furniture that won't fit through the doorway, no matter how you and your friends try to maneuver it around. They were thinking they would have to cut through the doors of the kondo to create more room, and then fix it afterwards. Nobody really wanted to do that, though—whether because they thought it would damage the structural integrity of the building or they just didn't want to have to put up with an unsightly scar, it isn't clear. Finally, before they took such extreme measures, they called on the original artist, Kuratsukuri no Tori. 
He is said to be the son of the famous Shiba Tattou, and so his family was quite close with the Soga, and he seems to have had quite the eye for geometry, as we are told that he, “by way of skill”, was able to get it through the doors and into the hall. I don't know if that meant he had to somehow turn it on its side and walk it through, or something else, but whatever it was, it worked. Tori's mad Tetris skills worked, and they were able to install the giant Buddha in the hall without cutting through the doorways. For his efforts, Tori was rewarded, and he was raised up to the rank of Dainin, one of the 12 new ranks of the court. He was also given 20 cho worth of “water fields”—likely meaning rice paddies. With the income from those fields, we are told that he invested in a temple of his own: Kongoji, later known as the nunnery of Sakata in Minabuchi. For all that Buddhism was on the rise, the worship of the kami was still going strong as well. In 607 we are told that there was an edict that everyone should worship the kami of heaven and earth, and we are told that all of the noble families complied. I would note that Aston wonders about this entry, as the phrasing looks like something you could have taken right out of continental records, but at the same time, it likely reflects reality to some extent. It is hard to see the court just completely giving up on the traditional kami worship, which would continue to be an important part of court ritual. In fact, it is still unclear just how the new religion of Buddhism was viewed, and how much people understood the Buddha to be anything more than just another type of kami. Later in that same year was the mission to the Sui court, which we discussed in Episode 96. The year after, the mission returned to Yamato with Sui ambassadors, and then, in 609, those ambassadors returned to the Sui court. These were the missions of that infamous letter, where the Yamato court addressed the Sui Emperor as an equal. 
“From the child of heaven in the land where the sun rises to the child of heaven in the land where the sun sets.” It is still one of my favorite little pieces of history, and I constantly wonder if Yamato didn't understand the difference in scale or if they just didn't care. Either way, some really powerful vibes coming off that whole thing. That same year that the Sui ambassadors were going back to their court there was another engagement with foreigners. In this case the official on the island of Tsukushi, aka Kyuushuu, reported to the Yamato court that 2 priests from Baekje, along with 10 other priests and 75 laypersons had anchored in the harbor of Ashigita, in the land of Higo, which is to say the land of Hi that was farther from Yamato, on the western side of Kyuushuu. Ashigita, you may recall, came up in Episode 89 in reference to the Baekje monk—and I use that term loosely—Nichira, aka Illa. There, Nichira was said to descend from the lord of Ashigita, who was said to be Arisateung, a name which appears to be a Korean—possibly Baekje—title. So now we have a Baekje ship harboring in a land that once was ruled by a family identified, at least in their names or titles, as having come from or at least having ties with Baekje. This isn't entirely surprising, as it wouldn't have taken all that much effort for people to cross from one side to the other, and particularly during the period before there was a truly strong central government it is easy to see that there may have been lands in the archipelago that had ties to Baekje, just as we believe there were some lands on the peninsula that had ties to Yamato. One more note before we get to the heart of the matter is the title of the person who reported all these Baekje goings-on. Aston translates the title as the Viceroy of Tsukushi, and the kanji read “Dazai”, as in the “Dazaifu”, or government of the “Dazai”. 
There is kana that translates the title as Oho-mikoto-Mochi—the Great August Thing Holder, per Aston, who takes this as a translation, rather than a strict transliteration. This is the first time that this term, “Dazai” has popped up in the history, and it will appear more and more in the future. We know that, at least later, the Dazaifu was the Yamato court's representative government in Kyuushuu. The position wasn't new - it goes back to the various military governors sent there in previous reigns - but this is the first time that specific phrasing is used—and unfortunately we don't even know much about who it was referring to. The position, however, would become an important part of the Yamato governing apparatus, as it provided an extension of the court's power over Kyuushuu, which could otherwise have easily fallen under the sway of others, much as Iwai tried to do when he tried to ally with Silla and take Tsukushi by force. Given the importance of Kyuushuu as the entrypoint to the archipelago, it was in the Court's best interest to keep it under their control. Getting back to the ship with the Baekje priests on it: the passengers claimed they were on their way to Wu, or Kure—presumably headed to the Yangzi river region. Given the number of Buddhist monasteries in the hills around the Yangzi river, it is quite believable, though of course by this time the Wu dynasty was long gone. What they had not prepared for was the new Sui dynasty, as they said there was a civil war of some kind going on, and so they couldn't land and were subsequently blown off course in a storm, eventually limping along to Ashigita harbor, where they presumably undertook rest and a chance to repair their vessels. It is unclear to me exactly what civil war they were referring to, and it may have just been a local conflict. There would be rebellions south of the Yangzi river a few years later, but no indication that it was this, just a bit out of context. 
We know that the Sui dynasty suffered—it wouldn't last another decade before being dismantled and replaced by the Tang dynasty in about 618. There were also ongoing conflicts with Goguryeo and even the area of modern Vietnam, which were draining the Sui's resources and could be related to all of these issues. If so, though, it is hard to see an exact correlation to the “civil war” mentioned in the text. Given all this, two court nobles: Naniwa no Kishi no Tokomaro and Fumibito no Tatsu were sent to Kyuushuu to see what had happened, and, once they learned the truth, help send the visitors on their way. However, ten of the priests asked to stay in Yamato, and they were sent to be housed at the Soga family temple of Houkouji. As you may recall, 10 monks was the necessary number to hold a proper ordination ceremony, funnily enough. In 610, another couple of monks showed up—this time from Goguryeo. They were actually sent, we are told, as “tribute”. We are told that one of them was well read—specifically that he knew the Five Classics—but also that he understood how to prepare various paints and pigments. A lot of paint and pigments were based on available materials as well as what was known at the time, and so it is understandable, to me, why you might have that as a noted and remarkable skill. We are also told that he made mills—likely a type of handmill. These can be easily used for helping to crush and blend medicines, but I suspect it could just as easily be used to crush the various ingredients for different pigments. A type of handmill, where you roll a wheel in a narrow channel, forward and back, is still in use today throughout Asia. In 611, on the 5th day of the 5th month, the court went out to gather herbs. They assembled at the pond of Fujiwara—the pond of the wisteria field—and set out at sunrise. 
We are told that their clothing matched their official cap colors, which was based on their rank, so that would seem to indicate that they were dressed in their court outfits. In this case, though, they also had hair ornaments made of gold, leopard's tails, or birds. That leopard's tail, assuming the description is accurate, is particularly interesting, as it would have had to have come from the continent. This ritual gathering of herbs would be repeated on the 5th day of the 5th month of both 612 and 614. If that date seems familiar, you might be thinking of the modern holiday of Tango no Sekku, aka Kodomo no Hi. That is to say: Boy's Day or the more gender neutral “Children's Day”. It is part of a series of celebrations in Japan known today as “Golden Week”, when there are so many holidays crammed together that people get roughly a week off of work, meaning that a lot of travel tends to happen in that period. While the idea of “Boy's Day” probably doesn't come about until the Kamakura period, Tango no Sekku has long been one of the five seasonal festivals of the court, the Gosekku. These included New Year's day; the third day of the third month, later to become the Doll Festival, or Girl's Day; the seventh day of the seventh month, during Tanabata; and the 9th day of the 9th month. As you can see, that is 1/1, 3/3, 5/5, 7/7, and 9/9. Interestingly, they skipped over 11/11, possibly because that was in the winter time, based on the old calendar, and people were just trying to stay warm. Early traditions of Tango no Sekku include women gathering irises to protect the home. That could connect to the practice, here, of “picking herbs” by the court, and indeed, many people connect the origins of Tango no Sekku back to this reign specifically because of these references, though there is very little said about what they were doing, other than picking herbs in their fancy outfits. We are given a few more glimpses into the lives of the court in a few other entries. 
In 612, for instance, we have a banquet thrown for the high functionaries. This may have been a semi-regular occasion, but this particular incident was memorable for a couple of poems that were bandied back and forth between Soga no Umako and Kashikiya Hime. He toasted her, and she responded with a toast to the sons of Soga. Later that year, they held a more somber event, as Kitashi Hime was re-interred. She was the sister to Soga no Umako, consort of Nunakura Futodamashiki no Ohokimi, aka Kimmei Tenno, and mother to both Tachibana no Toyohi, aka Youmei Tennou, and Kashikiya Hime, Suiko Tennou. She was re-buried with her husband at his tomb in Hinokuma. During this period, various nobles made speeches. Kicking the event off was Abe no Uchi no Omi no Tori, who made offerings to her spirit, including around 15,000 utensils and garments. Then the royal princes spoke, each according to rank, but we aren't given just what they said. After that, Nakatomi no Miyatokoro no Muraji no Womaro gave the eulogy of the Oho-omi, presumably speaking on Umako's behalf, though it isn't exactly clear why; Umako was certainly getting on in years. Then, Sakahibe no Omi no Marise delivered the written eulogies of the other families. And here we get an interesting glimpse into court life as we see a report that both Nakatomi no Womaro and Sakahibe no Marise apparently delivered their speeches with great aplomb, and the people listening were quite appreciative. However, they did not look quite so fondly on the speechifying of Abe no Tori, and they said that he was less than skillful. And consider that—if you find public speaking to be something you dread, imagine if your entire reputation hung on ensuring that every word was executed properly. A single misstep or a bad day and suddenly you are recorded in the national history as having been just the worst. In fact, his political career seems to have tanked, as we don't hear much more about him after that. 
612 also saw more immigrants bringing more art and culture. The first was a man from Baekje. He did not look well—he had white circles under his eyes, we are told, possibly indicating ringworm or some other infection. It was so bad that the people on the ship with him were thinking about putting him off on an island to fend for himself. He protested that his looks were not contagious, and no different than the white patches of color you might see on horses or cattle. Moreover, he had a talent for painting figures and mountains. He drew figures of the legendary Mt. Sumeru, and of the Bridge of Wu, during the period of the Southern Courts, and the people were so taken by it that they forestalled tossing him overboard. He was eventually known as Michiko no Takumi, though more colloquially he was known as Shikomaro, which basically was a nickname calling him ugly, because judging people based on appearance was still totally a thing. The other notable immigrant that year was also a man of Baekje, known to us as Mimachi, or perhaps Mimashi or Mimaji. He claimed to know the music and dancing of the Wu court—or at least some continental dynasty. He settled in Sakurawi and took on students who were basically forced to learn from him. As if a piano teacher appeared and all the children went to learn, but now it isn't just your parents and their high expectations, but the very state telling you to do it. So… no pressure, I'm sure. Eventually, Manu no Obito no Deshi—whose name literally means “student” or “disciple”—and Imaki no Ayabito no Seibun learned the teachings and passed them down to others. This would appear to be the masked dances known as Gigaku. If you know about early Japanese music and dance you may have heard of Gagaku, Bugaku, and Noh theater. Gagaku is the courtly music, with roots in apparently indigenous Japanese music as well as various continental sources, from the Korean peninsula all the way down to Southeast Asia. 
Indeed, the musical records we have in Japan are often the only remaining records of what some of the continental music of this time might have sounded like, even though the playing style and flourishes have changed over the centuries, and many scholars have used the repertoire of the Japanese court to help work backwards to try and recreate some of the continental music. The dances that you often see with Gagaku musical accompaniment are known as Bugaku, and most of that was codified in the latter years of the Heian era—about the 12th century. Then there is the famous masked theater known as Noh, which has its origins in a variety of traditions, going back to at least the 8th century and really brought together around the 14th century. All of these traditions, however, are preceded by Gigaku, this form of masked dance that came over in the 7th century, and claims its roots in the area of “Wu” rather than “Tang”, implying that it goes back to traditions of the southern courts of the Yangzi river region. Gigaku spread along with the rest of continental culture, along with the spread of Buddhism and other such ideas. From what we can tell, it was a dominant form of music and dance for the court, and many of the masks that were used are preserved in temple storehouses such as the famous Shosoin at the Todaiji in Nara. However, as the centuries rolled by, Gigaku was eventually replaced at court by Bugaku style dances, though it continued to be practiced up through at least the 14th century. Unfortunately, I know of no Gigaku dances that survived into the modern day, and we are left with the elaborate masks, some illustrations of dancers, and a few descriptions of what it was like, but that seems to be it. 
From what we can tell, Gigaku—also known as Kure-gaku, or Kure-no-utamai, meaning Music or Music and Dances of Wu—is first noted back in the reign of Nunakura Futodamashiki, aka Kimmei Tennou, but it wasn't until the reign of Kashikiya Hime that we actually see someone coming over and clearly imparting knowledge of the dances and music—Mimashi, mentioned above. We then see the dances mentioned at various temples, including Houryuuji, Toudaiji, and others. Of course, as with many such things, Shotoku Taishi is given credit for spreading Gigaku through the Buddhist temples, and the two do seem to have gone hand in hand. We know a little bit about the dances from the masks and various writings. The masks are not random, and a collection of Gigaku masks will have generally the same set of characters. These characters appear to have been organized in a traditional order. A performance would start with a parade and a sutra reading—which I wonder if that was original or if it was added as they grew more connected to the Buddhist temple establishment. And then there was a lion dance, where a young cub would pacify an adult lion. Lion dances, in various forms, continue to be found throughout East Asia. Then the characters come into play and there are various stories about, for example, the Duke of Wu, and people from the “Hu” Western Regions—that is to say the non-Han people in the Western part of what is now China and central Eurasia. Some of these performances appear to be serious, while others may have been humorous interludes, like when a demon assaults the character Rikishi using a man's genitals while calling for the “Woman of Wu”. That brings to mind the later tradition of ai-kyougen; similarly humorous or lighthearted episodes acted out during Noh plays to help break up the dramatic tension. Many aspects of Gigaku would go on to influence the later styles of court music and dance. 
Bugaku is thought to have some of its origins in masked Gigaku dancers performing to the various styles of what became known as Gagaku music. There are also examples of some of the characters making their way into other theatrical traditions, such as Sarugaku and, eventually, Noh and even folk theater. These hints have been used to help artists reconstruct what Gigaku might have been like. One of the key aspects of Gigaku is that for all they were telling stories, other than things like the recitation of the sutras, the action of the story appears to have been told strictly through pantomime in the dances. This was accompanied by the musicians, who played a variety of instruments during the performance that would provide the musical cues for the dancers-slash-actors. There was no dialogue, however, but the names of the various characters appear to have been well known, and based on the specifics of the masks one could tell who was who and what was going on. This is similar to how, in the west, there were often stock characters in things like the English Mummers plays or the Commedia dell'arte of the Italian city-states, though in Gigaku those characters would not speak at all, and their story would be conveyed simply through pantomime, music, and masks. There have been attempts to reconstruct Gigaku. Notably there was an attempt in the 1980s, in coordination with a celebration of the anniversary of Todaiji, in Nara, and it appears that Tenri University may continue that tradition. There was also another revival by famed Kyougen actor Nomura Mannojo, uncle to another famous Kyougen actor turned movie star, Nomura Mansai. Mannojo called his style “Shingigaku”, which seems to be translated as either “True Gigaku” or “New Gigaku”, and he took that on tour to various countries. You can find an example of his performance from the Silk Road Theater at the Smithsonian Folklife Festival in Washington, DC back in 2002, as well as elsewhere. 
It does appear that he's changed things up just a little bit, however, based on his layout of the dances, but it is an interesting interpretation, nonetheless. We may never truly know what Gigaku looked and sounded like, but it certainly had an impact on theatrical and musical traditions of Japan, and for that alone it perhaps deserves to be mentioned. And I think we'll stop right there, for now. There is more to get through, so we'll certainly have a part two as we continue to look at events of this reign. There are stories of gods and omens. There is contact with an island off the southern coast of Kyuushuu. There are more trips to the Sui court. Much of that is coming. Until then, I'd like to thank you once again. I can hardly believe we reached one hundred episodes! And it comes just as we are about to close out the year. As usual, I'll plan for a recap episode over New Year's, and then I'll plan to get back into everything the episode after that, but this closes out the year. I hope everyone has a wonderful new year, however you celebrate and, as always, thank you for listening and for all of your support. If you like what we are doing, tell your friends and feel free to rate us wherever you listen to podcasts. If you feel the need to do more, and want to help us keep this going, we have information about how you can donate on Patreon or through our KoFi site, ko-fi.com/sengokudaimyo, or find the links over at our main website, SengokuDaimyo.com/Podcast, where we will have some more discussion on topics from this episode. Also, feel free to Tweet at us at @SengokuPodcast, or reach out to our Sengoku Daimyo Facebook page. You can also email us at the.sengoku.daimyo@gmail.com. And that's all for now. Thank you again, and I'll see you next episode on Sengoku Daimyo's Chronicles of Japan.
Ever had an experience that at first jostled you but then left you with lasting joy? I've had several, but the one I'll be talking about this Sunday in my Week 13, "Flourishing Trees" series sermon, is from just last summer. That turbulent but triumphant experience is somewhat like the experience we get from this week's passage, Matthew 7:13-23 (read it now to get the turbulence over with). This Sunday will be a special one at "new Hillside", mainly because we'll be back in our beautiful worship home after several months of "camping" in the Community Center. Friends, the "flood" is over. Like Noah and his crew exiting the ark, we're back on dry land! Really hoping to see you for this new start. Remember, your presence alone advances our Hillside mission to 'Help everyone know and follow King Jesus." So please join your church family in person if you're healthy and not on a road trip! Worship with Hillside Covenant Church as Dan Seitz teaches from Matthew 7:13-23. To view or download a copy of this week's bulletin and sermon notes follow this link: https://u.pcloud.link/publink/show?code=XZerjcVZQhoEyEiuFhpG4AlcAhuiCQAaIMWy If you are new to Hillside and are looking for ways to get connected and build community, visit our “Get Connected” page: https://hillsidecovenant.churchcenter.com/pages/get-connected We welcome you to Hillside and are so glad you joined us today! To give in support of Hillside Covenant and its ministries follow this link: https://hillsidecovenant.churchcenter.com The full service from Hillside Covenant Church, Sunday, November 5, 2023.
The sermon from Hillside Covenant Church, Sunday, November 5, 2023.
In this interview, Yoshi Tatsu touches on topics such as: NJPW Highlights: The Dojo Class of 2002, known as the last class of the Inoki Era; what it was like trying out for NJPW alongside 90 people, and how many actually made it through; whether Yoshi Tatsu was a fan of wrestling before becoming a wrestler; and why wrestlers in NJPW use their real names. WWE Highlights: the struggles he first faced coming to America; why and how it went from Yoshitatsu to Yoshi Tatsu in WWE; WWE giving him name changes, how Japanese people would laugh at his name, and what he told the office; ideas Dusty suggested; June 30th, 2009, and finding out his debut opponent was Shelton Benjamin; working with Brodie Lee and Bray Wyatt; anyone in WWE he wishes he'd had a match with; starting the Yoshi Unleashed Podcast; being the first Japanese wrestler to start in WWE developmental; winning the battle royal at WrestleMania XXVI by last eliminating Zack Ryder (Matt Cardona), and what it was like to have that moment; and, given the hardcore/death-match style matches Cardona has done and the ones Tatsu has been doing in AJPW, whether he would face Matt Cardona in a death match. AJPW: How does one mentally prepare to have a death match? Yoshi Tatsu is Perched On The Top Rope! Support this podcast at — https://redcircle.com/perchedonthetoprope/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Tap on the Wrist is back for Season 6 focusing on women in alcohol history! For our first episode back, Laura tells us the history of the Poor Clare Nuns and their secret liquor recipe, while Vanessa tells the story of Tatsu'uma Kiyo and her impact on the sake industry! If you enjoyed this episode, be sure to subscribe, rate, & review. Music credit: 'Booze and Blues' by Ma Rainey.
Moobarkfluff! Wow, what a lot of stuff this week. We have lots of upcoming events, we visit the Transfurmation lab, talk about some creature oddness, visit with Taebyn's honey, Tatsu; a movie review from Cheetaro; and well, that's just some of the hilarity you can expect from BFFT! Moobarkfluff all you furs! Migration | Official Trailer | Taebyn Merch at Fourthwall | Wild Bills Soda | Merch at Redbubble | Merch at Bonfire | Merch at Fourthwall. This podcast contains adult language and adult topics. It is rated M for Mature. Listener discretion is advised. Support the show. Thanks to all our listeners and to our staff: Bearly Normal, Rayne Raccoon, Taebyn, and Ziggy the Meme Weasel. You can send us a message on Telegram at BFFT Chat, or via email at: bearlyfurcasting@gmail.com
It's gonna be deep and underground this week, we'll hear from Dimmish, Victor Stancov, Lucalag, & more, drum n bass from Retromigration and Breakage, ending with new tunes from Roosevelt and Barry Can't Swim!! TRACKLISTING: 1) Local Options — Check The Basement — Tavern Cuts EP — No Fuss Records 2) Amy Dabbs — Dandelion Theory — The Bobcat Special EP — Shall Not Fade 3) Dimmish — Sayonara Wall Street — Every Other Day LP — Shall Not Fade 4) Matt Gillespie — Give It Up — Give It Up (Single) — Floppy Disks 5) Victor Stancov — Untitled Pass — Untitled Sequence — Sense Traxx 6) Tatsu, Moo Ve — World In Madness — World In Madness (Single) — Induction Muzic 7) Hooverphonic — Mad About U (Kcik Edit) — Mad About U (Single) — self-release on Bandcamp 8) Lucalag — Relations — Waves EP — Piston Recordings 9) Retromigration ft Nephews — Bad Knees — Straight Foxin' — WOLF Music 10) Nia Archives — That's Tha Way Life Goes (Breakage Remix) — Sunrise Bang Ur Head Against Tha Wall (Remixes) — HIJINXX 11) Roosevelt — Ordinary Love — Ordinary Love (Single) — Counter Records 12) Barry Can't Swim — Woman — Woman (Single) — Ninja Tune
Patrick and Jacob sit down with Tatsu Hashimoto, Professor of AI at Stanford, to discuss the incredible open source projects from his research group like Alpaca and AlpacaFarm, whether data, algorithms, fine-tuning or RLHF is most important for performance, if AI is liberal or conservative, and much more! (0:00) - intro (1:05) - journey to Stanford (2:50) - origins of Alpaca (6:08) - capabilities of the Alpaca model (16:39) - the future of AI (20:07) - AlpacaFarm (21:37) - how to improve language models (29:15) - do language models form opinions? (32:15) - how to solve bias in ai (34:18) - how does academia fit into the world of AI (42:01) - over-hyped/under-hyped (46:35) - questions Tatsu doesn't have time for With your co-hosts: @jasoncwarner - Former CTO GitHub, VP Eng Heroku & Canonical @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
Mind The Gap #394 — House & Best of June Playlist! I've got another exciting crate full of choons for you this week with Pirate Copy, Grant Nelson, & Mark Hawkins, to name a few…Then, it's my Best Of June Playlist featuring Tatsu, Tete De La Course, & more…DISC 2 has sweet beats from my mates The Black Steel Brothers, made right here in Kansas City! GENRES TAGGED: house, deep house, hip-hop/lo-fi TRACKLISTING: 1) Karl Seery, Cody — You Feel — You Feel (Single) — Nervous Records 2) Digitalism — 4th Floor — Back To Haus — Running Back 3) Pirate Copy — I Can't Stop — I Can't Stop (Single) — Kaluki Music 4) Home Team — Tiger Style — Neon Genesis — Plastik People Digital 5) Grant Nelson — Spellbound — More Than House Music — House Place Records 6) Mark Hawkins — How Do I Know (Club Mix) — How Do I Know (Single) — Aus Music 7) Tete De La Course — I Got It (Niles Cooper Remix) — I Got It (Single) — Snatch!Records 8) Audiojack — Haze — State Of Nirvana EP — Shall Not Fade 9) Tatsu — Love Shuttle — Crazy Blonde EP — Induction Music 10) Matt Gillespie — Organ Bump — Organ Bump — Plastik People Digital 11) Nu-Cleo — Suntune — Never Satisfied — Pleasant Systems 12) The Black Steel Brothers — Takeoff — Flight — Sockett Records 13) The Black Steel Brothers — Climbing — Flight — Sockett Records
I've got deep house from the likes of Jansons, Nu-Cleo, Tatsu, & more…Then, it's breakbeat from Gynoid 74, a Special Request jungle edit, and a new one by ASC…rounding out the show in DISC 2 is Eddie Merced!! TRACKLISTING: 1) Katermurr — Underground — Underground (Single) — Plastik People Digital 2) Matt Gillespie — Organ Bump — Organ Bump (Single) — 4th Set Records 3) Jansons — Get It On — Nite Life — PIV 4) Nu-Cleo — Suntune — Never Satisfied — Pleasant Systems 5) Crew Deep — Location (T.Markakis Extended Remix) — Location (Remix Version) — I Records 6) Tatsu — Love Shuttle — Crazy Blonde EP — Induction Music 7) Manuel Kane — You Got It (Unreleased Extended Version) — You Got It (Single) — I Records 8) Gynoid 74 — Rain — Shroom EP — T4T LUV NRG 9) Nia Archives — So Tell Me (Special Request Remix) — Sunrise Bang Ur Head Against Tha Wall (Remixes) — HIJINXX 10) ASC — Deep Dive — The Depths Of Space LP — Over/Shadow 11) Eddie Merced — Project MK-Ultra — Spies In Berlin EP — R4808n
Tatsu Tsuchida discusses his experience transitioning from a Japanese-only shop to an all-makes and all-models shop, his marketing strategy, and his experience taking over an existing business. He also discusses the challenges of owning a family business and the importance of having an exit strategy. The episode also includes a tour of Tsuchida's second location. Tatsu Tsuchida, Tokyo Automotive, Costa Mesa and Placentia, CA. Tatsu's previous episodes HERE Watch Full Video Episode HERE (00:00:01) Tatsu Tsuchida talks about his experience owning an all-makes and all-models auto repair shop. (00:03:44) Tatsu Tsuchida discusses his marketing strategy for his all-makes and all-models auto repair shop. (00:07:44) Tatsu talks about his experience taking over an existing business and the challenge of transitioning to a new team. (00:09:34) Tatsu discusses the difficulty of easing immigrant parents out of business and his own exit strategy. (00:14:40) Tatsu Tsuchida talks about the inspiration behind the Fast and Furious movies. (00:16:03) Tatsu Tsuchida tours his 10,000 square feet auto repair shop in Costa Mesa, California. Thanks to our Partner, Dorman Products. Dorman gives people greater freedom to fix vehicles by constantly developing new repair solutions that put owners and technicians first. Take the Dorman Virtual Tour at www.DormanProducts.com/Tour Connect with the Podcast: -Join our Insider List: https://remarkableresults.biz/insider -All books mentioned on our podcasts: https://remarkableresults.biz/books -Our Classroom page for personal or team learning: https://remarkableresults.biz/classroom -Buy Me a Coffee: https://www.buymeacoffee.com/carm -The Aftermarket Radio Network: https://aftermarketradionetwork.com -Special episode collections: https://remarkableresults.biz/collections
Moobarkfluff! Taebyn's hubby, Tatsu, joins us to impart Rhody Wisdom! We get some math, do a little This or That, tell some really bad puns and jokes, and generally have a typical BFFT time! So join us, won't you, on our furtastic adventure! Moobarkfluff!
Good Furry Award Voting
Wild Bills Soda
Fursona Non-Grata Link
Merch at Redbubble
Merch at Bonfire
This podcast contains adult language and adult topics. It is rated M for Mature. Listener discretion is advised.
Support the show
Thanks to all our listeners and to our staff: Bearly Normal, Rayne Raccoon, Taebyn, and Ziggy the Meme Weasel.
You can send us a message on Telegram at BFFT Chat, or via email at: bearlyfurcasting@gmail.com
[Recorded on March 20 and released now while Jeremy is on vacation] Alyssa forgets to disclose to Jeremy that she is also going to Japan at the end of April, and the hype is even realer than before. Jeremy is watching Nigeru wa Haji da ga Yaku ni Tatsu and playing the Resident Evil 2 Remake and Sons of the Forest. And both give minimal updates on Like a Dragon: Ishin. They also brush over the standout releases for April and the 2023 Oscars. Spoilers for the finale of HBO's The Last of Us and for the game's Part 2 run from 26:50 to 37:50. New podcast theme music by the wonderful Joey Mossman! Please check out his Instagram and SoundCloud. Find us on our Twitter, Official Website, and community Discord. Leave us a review on Apple Podcasts if you enjoy our podcast, and subscribe on our platform!
Next week: Spy x Family Season 2!
Featuring Gozen from AnimeUproar as Anime Virgin, Briggs from BriggsADA, and Truck Chan: https://twitter.com/TruckChanVtuber
Truck Chan Twitch: https://www.twitch.tv/truckchanvtuber
Don't forget you can check out all things casual at: https://linktr.ee/Casual_Empire Also you can email us at: animecasualsreal@gmail.com In this episode we talk about the second season of "The Way of the Househusband," as Tatsu is still up to his shenanigans.
The Tatsu-no-kuchi event that resolved Nichiren's mission and cemented his commitment to propagating the Buddha's self-awakening taught in the Lotus Sutra.
The remonstration that would lead to Tatsu-no-kuchi, and Nichiren's admonition to Hachiman and the Sun Goddess of Japan.
Another duo episode! In this one I spoke with Tim Hughes and Tatsu Osada, accomplished marathoners and triathletes in the DC area. The two have been friends since 2014, when they met at the fateful polar vortex run referenced in Mike Katz's episode. Tatsu and Tim are supportive friends AND intense competitors. For many of their races, they have "bets" over who wins. This includes betting on whose version of a peanut butter and jelly sandwich is "correct" and betting on which of them will have to buy the other a beer every month for the rest of their lives. (Listen in to find out who won what.) Tim talks about going from a 4+ hour marathoner to a 2:35 marathoner. Tatsu discusses crushing six Ironman triathlons. The two mention their favorite races and what advice they have for those hoping to get faster and stronger. Shoutouts to Pacers Running and the YTri Program in this episode!
AJ, Sara and Tim discuss Batman and the Outsiders #28 from 1985, “Abduction from Below”, the appeal of yachts, Tatsu being a friend, and Saturday morning cartoon lineup.
Our guests in this episode of The Auto Repair Marketing Podcast are Tim Chakarian of Bimmer PHD and Tatsu Tsuchida of Tokyo Automotive. We talk about a lot in this episode, including guerrilla marketing, 20 groups, and industry associations. Tim and Tatsu both run incredible shops and are very involved at an industry level. They believe in learning from and helping other shop owners. Talking Points Tim and Tatsu both have had great success in using “bird droppings” to market their shops Both also have gifts that they give to their clients The relationships they have built with each other and other shop owners through their 20 group have had a large impact on their business and themselves personally Being a member of ASCCA has allowed them to work with other local shop owners and create relationships that benefit their clients How To Get In Touch Group - https://www.facebook.com/groups/autorepairmarketingmastermind (Auto Repair Marketing Mastermind) Website - https://shopmarketingpros.com/meet-the-pros/ (shopmarketingpros.com) Facebook - https://www.facebook.com/shopmarketingpros/ (facebook.com/shopmarketingpros) Get the Book - http://shopmarketingpros.com/book (shopmarketingpros.com/book) Instagram - @shopmarketingpros Questions/Ideas - podcast@shopmarketingpros.com Thanks to our partner RepairPal. Visit the Web https://repairpal.com/ (HERE)
"I made up a plan which was run a distance, and then run a little bit more, and then run a little bit more...then run even more miles in a week." Mike Katz was one of my first DC running friends upon moving back to the district in 2019, and I am lucky because he is a genuine and wonderful human. He is also a very smart scientist in DC! For this episode, we sat down in Logan Circle to discuss how Katz got into running (his friend, Laura, bullied him a lot), turning from a sitter to a mover, his running adventures with college pals turned post-college pals, Tatsu and Laura, his first Pacers group run in a polar vortex, dealing with injury (some sage advice at the end of this episode), training for long distance races, biking and walking around DC, and Katz's role as Skechers Ambassador! Special thanks to Katz for testing out the new mic with me. And, as always, thanks to Dan Hoffman for the audio. I hope you enjoy and stay engaged with the pod! Good luck to everyone running the DC Half this weekend!
On today's podcast Eric is joined by Matt Harris to discuss the latest news from the Houston restaurant and bar scene, including Tim Love shuttering his 3 Houston restaurants, Luis Rangel's expansion into seafood, and a Canadian favorite now open in Katy. In the Restaurants of the Week segment, Pacha Nikkei and Neo are featured. In the Guest of the Week portion, Eric is joined by Shion Aikawa of Ramen Tatsu-Ya. Shion speaks with Eric about what it means to be the senior VP of culture for a restaurant group, memories from the early days of bringing ramen restaurants to Texas, building up the following, why they decided to come to Houston, adapting the Austin dining experience to Houston, what some of their goals are, their other concepts, whether any of the other concepts would expand into Houston, their new BBQ concept coming to Austin, and much more! Follow Eric on Instagram and Twitter, plus check out some of his latest articles at Culturemap.com:
Texas Celebrity Chef Tim Love Pulls the Plug on His 3 Houston Restaurants
New Gatsby-Themed Seafood Restaurant Jazzes Up Montrose with Roaring '20s Vibe
Canadian Favorite Coffee & Doughnut Shop Tim Hortons Now Open in Katy
Beloved Bellaire Burger Restaurant Suddenly Shutters Despite Celebrated Comeback
New Texas-Inspired Restaurant Fires Up Steaks and Barbecue in Garden Oaks
New Pizza Restaurant Slices into Houston with Drive-Thru and Affordable Pies
Watch the live stream: Watch on YouTube
About the show: Sponsored by us! Support our work through our courses at Talk Python Training, the Test & Code Podcast, and our Patreon supporters.
Special guest: Gina Häußge, creator & maintainer of OctoPrint

Michael #1: beanita
- Local MongoDB-like database prepared to work with Beanie ODM
- So, you know Beanie: Pydantic + async + MongoDB
- And you know Mongita: Mongita is to MongoDB as SQLite is to SQL
- Beanita lets you use Beanie, but against Mongita rather than a server-based MongoDB server

Brian #2: The Good Research Code Handbook
- By Patrick J Mineault, "for grad students, postdocs and PIs (principal investigators) who do a lot of programming as part of their research."
- Lessons: setup (git, virtual environments, project layout, packaging, cookiecutter); style (style guides, keeping things clean); coding (separating concerns, separating pure functions from those with side effects, pythonic-ness); testing (unit testing, testing with side effects; an incorrect definition of end-to-end tests, but a good job at covering the other bits); documentation (comments, tests, docstrings, README.md, usage docs, tutorials, websites, documenting pipelines and projects); social aspects (reviews, pairing, open source, community); a sample project; extras (testing example, good tools to use)

Gina #3: CadQuery
- Python lib to build parametric 3D CAD models
- Can output STL, STEP, AMF, SVG and some more
- Uses the same geometry kernel as FreeCAD (OpenCascade)
- Also available: desktop editor, Jupyter extension, CLI
- Would recommend the Jupyter extension; the app seems a bit behind latest development
- The Jupyter extension is easy to set up on Docker and comes with a nice 3D preview pane
- Was able to create a basic parametric design of an insert for an assortment box easily
- Python 3.8+, not yet 3.11 (OpenCascade related)

Michael #4: Textinator
- Like TextSniper, but in Python
- Simple macOS status bar / menu bar app to automatically detect text in screenshots
- Built with RUMPS: Ridiculously Uncomplicated macOS Python Statusbar apps
- Take a screenshot of a region of the screen using ⌘ + ⇧ + 4 (Cmd + Shift + 4); the app will automatically detect any text in the screenshot and copy it to your clipboard.
- How Textinator works: at startup, Textinator starts a persistent NSMetadataQuery Spotlight query (using the pyobjc Python-to-Objective-C bridge) to detect when a new screenshot is created. When the user creates a screenshot, the NSMetadataQuery fires and Textinator performs text detection using a Vision VNRecognizeTextRequest call.

Brian #5: Handling Concurrency Without Locks
- "How to not let concurrency cripple your system," by Haki Benita
- "…common concurrency challenges and how to overcome them with minimal locking."
- Starts with a Django web app: a URL shortener that generates a unique short URL and stores the result in a database so it doesn't get re-used.
- Discusses collisions when two users check, then store, keys at the same time; locking problems in general; utilizing the database's ability to ensure some items are unique (in this case PostgreSQL); and updating your code to take advantage of database constraint support so you can do less locking within your code.

Gina #6: TatSu
- Generates parsers from EBNF grammars (or ANTLR)
- Can compile the model (similar to regex) for quick reuse, or generate Python source
- Many examples provided
- Active development, Python 3.10+

Extras

Michael:
- Back on 285 we spoke about PEP 690. Now there is a proper blog post about it.
- Expedited release of Python 3.11.0b3: due to a known incompatibility with pytest in the previous beta release (Python 3.11.0b2), and after some deliberation, the Python release team decided to do an expedited release of Python 3.11.0b3 so the community can continue testing their packages with pytest, and therefore testing the betas as expected.
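The "Handling Concurrency Without Locks" item above centers on letting a database uniqueness constraint, rather than application-level locking, resolve races like two users claiming the same short-URL key. A minimal sketch of that idea, with stdlib sqlite3 standing in for the article's PostgreSQL setup (the table and function names here are made up for illustration):

```python
import sqlite3
import secrets

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE short_urls (key TEXT PRIMARY KEY, url TEXT NOT NULL)")

def shorten(url, length=4):
    # Check-then-insert under a lock would be racy across processes; instead,
    # just insert and let the PRIMARY KEY constraint reject duplicates.
    while True:
        key = secrets.token_urlsafe(8)[:length]
        try:
            conn.execute("INSERT INTO short_urls (key, url) VALUES (?, ?)", (key, url))
            return key
        except sqlite3.IntegrityError:
            continue  # key collision: another writer claimed it first, retry

key = shorten("https://example.com")
row = conn.execute("SELECT url FROM short_urls WHERE key = ?", (key,)).fetchone()
```

The point of the pattern is that the retry loop only runs on actual collisions, so the common path needs no coordination at all.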
- (via Python Weekly) Kagi search, via Daniel Hjertholm: not really Python related, but if I know Michael right, he'll love the new completely ad-free and privacy-respecting search engine kagi.com. I've used kagi.com since their public beta launched, mainly to search for solutions to Python issues at work. The results are way better than DuckDuckGo's results, and even better than Google's! Love the Programming lens and the ability to up/down prioritize domains in the results. Their FAQ explains everything you need to know: https://kagi.com/faq
- Looks great, but not sure about the pricing justification (32 sec of compute = $1); that's either 837x more than all of Talk Python + Python Bytes, or more than 6,700x more than just one of our sites/services. (We spend about $100/mo on 8 servers.) But they may be buying results from Google and Bing, and that could be the cost. Here's a short interview with the man who started Kagi.

Gina:
- rdserialtool: reads out low-cost USB power monitors (UM24C, UM25C, UM34C) via BLE/pybluez. Amazing if you need to monitor the power consumption/voltage/current of some embedded electronics on a budget. Helped me solve a very OctoPrint-development-specific problem. Python 3.4+
- nodejs-bin, by Sam Willis: https://twitter.com/samwillis/status/1537787836119793667 Install Node.js via PyPI / as a dependency; still very much an alpha, but looks promising. Makes it easier to obtain a full-stack environment. Very interesting for end-to-end testing with JS-based tooling, or for packaging a frontend with your Python app. See also nodeenv, which does a similar thing, but with additional steps.

Joke: Rejected Github Badges
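The TatSu parser generator mentioned above compiles an EBNF grammar into a reusable parser object, much like re.compile does for regular expressions. A minimal sketch, assuming TatSu is installed (`pip install tatsu`); the toy calculator grammar below is illustrative, not from the episode:

```python
import tatsu

# A tiny EBNF grammar for "+"-separated integers (a made-up example).
GRAMMAR = r'''
    @@grammar::Calc
    start = expr $ ;
    expr = term { '+' term } ;
    term = /\d+/ ;
'''

parser = tatsu.compile(GRAMMAR)  # compile once, then reuse, like re.compile
ast = parser.parse('1 + 2 + 3')  # returns a parse tree of the matched terms
print(ast)
```

From the same grammar, TatSu can alternatively generate standalone Python parser source, which is the "generate python source" path the notes refer to.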
Shay and Ro talk about the CANON ship from The Way of the Househusband, Tatsu and Miku.
Shay's twitter
Fic Recs:
Rumor Has It by rosethornli
Dragons Banking Through Beautiful Skies by Caeria
Support the show (https://www.patreon.com/theonetruepod)
Travis County launches a new program for rental and mortgage assistance. The team behind Ramen Tatsu-Ya announces a new project. The CI Morning Breakdown is a production of Community Impact Newspaper. It is produced by Olivia Aldridge with editing by Marie Leonard. Weather and allergy reports are sourced from www.weather.com and AccuWeather.
Sean and Alex flew to California for a visit to Six Flags Magic Mountain. In this episode we discuss our trip, as well as hot topics such as Tatsu repaint, Wonder Woman construction, relocated rides, Goliath repaint, and more!
Andrew Orolfo is here. We try to figure out why Andrew is so chill and indifferent while talking about Tatsu ramen, online porn games, getting disrespected by fellow comedians, network marketing meetings, scientology tests, white people fusion food, fighting high school kids, Eddie Murphy is Asian, beating yourself up mentally, fearlessness towards death, the issue with indifference in dating, & more.
Deb leads the crew through Shonen Jump's hot new title, DanDaDan, by Yukinobu Tatsu. This is a spicy manga, and it makes for a very spicy episode! It's not for the kiddos, but does that automatically make it perfect for Chip? Show notes and more info at mangasplaining.com.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy