Podcasts about unthinking

  • 31 podcasts
  • 40 episodes
  • 53m avg duration
  • 1 new episode monthly
  • Latest: Mar 9, 2024

POPULARITY (chart: 2017-2024)



Latest podcast episodes about unthinking

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Top 5 Research Trends + OpenAI Sora, Google Gemini, Groq Math (Jan-Feb 2024 Audio Recap) + Latent Space Anniversary with Lindy.ai, RWKV, Pixee, Julius.ai, Listener Q&A!


Mar 9, 2024 • 108:52


We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy; send any questions about Speaker CFPs and Sponsor Guides you have!

Alessio is now hiring engineers for a new startup he is incubating at Decibel. The ideal candidate is an ex-technical-co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc.). Reach out to him for more!

Thanks for all the love on the Four Wars episode! We're excited to develop this new "swyx & Alessio rapid-fire thru a bunch of things" format with you, and feedback is welcome.

Jan 2024 Recap

The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024.

Feb 2024 Recap

The second half catches you up on everything that was topical in Feb, including:

* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan
* Google Gemini Pro 1.5 - 1m long context, video understanding
* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs Dylan math)
* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)
* Grimes' poetic take: Art for no one, by no one
* F*** you, show me the prompt

Latent Space Anniversary

Please also read Alessio's longform reflections on One Year of Latent Space!

We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information.

Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!) LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs.

The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake!

We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:

* Balázs Némethi
* Sylvia Tong
* RJ Honicky
* Jan Zheng

Our birthday wishes for the super loyal fans reading this: tag @latentspacepod on a tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend!

As always, feedback is welcome.

Timestamps

* [00:03:02] Top Five LLM Directions
* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)
* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)
* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)
* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)
* [00:23:33] Wildcards: Text Diffusion, RALM/Retro
* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)
* [00:28:26] Wildcard: Model Merging (mergekit)
* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)
* [00:33:18] OpenAI Sora and why everyone underestimated videogen
* [00:36:18] Does Sora have a World Model? Yann LeCun vs Jim Fan
* [00:42:33] Groq Math
* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars
* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take
* [00:58:39] F*** you, show me the prompt
* [01:02:43] Send us your suggestions pls
* [01:04:50] Latent Space Anniversary
* [01:04:50] Lindy.ai - Agent Platform
* [01:06:40] RWKV - Beyond Transformers
* [01:15:00] Pixee - Automated Security
* [01:19:30] Julius AI - Competing with Code Interpreter
* [01:25:03] Latent Space Listeners
* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club)
* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)
* [01:31:23] Listener 3 - RJ (Developers building Community & Content)
* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)

Transcript

[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co-host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.

[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.

[00:00:55] AI Charlie: Watch out and take care.

[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and we're back with a monthly recap with my co-host

[00:01:06] swyx: Swyx. The reception was very positive for the first one. I think people have requested this, and no surprise that they want to hear us more, opining on issues, and maybe dropping some alpha along the way. I'm not sure how much alpha we have to drop. This month, February, was a very, very heavy month; we also did not do one specifically for January, so I think we're just going to do a two-in-one, because we're recording this on the first of March.

[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the Four Wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state-of-the-art LLMs. Four, five,

[00:01:42] swyx: and now we have to do six, right? Yeah.

[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do

[00:01:52] swyx: one each.

[00:01:53] swyx: So the context to this stuff is: one, I noticed that just the test-of-time concept from NeurIPS, and just in general as a life philosophy, I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, everyone's excited about this thing yesterday, and now nobody's talking about it.

[00:02:13] swyx: So, yeah. It's more important, or a better use of time, to spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars.
Like, what are the themes that keep coming back? Because they are limited resources that everybody's fighting over.

[00:02:31] swyx: Whereas this one, I think the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization telling you, like, this one's more important than the other one, apart from, you know, Hacker News votes and Twitter likes and whatever.

[00:02:51] swyx: And obviously you want to get in a little bit earlier than something where, you know, the test of time is counted by sort of reference citations.

[00:02:59] The Five Research Directions

[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.

[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.

[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is that this is a sorted list, in the sense that I am not the guy saying that Mamba is, like, the future. And so maybe that's controversial.

[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)

[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter, in discussing the thesis that, you know, Code Interpreter is GPT-4.5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, "please think step by step."

[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where basically, instead of stuffing everything in a prompt, you do sort of multi-turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a LangChain is supposed to be.

[00:04:15] swyx: I do think that maybe SGLang from LMSys is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one-liner; it's very, very clean code. I highly recommend people look at that. I'm surprised it hasn't caught on more, but I think it will. It's weird that something like DSPy is more hyped than SGLang.

[00:04:36] swyx: Because it, you know, maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain-y and long inference type approaches. But basically, the fundamental insight is that there are only a few dimensions along which we can scale LLMs. So, let's say in, like, 2020, no, let's say in, like, 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.

[00:05:03] swyx: And we scaled that up to 175 billion parameters for GPT-3. And we did some work on scaling laws, which we also talked about in our Datasets 101 episode, where we were like, okay, we think the right number is 300 billion tokens to train 175 billion parameters. And then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, we think the compute-optimal ratio is 20 tokens per parameter. And now, of course, with Llama and the sort of super-Llama scaling laws, we have 200 times and often 2,000 times tokens to parameters.
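A quick back-of-envelope on those ratios. The figures below are the rough numbers quoted in the episode (GPT-3's 300B tokens, Llama 2 7B's roughly 2T; Mistral's 8T is rumor), so treat this as a sketch rather than official specs:

```python
# Chinchilla-style compute-optimal token counts vs. what labs actually train on.
# All numbers are the episode's rough public figures, not confirmed specs.

CHINCHILLA_TOKENS_PER_PARAM = 20  # DeepMind's compute-optimal ratio

models = {
    "GPT-3 175B": (175e9, 300e9),   # (params, training tokens)
    "Llama 2 7B": (7e9,   2.0e12),
    "Mistral 7B": (7e9,   8.0e12),  # rumored token count, unconfirmed
}

for name, (params, tokens) in models.items():
    optimal = CHINCHILLA_TOKENS_PER_PARAM * params
    print(f"{name}: {tokens / params:,.0f} tokens/param "
          f"({tokens / optimal:.1f}x the Chinchilla-optimal {optimal:.1e} tokens)")
```

Llama 2 7B works out to roughly 286 tokens per parameter, which is the "200 times" swyx is gesturing at.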
So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?

[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into, because there's a limit to how much you can scale some things. And I think people don't think about the ceilings of things. And so the remaining ceiling of inference is like: okay, we have scaled compute, we have scaled data, we have scaled parameters, model size, let's just say.

[00:06:20] swyx: Like, what else is left? What's the low-hanging fruit? And it's, like, blindingly obvious that the remaining low-hanging fruit is inference time. So, like, we have scaled training time. We can probably scale those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe a good run of a large model is three months.

[00:06:40] swyx: We can scale that to three years. But can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling; we're just, like, running out there. But in terms of the amount of time that we spend inferencing, everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on whether you're taking it token by token or, you know, in entire phrases.

[00:07:04] swyx: But we can scale that to hours, days, months of inference and see what we get. And I think that's really promising.

[00:07:11] Alessio: Yeah, we'll have Mike from BrightWave back on the podcast. But I tried their product, and their reports take about 10 minutes to generate instead of, like, just in real time. I think to me the most interesting thing about long inference is, like, you're shifting the cost to the customer depending on how much they care about the end result.

[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer, or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of, like, yeah, training this for three years, I'll still train it for three months, and then I'll tell you, you know, I'll teach you how to, like, make it run for 10 minutes to get a better result.

[00:07:52] Alessio: So you're kind of, like, parallelizing the improvement of the LLM.

[00:07:57] swyx: Oh yeah, you can even parallelize that, yeah, too.

[00:07:58] Alessio: So, and I think, you know, for me, especially with the work that I do, it's less about, you know, state of the art in the absolute; it's more about state of the art for my application, for my use case.

[00:08:09] Alessio: And I think we're getting to the point where, like, most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need it to get better. Like, how do I do long inference? You know, people are not really doing a lot of work in that space, so yeah, excited to see more.

[00:08:28] swyx: So then the last point I'll mention here is a paper I also mentioned. So all these directions are kind of guided by what happened in January; that was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it.
Which I came to regret come February 15th. But in January, also, you know, there was the AlphaGeometry paper, which I kind of put in this sort of long inference bucket, because it solves, like, you know, more-than-100-step math olympiad geometry problems at a human gold medalist level, and that also involves planning, right?

[00:08:59] swyx: So, like, if you want to scale inference, you can't scale it blindly, because just autoregressive token-by-token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing, and what everyone is doing, including maybe what we think Q* might be, is some form of search and planning.

[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely.

[00:09:22] Alessio: How do you think about plans that work, and getting them shared? You know, like, I feel like if you're planning a task, the models are stochastic, so everybody initially gets different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to, like, store these plans and then reuse them, for most people.

[00:09:44] Alessio: You know, like, I'm curious if there's going to be some paper or, like, some work there on making it better, because, yeah, we don't really have...

[00:09:52] swyx: This is your, your pet topic of NPM for...

[00:09:54] Alessio: Yeah, yeah, NPM, exactly. You need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.

[00:10:02] Alessio: You know, I think, I mean, obviously the Voyager paper is, like, the most basic example, where, like, now their artifact is, like, the best plan for getting a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful tasks.

[00:10:19] swyx: For plans... I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations-heavy business, and I could definitely propose some version of that. And it's just, you know, hard to execute, or expensive to execute.

[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some customization, you know. And I think that will probably be the main hurdle for any sort of library or package manager for planning. But there should be a meta-plan of how to plan.

[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people, when they have sort of these meta-prompting strategies, are saying: I'm not prescribing you the prompt, I'm just saying that here are, like, the fill-in-the-blanks, or, like, the Mad Libs, of how to prompt. First you have the roleplay, then you have the intention, then you have, like, the "do something", then you have the "don't do something", and then you have the "my grandmother is dying, please do this."

[00:11:19] swyx: So the meta-plan you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf.
I don't think they're very successful, because people like to write their own.

[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)

[00:11:37] Alessio: Yeah, that's a good segue into the next one, which is synthetic data.

[00:11:41] swyx: Synthetic data is so hot. Yeah, and, you know, I feel like I should do one of these memes where it's like, oh, I used to call it, you know, RLAIF, and now I call it synthetic data, and then people are interested.

[00:11:54] swyx: But there have got to be older versions of what synthetic data really is, because I'm sure, you know, if you've been in this field long enough, there are just different buzzwords that the industry converges on. Anyway, the insight that I think is relatively new, why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves, with no teacher LLM.

[00:12:22] swyx: For all of 2023, when people said synthetic data, they really kind of meant: generate a whole bunch of data from GPT-4 and then train an open source model on it. Hello to our friends at Nous Research; that's what Nous Hermes is. They're very, very open about that. I think they have said that they're trying to migrate away from that.

[00:12:40] swyx: But it is explicitly against OpenAI's Terms of Service. Everyone knows this, you know, especially once ByteDance got banned for doing exactly that. So synthetic data that is not a form of model distillation is the hot thing right now: that you can bootstrap better LLM performance from the same LLM, which is very interesting.

[00:13:03] swyx: A variant of this is RLAIF, where you have a sort of constitutional model, or, you know, some kind of judge model that is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.

[00:13:23] swyx: A lot of people... I think we talked about this with Vipul in the Together episode, where I think he commented that you just have to have a good world model, or a good sort of inductive bias, or whatever that term of art is. And that is strongest in math and science, math and code, where you can verify what's right and what's wrong.

[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing: in a domain where you can arbitrarily generate a whole bunch of stuff and verify whether it's correct, that's correct synthetic data to train on. Once you get into more sort of fuzzy topics, then it's a bit less clear. So I think the papers that drove this understanding... there are two big ones and then one smaller one. One was WRAP, Rephrasing the Web, from Apple, where they basically rephrased all of the C4 dataset with Mistral and trained on that instead of C4.

[00:14:23] swyx: And so the new C4 trained much faster and cheaper than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing datasets and just do that, because that seems like a pure win. Obviously we have to study, like, what the trade-offs are.
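The WRAP-style rephrasing pass swyx describes is simple to sketch. The prompt wording and the `rephrase_model` stub below are illustrative, not the paper's actual prompt or setup; wire in your own inference call:

```python
# Sketch of a WRAP-style "rephrase the web" pipeline: run every raw document
# through a rephrasing model, then pretrain on the outputs.

REPHRASE_PROMPT = (
    "Rewrite the following web text as clear, high-quality prose, "
    "preserving every fact:\n\n{doc}"
)

def rephrase_model(prompt: str) -> str:
    raise NotImplementedError("plug in a Mistral-class model endpoint here")

def rephrase_corpus(raw_docs):
    """Yield rephrased documents; the paper trains on these mixed with some raw text."""
    for doc in raw_docs:
        yield rephrase_model(REPHRASE_PROMPT.format(doc=doc))
```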
[00:14:42] swyx: I imagine there are trade-offs. So I was just thinking about this last night: if you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore, once the model that's trained on synthetic data encounters its first typo, it'll be like, what is this?

[00:15:01] swyx: I've never seen this before. So it has no association or correction as to, like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think the Apple people explored that.

[00:15:15] Alessio: Yeah, isn't that the whole mode collapse thing, if we do more and more of this at the end of the day?

[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think there's a Meta paper on self-rewarding language models that everyone is very interested in. Another paper was SPIN. These are all things we covered in the Latent Space Paper Club.

[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's much else. So then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have: like, everyone is... OpenAI is paying Reddit 60 million dollars a year for their user-generated data.

[00:15:56] Alessio: Google, right? Not OpenAI.

[00:15:59] swyx: Is it Google? I don't know.

[00:16:00] Alessio: Well, somebody's paying them 60 million, that's for sure.

[00:16:04] swyx: Yes, that is, yeah. And then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting. Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.

[00:16:21] Alessio: Not enough to get the data, I guess.

[00:16:22] swyx: So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay, yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's not a doubt that it works; the doubt is what the ceiling is, which is the mode collapse thing.

[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by, like, I don't know, 30 to 50 percent. Good, but not game-changing.

[00:16:51] Alessio: And most of the synthetic data stuff is reinforcement learning on a pre-trained model. People are not really doing pre-training on fully synthetic data at large enough scale.

[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre-training-scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard. So, for all of these, like, smaller directions...

[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)

[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is: okay, let's say you've scraped all the data on the internet that you think is useful.

[00:17:25] swyx: It seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data, maybe, but maybe that only gets you to, like, 30, 40 trillion.
Like, where is the extra alpha?

[00:17:43] swyx: And maybe the extra alpha is just: train more on the same tokens. Which is exactly what OLMo did. Like, Nathan Lambert at AI2; just after he did the interview with us, they released OLMo. So it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on all data.

[00:18:00] swyx: And the data ablations paper that I covered from NeurIPS says that, you know, you don't really start to tap out of, like, the alpha, or the sort of improved loss that you get from data, all the way until four epochs. And so I'm just like, okay, why do we all agree that one epoch is all you need?

[00:18:17] swyx: It seems to be a trend. It seems that we think that memorization is very good, or too good. But then we're also finding that, for improvements in results that we really like, we're fine with overtraining on things intentionally. So I think that's an interesting direction that I don't see people exploring enough.

[00:18:36] swyx: And the more I see papers coming out stretching beyond the one-epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute budget.

[00:18:46] Alessio: Yeah, I think that's the biggest thing, right?

[00:18:51] swyx: Like, that's not a valid reason. That's not science.

[00:18:54] Alessio: I wonder if, you know, Meta is going to do it. I heard with Llama 3 they want to do a 100 billion parameter model. I don't think you can train that for too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.

[00:19:14] swyx: Yeah, and so the update that we got on Llama 3 so far is apparently that, because of the Gemini news that we'll talk about later, they're pushing back the release.

[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing. Politics testing.

[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.

[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)

[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.

[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixer. I feel like Together is, like, the strong Stanford Hazy Research partnership, because Chris Ré is one of the co-founders. So I feel like they're going to be the ones that have one of the state-of-the-art models, alongside maybe RWKV.

[00:20:08] Alessio: I haven't seen as many independent people working on this thing. Like, Monarch Mixer, yeah, Mamba, Hyena; all of these are Together-related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there, like, working on all of this.

[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?

[00:20:28] swyx: I mean, I think it's useful, interesting. But at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes, like, yeah, we don't need it. Yeah.

[00:20:44] Alessio: No, that's the risk. So, yeah.
I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as, like, in the alt architectures, just because of Sora.

[00:20:55] swyx: One thing, yeah. So, you know, this came from the Jan recap, where diffusion transformers were not really a discussion; and then, obviously, they blew up in February. Yeah. I don't think it's a mixed architecture in the same way that StripedHyena is mixed; there are just different layers taking different approaches.

[00:21:13] swyx: Also, I think another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that. I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale. It seems like diffusion transformers are going to be good for anything generative, you know, multimodal.

[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wildcard for this category. Yeah, I mean, I think I still hold out hope for, let's just call it, sub-quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept. People always say, oh, transformers don't scale, because attention is quadratic in the sequence length.

[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you jump up in terms of the context size in GPT-4 from, like, you know, 8k to, like, 32k, you don't also get, like, a 16-times increase in your cost.

[00:22:23] swyx: And this is also why you don't get, like, a million-times increase in your latency when you throw a million tokens into Gemini. People have figured out tricks around it, or it's just not that significant as a part of the overall compute. So there are a lot of challenges to this thing working.

[00:22:43] swyx: It's really interesting how hyped people are about this, versus... I don't know if it's exactly gonna work. And then there's also this idea of retention over long context. Like, even though you have context utilization, the amount you can remember is interesting.

[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV, because they're kind of, like, RNN-ish in the sense that they have, like, a limited hidden memory, so they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting, because Gemini magically has fixed all these problems, with perfect haystack recall and reasonable latency and cost.

[00:23:29] Wildcards: Text Diffusion, RALM/Retro

[00:23:29] swyx: So that's super interesting. So, the wildcard I put in here, if you want to go to that: I put two, actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for text generation, reasoning, plan generation, if we can get diffusion to work for text.
And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval-augmented language models, where it kind of puts RAG inside of the language model instead of outside.

[00:24:02] Alessio: Yeah, there's a paper called RETRO that covers some of this. I think that's an interesting thing. I think what they need to figure out is, like, how do you keep the RAG piece constantly up to date? You know, I feel like with the models, you put all this work into pre-training them, but then at least you have a fixed artifact.

[00:24:22] Alessio: These architectures are, like, constant work needs to be done on them, and they can drift even just based on the RAG data instead of the model itself. Yeah.

[00:24:30] swyx: I was in a panel with one of the investors in Contextual, and the way that guy pitched it, I didn't agree with. He was like, this will solve hallucination.

[00:24:38] Alessio: That's what everybody says. "We solve hallucination."

[00:24:40] swyx: I'm like, no, you reduce it. It cannot...

[00:24:44] Alessio: If you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures; then we got mixture of experts. I think we covered that a lot of times.

[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)

[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?

[00:25:00] swyx: DeepSeekMoE, which was released in January. Everyone who is interested in MoEs should read that paper, because it's significant for two reasons. One... three reasons. One, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT-4 and for Mixtral; you know, that seems to be the favorite architecture. But these guys pushed it to 64 experts, each of them smaller.

[00:25:26] swyx: But then they also had the second idea, which is that they had one to two always-on experts for common knowledge. And that's, like, a very compelling concept: that you would not route to all the experts all the time and make them, you know, switch on everything. You would have some always-on experts.

[00:25:41] swyx: I think that's interesting on both the inference side and the training side, for memory retention. And yeah, the results that they published, which actually excluded Mixtral, which is interesting, showed a significant performance jump versus all the other sort of open source models at the same parameter count.

[00:26:01] swyx: So this may be a better way to do MoEs that is about to get picked up. And that is interesting for the third reason, which is: this is the first time a new idea from China has infiltrated the West. It's usually the other way around. I probably overspoke there; there are probably lots more ideas that I'm not aware of.

[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeekMoE, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund, is somehow, you know, doing groundbreaking research on MoEs. So I classified this as medium potential, because I think that it is sort of a one-off benefit.

[00:26:37] swyx: You can add it to any base model to, like, make the MoE version of it; you get a bump, and then that's it.
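For readers who want the shape of that idea in code, here is a toy sketch of fine-grained routing with always-on shared experts. The dimensions, expert counts, and top-k value are illustrative, not DeepSeek's actual configuration:

```python
# Toy DeepSeekMoE-style layer: many small routed experts, plus a couple of
# "shared" experts that every token always passes through.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=64, n_shared=2, top_k=6):
        super().__init__()
        make_expert = lambda: nn.Sequential(
            nn.Linear(d_model, 2 * d_model), nn.GELU(), nn.Linear(2 * d_model, d_model)
        )
        self.experts = nn.ModuleList(make_expert() for _ in range(n_experts))  # routed
        self.shared = nn.ModuleList(make_expert() for _ in range(n_shared))    # always on
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                             # x: (tokens, d_model)
        out = sum(e(x) for e in self.shared)          # shared experts see every token
        gates = F.softmax(self.router(x), dim=-1)     # (tokens, n_experts)
        top_w, top_i = gates.topk(self.top_k, dim=-1)
        top_w = top_w / top_w.sum(-1, keepdim=True)   # renormalize the chosen gates
        for j, expert in enumerate(self.experts):     # dispatch tokens per expert
            hit = (top_i == j)                        # (tokens, top_k) bool
            if hit.any():
                rows = hit.any(-1)                    # which tokens use expert j
                w = (top_w * hit).sum(-1)[rows]       # their gate weights
                out[rows] = out[rows] + w.unsqueeze(-1) * expert(x[rows])
        return out

# y = FineGrainedMoE()(torch.randn(4, 512))  # -> (4, 512)
```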
So, yeah.

[00:26:45] Alessio: I saw SambaNova, which is, like, another inference company. They released this MoE model called Samba-1, which is, like, 1 trillion parameters. But it's actually a MoE of open source models. So it's like they just clustered them all together. So I think people... sometimes I think MoE is, like, you just train a bunch of smaller models and put them together. But there are also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder, and putting them all together.

[00:27:15] Alessio: And then you have a MoE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state-of-the-art, you know, CLIP, state-of-the-art text generation, and then you have a MoE architecture that brings them all together.

[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what...?

[00:27:35] Alessio: Yeah, that's what they said. Yeah, yeah.

[00:27:40] swyx: Okay. I was also, like, scratching my head. And they did not use the word adapter. No. Because usually what people mean when they say, oh, I add CLIP to a language model, is an adapter. Which is what LLaVA did.

[00:27:48] swyx: Let me look up the announcement again.

[00:27:51] swyx: Stable Diffusion. That's what they do.

[00:27:54] Alessio: Yeah, it says among the models that are part of Samba-1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MoE.

[00:28:05] swyx: Okay, so a routing layer, and then not jointly trained as much as a normal MoE would be. Which is okay.

[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article. But I'm interested to see how it works.

[00:28:20] Wildcard: Model Merging (mergekit)

[00:28:20] swyx: Yeah, so the wildcard for this section, the MoE section, is model merges, which have also come up as a very interesting phenomenon. The last time I talked to Jeremy Howard, at the Ollama meetup, we called it model grafting, or model stacking.

[00:28:35] swyx: But I think the term that people are liking these days is model merging. There are all different variations of merging, different merge types: some of them are stacking, some of them are grafting. And so, like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which has their specific pluses and minuses, and we will merge them together in the hope that the sum of the parts will be better than the individual parts.

[00:28:58] swyx: And it seems like it's working. I don't really understand why it works, apart from, like, I think it's a form of regularization: that if you merge weights together with, like, a smart strategy, you get less overfitting and more generalization, which is good for benchmarks, if you're honest about your benchmarks.

[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is.
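The simplest merge type, a weighted average of two same-architecture checkpoints, really is just arithmetic on weights. A minimal sketch; toolkits like mergekit layer smarter strategies (SLERP, TIES, DARE) on top of this:

```python
# Linear merge of two checkpoints with identical architectures.
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B for every matching parameter tensor."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# merged = linear_merge(torch.load("model_a.pt"), torch.load("model_b.pt"), alpha=0.6)
# torch.save(merged, "merged.pt")
```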
We talked about this on the ChinaTalk podcast, the guest podcast that we did with ChinaTalk. And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing, like, simple math, which is really interesting for the GPU-poors.

[00:29:42] Alessio: There's a lot of them.

[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)

[00:29:44] Alessio: And just to wrap these up: online LLMs?

[00:29:48] swyx: Yeah, I think I had to feature this, because one of the top news items of January was that Gemini Pro beat GPT-4 Turbo on LMSYS for the number two slot, behind GPT-4. And everyone was very surprised. Like, how does Gemini do that?

[00:30:06] swyx: Surprise, surprise: they added Google Search, mm-hmm, to the results. So it became a, quote-unquote, online LLM, and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table-stakes features after you pre-train something.

[00:30:21] swyx: So after you pre-train something, you should have the chat-tuned version of it, or the instruct-tuned version of it, however you choose to call it. You should have the JSON and function-calling version of it; structured output, the term that you don't like. You should have the online version of it. These are all, like, table-stakes variants that you should do when you offer a base LLM, or you train a base LLM.

[00:30:44] swyx: And I think online is just, like... it's important. I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to serve that search need. And it's kind of like, they're just necessary parts of a system: when you have RAG for internal knowledge, then you have, you know, online search for external knowledge, like things that you don't know yet. Mm-hmm.

[00:31:06] swyx: And it seems like it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I think it has some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has had online LLMs for three months now, and it doesn't perform great, mm-hmm, on LMSYS; it's, like, number 30 or something. So it's like, okay, you know, it helps, but it doesn't give you a giant boost.

[00:31:34] Alessio: I feel like a lot of the stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to, like, state of the art, right? It's like, state of the art for whom, and for what.

[00:31:45] Alessio: I think online LLMs are going to be state of the art for, you know, news-related activity that you need to do. Like, you know, social media, right? It's like, you want to have all the latest stuff. But coding, science...

[00:32:01] swyx: Yeah, but I think sometimes you don't know what is news, what is news-affecting.

[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making, and that might affect your results. Like, what if just being connected online means that you get to invalidate your knowledge? And when you're just using an offline LLM, it's never invalidated.

[00:32:28] Alessio: I agree. But I think, going back to your point of, like, standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe an AI research direction, you know, and it's like, all the recent news are about this thing. So the LLM, like, focuses on answering that, brings it up, you know, these things.
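Mechanically, an "online LLM" in the sense used here is just a search step bolted onto generation. A minimal sketch, with `web_search` and `ask_model` as stubs for whatever search API and model you use:

```python
# Search, stuff the top results into the prompt, then generate.

def web_search(query: str, k: int = 5) -> list[str]:
    raise NotImplementedError("e.g. a search API such as Exa or Bing")

def ask_model(prompt: str) -> str:
    raise NotImplementedError("your chat completion call")

def online_answer(question: str) -> str:
    snippets = web_search(question)
    context = "\n\n".join(snippets)
    return ask_model(
        f"Answer using the sources below; note anything they contradict.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```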
[00:32:50] swyx: Yeah. So yeah, I think it's interesting, but I don't know if I can bet heavily on this.

[00:32:56] Alessio: Cool. Was there one that you forgot to put, or, like, a new direction?

[00:33:01] swyx: Yeah, so this brings us into sort of February-ish.

[00:33:05] OpenAI Sora and why everyone underestimated videogen

[00:33:05] swyx: So, like, I published this, and then February 15th came with Sora. And so, like, the one thing I did not mention here was anything about multimodality.

[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle with it. And my cop-out is that I focused this piece, or this research-directions piece, on LLMs, because LLMs are the source of, like, AGI, quote-unquote AGI. Everything else is kind of, like, you know, related to that. Like, generative... just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.

[00:33:49] swyx: And so I was just kind of trying to focus on what is going to get us, like, superhuman reasoning that we can rely on to build agents that automate our lives, and blah, blah, blah, you know, give us this utopian future. But I do think that everybody underestimated the sheer importance and cultural human impact of Sora, and, you know, really actually good text-to-video. Yeah. Yeah.

[00:34:14] Alessio: And I saw Jim Fan had a very good tweet about why it's so impressive. And I think when somebody leading the embodied research at NVIDIA says that something is impressive, you should probably listen. So yeah, there's basically... I think you mentioned, like, impacting the world, you know, that we live in.

[00:34:33] Alessio: I think that's kind of, like, the key, right? It's like, the LLMs don't have a world model, and Yann LeCun can come on the podcast and talk all about what he thinks of that. But I think Sora was, like, the first time where people went, oh, okay, you're not statically putting pixels of water on the screen, which you can kind of, like, you know, project without understanding the physics of it.

[00:34:57] Alessio: Now it's like, you have to understand how the water splashes when you have things. And even if you just learned it by watching video, and not by actually studying the physics, you still know it, you know. So I think that's, like, a direction that, yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating... I think it always starts with generating, right?

[00:35:19] Alessio: But the interesting part is understanding it. You know, it's like... there's the video of, like, the ship in the water that they generated with Sora. Like, if you gave it the video back, and now it could tell you why the ship is, like, too rocky, or it could tell you why the ship is sinking, then that's, like, you know, AGI for, like, all your rig deployments and all this stuff, you know. So... but there's none of that yet, so.

[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.

[00:35:49] swyx: Yeah, who knows, who knows. I'm talking with them about Dev Day as well.
So I would say, like, the phrasing that Jim used, which resonated with me: he kind of called it a data-driven world model. I somewhat agree with that.

[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan

[00:36:04] swyx: I am more on the Yann LeCun side than I am on Jim's side, in the sense that I think that is the vision, or the hope, that these things can build world models. But, you know, clearly, even at the current Sora size, they don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear, and chairs will appear and disappear.

[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models, in the sense of, you know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails: that, like, does not match reality, and therefore fails to generalize well.

[00:36:50] swyx: And, like, what scale of data do we need in order to learn world models from video? A lot. Yeah. So I am cautious about taking this interpretation too literally. Obviously, you know, I get what he's going for, and he's obviously partially right. Obviously, transformers and, you know, these sorts of neural networks are universal function approximators; theoretically they could figure out world models. It's just, like, how good are they, and how tolerant are we of hallucinations? We're not very tolerant. Like, yeah. So it's gonna bias us toward creating, like, very convincing things, but then not create, like, the useful world models that we want.

[00:37:37] swyx: At the same time, what you just said, I think, made me reflect a little bit. We just got done saying how important synthetic data is, mm-hmm, for training LLMs. And so, like, if this is a way of generating synthetic video data for improving our video understanding, then sure, by all means. Which we actually know; like, GPT-4 Vision and DALL-E were, kind of, co-trained together.

[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.

[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like: imagine you go back in time, you have Sora, and Newton hadn't figured out gravity yet. Would Sora help you figure it out?

[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can, like, pick up things. Like, humans have a lot of intuition, but if you ask the average person, like, the physics of a fluid in a boat, they wouldn't be able to tell you the physics, but they can, like, observe it. But humans can only observe this much, you know, versus, like, now you have these models to observe everything, and then they generalize these things, and maybe we can learn new things through the generalization that they pick up.

[00:38:55] swyx: But again, it might be more observant than us in some respects. In some ways, we can scale it up a lot more than the number of physicists that we had available at Newton's time. So, like, yeah, it's absolutely possible that this can discover new science.
I think we have a lot of work to do to formalize the science.

[00:39:11] swyx: And then I think the last part is, you know, how much do we cheat by generating data from Unreal Engine 5? Mm-hmm. Which is what a lot of people are speculating, with very, very limited evidence, that OpenAI did. The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos, and noticing that they all adopt Unreal Engine defaults of, like, walking speed and, like, character creation choice.

[00:39:37] swyx: And I was like, okay, that's actually pretty convincing that they actually used Unreal Engine to bootstrap some synthetic data for this training set. Yeah.

[00:39:52] Alessio: Could very well be.

[00:39:54] swyx: Because then you get the labels and the training side by side.

[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO, coming out of Alibaba, which is also a sort of video generation and spacetime transformer that also probably involves a lot of synthetic data as well. And so, like, this is of a kind, in the sense of, like, oh, you know, really good generative video is here, and it is not just, like, the one-, two-second clips that we saw from, like, other people, you know, Pika and all the others. Runway... Cristóbal Valenzuela from Runway was like, "game on." Which, like, okay, but let's see your response, because we've heard a lot about Gen-1 and 2, but, like, it's nothing on this level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.

[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in, like, spelling out: here are the individual industries that this can impact.

[00:41:00] swyx: And anyone who's interested in generative video should look at that. But also be mindful that, probably, when OpenAI releases a Sora API, right, the ways you can interact with it are going to be very limited, just like the ways you can interact with DALL-E are very limited, and someone is gonna have to make an open Sora, mm-hmm, for you to create ComfyUI pipelines.

[00:41:24] Alessio: The Stability folks said they wanna build an open Sora competitor. But yeah, Stability... their demo video was, like, so underwhelming. It was just, like, two people sitting on the beach.

[00:41:34] swyx: Standing. Well, they don't have it yet, right? Yeah, yeah.

[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I think what is confusing a lot of people about Stability is, like, they're pushing a lot of things: Stable Code, Stable LM, and Stable Video Diffusion. But, like, how much money do they have left? How many people do they have left?

[00:41:51] swyx: Yeah. Emad spent two hours with me, reassuring me things are great. And I'm like, I do believe that they have really, really quality people. But it's just, like, I also have a lot of very smart people on the other side telling me, like, hey man, you know, don't put too much faith in this thing.

[00:42:11] swyx: So I don't know who to believe. Yeah.

[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can...
Yeah, Groq.

[00:42:19] Groq Math

[00:42:19] Alessio: We can...

[00:42:19] swyx: ...do a bit of Groq prep. We're about to go talk to Dylan Patel. Maybe, maybe it's the audio in here. I don't know. It depends what we get up to later. What do you, as an investor, think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting?

[00:42:33] Alessio: So, Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It was actually one of his, like, 20 percent projects. It's like, he was just on the side, dooby-doo, created the TPU.

[00:42:46] Alessio: But yeah, basically, Groq: they had this demo that went viral, where they were running Mixtral at, like, 500 tokens a second, which is, like, the fastest of anything that you have out there. The question, you know... the memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what Groq has built, as far as the hardware goes.

[00:43:11] Alessio: We're gonna put some of the notes from Dylan in here, but basically the cost of the Groq system is, like, 30 times the cost of the H100 equivalent. So...

[00:43:23] swyx: So let me... I put some numbers in, because me and Dylan were, like, I think, the two people who actually tried to do Groq math. Spreadsheet duels.

[00:43:30] swyx: So, okay, oh boy. So, the equivalent H100 system for Llama 2 is $300,000, for a system of 8 cards. And for Groq it's $2.3 million, because you have to buy 576 Groq cards. So yeah, that just gives people an idea. So, like, if you depreciate both over a five-year lifespan, per year you're depreciating $460K for Groq, and $60K a year for H100.

[00:43:59] swyx: So, like, Groqs are just way more expensive per model that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to cover that.

[00:44:08] Alessio: I think one of the promises of Groq is, like, super high parallel inference on the same thing. So you're basically saying, okay, I'm putting in this upfront investment on the hardware, but then I get much better scaling once I have it installed.

[00:44:24] Alessio: I think the big question is how much you can sustain the parallelism. You know, like, if you're going to get a 100 percent utilization rate at all times on Groq, it's just much better, you know, because, like, at the end of the day, the tokens-per-second cost that you're getting is better than with the H100s. But if you get to, like, a 50 percent utilization rate, you will be much better off running on NVIDIA.

[00:44:49] Alessio: And if you look at most companies out there, who really gets a 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Dubai... in Qatar. He just gave a talk there yesterday that I haven't listened to yet.

[00:45:09] Alessio: I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but...

[00:45:16] swyx: Hopefully the Groq social media person is just very friendly. They, yeah. Hopefully...

[00:45:20] Alessio: ...we can get them. Yeah, we're gonna get him.
[00:45:22] swyx: We just call him out. And so basically the key question is, like, how sustainable is this, and how much of a loss leader is it. The entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens, as far as Mixtral or Llama 2 go. This matches DeepInfra, and, you know, I think that's about it in terms of being that low.

[00:45:47] swyx: And we think the break-even for H100s is 50 cents, at a normal utilization rate. To make this work... so in my spreadsheet I made this work... you have to have, like, a parallelism of 500 requests, all simultaneously, and you have model bandwidth utilization of 80 percent.

[00:46:06] swyx: Which is way high. I just gave them high marks for everything. Groq has two fundamental tech innovations that they hang their hats on in terms of, like, why we are better than everyone, even though, like, it remains to be independently replicated. One, you know, they have this sort of entire-model-on-the-chip idea, which is like: okay, get rid of HBM.

[00:46:30] swyx: And, like, put everything in SRAM. Like, okay, fine, but then you need a lot of cards, and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, you just save on that time, and that's why they're faster. So a lot of people buy that as, like, that's the reason that you're faster.

[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers, that they also attribute their higher utilization to. So I gave them 80 percent for that. And so that all works out to, like, okay, base costs, I think you can get down to maybe, like, 20-something cents per million tokens.

[00:47:04] swyx: And therefore you actually are fine, if you have that kind of utilization. But it's like, I have to make a lot of favorable assumptions for this to work.

[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.

[00:47:16] swyx: So he was, like, completely opposite of me. He's like, they're just burning money. Which is great.
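For anyone who wants to replay the spreadsheet duel, here's the arithmetic with the episode's quoted estimates. Treating each of the 500 parallel streams as running at the demo's 500 tok/s, and counting only hardware depreciation, is a simplification; swyx's fuller estimate with other costs lands around twenty-something cents per million tokens:

```python
# Groq math, reproduced from the episode's quoted estimates (not vendor numbers).
# Depreciation only: no power, hosting, or staff, which is why the result comes
# out below the ~$0.20-something all-in figure discussed above.

h100_system = 300_000      # 8x H100 box, Llama-2/Mixtral-class serving
groq_system = 2_300_000    # 576 Groq cards for the equivalent model
years = 5

h100_per_year = h100_system / years   # $60K/year depreciation
groq_per_year = groq_system / years   # $460K/year depreciation
print(f"depreciation/yr: H100 ${h100_per_year:,.0f} vs Groq ${groq_per_year:,.0f}")

# Assume 500 tok/s per stream, 500 parallel streams, 80% bandwidth utilization:
tokens_per_year = 500 * 500 * 0.80 * 3600 * 24 * 365
cost_per_million = groq_per_year / (tokens_per_year / 1e6)
print(f"Groq hardware cost: ${cost_per_million:.3f} per million tokens")  # ~$0.07
```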
They released this with the headline feature of a 1 million token context window that is multimodal, which means that you can put all sorts of video and audio and PDFs natively in there alongside text. And it's at least 10 times longer than anything that OpenAI offers, which is interesting.[00:48:20] swyx: So it's great for prototyping, and it has prompted interesting discussions on whether it kills RAG.[00:48:25] Alessio: Yeah. I mean, we always talk about how long context is good, but you're getting charged per token, so people love for you to use more tokens in the context, and RAG is better economics. But I think it all comes down to how the price curves change, right?[00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, because you have more data sources, more things you want to put in there. The token costs should go down over time if the model stays fixed. If people are happy with the model today, in two years, three years, it's just gonna cost a lot less.[00:49:02] Alessio: So then it's like, why would I use RAG and go through all of that? It's interesting. I think RAG is better cutting-edge economics for LLMs. I think large context will be better long-tail economics when you factor in the build cost of managing a RAG pipeline. But yeah, the recall was the most interesting thing, because we've seen the needle-in-the-haystack things in the past, but apparently they have 100 percent recall on anything across the context window.[00:49:28] Alessio: At least, they say. Nobody has used it. No, people[00:49:30] swyx: have. Yeah, so as far as this needle in a haystack thing, for people who aren't following as closely as us: someone (I forget his name now) created this needle in a haystack problem where you feed in a whole bunch of generated junk, not junk, but just generated data, and ask the model to specifically retrieve something in that data, like one line in a hundred thousand lines where it has a specific fact, and if you get it, you're good.[00:49:57] swyx: And then he moves the needle around: does your ability to retrieve it vary if I put it at the start, versus put it in the middle, versus put it at the end? And then you generate this really nice chart that shows the recallability of a model. And he did that for GPT and Anthropic and showed that Anthropic did really, really poorly.[00:50:15] swyx: And then Anthropic came back and said it was a skill issue, just add these four magic words, and then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was: yeah, we reproduced their haystack test for Gemini, and it's good across all languages,[00:50:30] swyx: all of the one million token window. Which is very interesting, because usually for typical context extension methods like RoPE or YaRN or ALiBi, it's lossy by design. Usually for conversations that's fine, because we are lossy when we talk to people, but for superhuman intelligence, perfect memory across very, very long context[00:50:51] swyx: is very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this.
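(The eval being described is easy to reproduce. Here is a minimal sketch of a needle-in-a-haystack harness; `complete` stands in for whatever hypothetical model client you are testing, and the filler text, needle, and scoring rule are all illustrative choices, not the original benchmark's exact setup.)

```python
def build_haystack(n_lines: int, needle: str, depth: float) -> str:
    """Generated filler with one needle inserted at a relative depth (0=start, 1=end)."""
    lines = [f"Log entry {i}: nothing notable happened today." for i in range(n_lines)]
    lines.insert(int(depth * n_lines), needle)
    return "\n".join(lines)

def needle_eval(complete, needle: str, question: str, answer: str,
                n_lines: int = 100_000,
                depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> dict:
    """Ask the model to retrieve the needle at several positions; True = recalled."""
    results = {}
    for depth in depths:
        prompt = build_haystack(n_lines, needle, depth) + f"\n\nQuestion: {question}"
        results[depth] = answer.lower() in complete(prompt).lower()
    return results

# Hypothetical usage:
# results = needle_eval(my_client.complete,
#                       needle="The magic number for project Falcon is 42417.",
#                       question="What is the magic number for project Falcon?",
#                       answer="42417")
```

Sweeping `depths` (and context lengths) and plotting pass/fail as a grid gives exactly the kind of chart described here.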
So what you do is you upload, let's say, all of Harry Potter, and you change one fact in one sentence somewhere in there, and you ask it to pick it up, and it does. So this is legit.[00:51:08] swyx: We don't super know how, because yes, it's slow to inference, but it's not slow enough that it's running five different systems in the background without telling you. Right. So it's something interesting that they haven't fully disclosed yet. The open source community has centered on this Ring Attention paper, which was created by your friend Matei Zaharia and a couple of other people.[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand why calculating the feedforward and attention in a blockwise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context.[00:51:59] swyx: They said it was good for like 10 to 100 million tokens, which is just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now. Or, we still care as much about RAG, but now it's not important in prototyping.[00:52:21] swyx: And then, for the data war, I guess this is just part of the overall training dataset, but Google made a $60 million deal with Reddit, and presumably they have deals with other companies. For the multi-modality war, we can talk about the image generation crisis, and the fact that Gemini also has image generation, which we'll talk about in the next section.[00:52:42] swyx: But it also has video understanding, and I think the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf, and it was able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.[00:53:04] swyx: It actually ties into the conversation that we had with David Luan from Adept, in the sense of: okay, what if video was the main modality instead of text as the input? What if everything was video in? Because that's how we work. Our eyes don't actually read; our brains don't get inputs as characters.[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is doing, which is driving by vision model instead of driving by raw text understanding of the DOM. And in that episode, which we haven't released, I made the analogy to self-driving by lidar versus self-driving by camera.[00:53:52] swyx: Right? I think what Gemini, and any other super-long-context model that is multimodal, unlocks is: what if you just drive everything by video? Which is[00:54:03] Alessio: cool. Yeah, and that's Joseph from Roboflow: anything that can be seen can be programmable with these models.[00:54:12] Alessio: You mean[00:54:12] swyx: the computer vision guy is bullish on computer vision?[00:54:18] Alessio: It's like the RAG people. The RAG people are bullish on RAG and not long context. I'm very surprised. The fine-tuning people love fine-tuning instead of few-shot. Yeah, that's that.
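(For readers wondering what "calculating the feedforward and attention in a blockwise fashion" means: the core trick is that exact attention can be computed one key/value block at a time with a running, online softmax, so the full score matrix never has to be materialized; Ring Attention then passes those K/V blocks around a ring of devices. Below is a single-device illustration of the blockwise part only, a sketch rather than the paper's distributed implementation.)

```python
import numpy as np

def blockwise_attention(q, k, v, block_size=512):
    """Exact softmax attention, computed one K/V block at a time.

    Numerically equivalent to softmax(q @ k.T / sqrt(d)) @ v, but only one
    block of scores exists in memory at once. Ring Attention distributes
    these K/V blocks across devices instead of looping locally.
    """
    d = q.shape[-1]
    m = np.full(q.shape[0], -np.inf)   # running row-wise max of the scores
    l = np.zeros(q.shape[0])           # running softmax denominator
    acc = np.zeros_like(q)             # running numerator (weighted values)
    for start in range(0, k.shape[0], block_size):
        kb, vb = k[start:start + block_size], v[start:start + block_size]
        s = q @ kb.T / np.sqrt(d)                  # scores for this block only
        m_new = np.maximum(m, s.max(axis=-1))
        corr = np.exp(m - m_new)                   # rescale earlier partial sums
        p = np.exp(s - m_new[:, None])
        l = l * corr + p.sum(axis=-1)
        acc = acc * corr[:, None] + p @ vb
        m = m_new
    return acc / l[:, None]
```

Why this helps with context length is clear from the loop: memory scales with the block, not the sequence. Why it should also improve recall is, as swyx says, not obvious; the computation is exact, so recall quality presumably comes from training at that length rather than from the kernel itself.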
Yeah, the Ring Attention thing, and how they did it, we don't know. And then they released the Gemma models, which are like a 2 billion and 7 billion open[00:54:41] Alessio: models, which people said are not good, based on my Twitter experience; they're the GPU-poor crumbs. It's like: hey, we did all this work for us, because we're GPU-rich, and we're just going to run this whole thing. And

Choosing Happy
Episode 58 - Doing the Unthinkable

Choosing Happy

Play Episode Listen Later Feb 26, 2024 16:58


Doing the Unthinkable! In this week's episode of the Choosing Happy Podcast, host Heather Masters discusses the concept of doing the unthinkable, or stepping out of one's comfort zone. She shares personal experiences of making bold choices and leaps of faith, for instance moving to South Africa on very short notice and attending training in India against social norms and odds. She discusses the fear and challenges of live streaming as another example of facing an unthinkable situation. Heather also talks about the mindset needed to conquer the unthinkable by staying in a state of flow, letting go of overthinking, and letting the mind wander into the unexplored domain. She describes an exercise from The Magician's Way (William Whitecloud), about passing a straw through a potato, to illustrate absolute faith and the state of unthinking. She challenges listeners to step out of their comfort zones and do something unthinkable, and announces a special coaching offer for the month. Timestamps: 00:00 Introduction to Doing the Unthinkable 01:57 Personal Experiences of Leaping into the Unknown 02:19 The Power of Intuition and Trusting Your Gut 03:07 The Impact of Doing the Unthinkable 04:26 The Unthinkable in Everyday Life 05:11 Overcoming Personal Limitations 07:46 The Unthinkable in Business and Personal Growth 09:29 The Power of Unthinking and Flow State 13:38 Practical Exercise: The Potato and the Straw 15:03 Creating Opportunities by Doing the Unthinkable 15:44 Special Coaching Offer and Conclusion. I do hope you find this episode useful. Please share it if you think someone you know would benefit from its very powerful message, and please leave a review if you enjoyed it. Heather Masters, Copyright 2024 Heather Masters. If you are enjoying the podcast and would like to donate, you can do so here (thank you in advance): Support this Podcast. Links: www.choosinghappypodcast.com Free Wheel of Life Template: https://www.choosinghappy.co.uk/wheel-of-life www.choosinghappy.co.uk/community www.twitter.com/nlpwarrior https://www.facebook.com/choosinghappypodcast https://www.instagram.com/hvmasters/ Podchaser: https://www.podchaser.com/podcasts/choosing-happy-1878162/episodes What you can do now: Join the new community at www.choosinghappy.co.uk/community. Sign up for my Awakened Entrepreneurs Free Video Series: https://www.takingyourbusinessonline.com/the-awakened-entrepreneur. Drop me a line with any feedback you may have at heather@choosinghappy.co.uk. Please take a moment to share and subscribe at www.choosinghappypodcast.com, and it would be really great if you could leave a review on Amazon, Google, iTunes or at Podchaser.com: https://www.podchaser.com/podcasts/choosing-happy-1878162/episodes. I do hope you find this episode useful. Once again, if you know someone who may benefit from listening, please share it.

Guru Viking Podcast
Ep234: Dzogchen Valley - Aro gTér Sangha

Guru Viking Podcast

Play Episode Listen Later Dec 29, 2023 196:17


In this episode I travel to deepest Wales to visit Drala Jong, the headquarters and retreat centre of the Aro gTér sect of Tibetan Buddhism. Ngakchang Rinpoche and Khandro Déchen give a tour of the grounds of Drala Jong, discuss the esoteric geomancy of the site, and reveal the methods used to identify and propitiate the local spirit of the land. Ngakchang Rinpoche and Khandro Déchen detail the methods and practices of the Aro gTér, guide practice sessions in the Dzogchen meditation of sky gazing and other techniques, and tell stories of their lamas such as Kunzang Dorje Rinpoche, Chimé Rigdzin Rinpoche, Dudjom Rinpoche, and more. I also meet the ordained caretakers of the centre, witness the sangha in their daily rituals of chanting and song, and receive a lesson in their physical movement system of Kum Nye from lineage specialist Sang-gyé A-tsal. … Video version: https://www.guruviking.com/podcast/ep234-dzogchen-valley-aro-gtr-sangha Also available on YouTube, iTunes, & Spotify – search 'Guru Viking Podcast'. … 01:23 - Tour of the grounds 04:01 - Geomancy of the site 05:07 - Appeasing the land spirits 10:17 - The right wing 13:47 - Plans for a statue of Dudjom Rinpoche 17:31 - Plans for the left wing 25:37 - The requirements and procedure of Aro gTér ordination 32:57 - Pilgrimage and normalising Vajrayana religion 35:38 - The main house 42:57 - The shrine room 45:59 - Ritual practice at Drala Jong 46:56 - Range of Aro gTér practices 48:16 - Shiné vs śāmatha 51:19 - The 4 Naljors and approaching Dzogchen 56:20 - Practices of shiné and lhagtong 59:44 - Showing the ritual objects, statues, and thangkas 01:11:08 - Attention to detail and creating a lineage place 01:14:05 - The role of personal shrine rooms and care for ritual objects 01:16:35 - Caring for people 01:18:19 - Weapon collection 01:24:57 - Role of prostrations 01:26:13 - Ngakchang Rinpoche's reflections on his 70th birthday 01:28:17 - Drala and the practice of relating to the natural world 01:31:30 - Why Wales is good for Dzogchen practice 01:35:48 - Ancient woodland 01:37:36 - The caretakers Jagyür and Métsal 01:39:52 - Tour of Jagyür and Métsal's accommodation 01:44:21 - Living at Drala Jong 01:45:36 - Daily practice regime 01:48:35 - Recent ordination of disciples 01:50:52 - Ngakpa vs Naljorpa ordination 01:52:21 - Origins of Aro gTér ordination lineages 01:54:05 - Weather making for Chime Rinpoche 01:58:31 - Lunchtime song 01:59:54 - Ngakchang Rinpoche gives sky gazing instruction (namkha arte) 02:04:52 - The 21 semdzin of Dzogchen 02:08:28 - Reflecting on having a retreat centre after 30 years 02:09:40 - Meditation on sound 02:10:56 - On the practice of retreat 02:13:06 - 3 part approach and the practice of suspension 02:16:12 - Markers of progress in spiritual practice 02:20:10 - A story of Dungse Thinley Norbu Rinpoche 02:21:48 - Anecdote about Dzogchen shouts 02:22:42 - Dreams of Dudjom Rinpoche 02:24:52 - Sang-gyé A-tsal introduces Kum Nye 02:27:29 - Steve receives a Kum Nye lesson 02:46:26 - The drinking song of Dudjom Rinpoche 02:49:19 - Wine drinking mudra of Chimé Rigdzin Rinpoche 02:50:41 - Is Dzogchen a religion? 02:52:51 - Taking Dzogchen out of Buddhism 02:56:21 - Imitating culture 02:59:05 - Religion is 'bigger than me' 03:00:52 - Providing community 03:05:26 - Organised religion 03:09:46 - Starting new religions vs being part of a lineage 03:11:48 - Unthinking use of language 03:13:23 - The value of being contained by religion 03:14:38 - Inspiration between teacher and student 
 … Previous episodes with the Aro gTer: - https://youtube.com/playlist?list=PLlkzlKFgdknxvlIXraR8zs4j4vNebA8os&si=BJvxADcKcGuUPOSc To find out more about Drala Jong, visit: - https://www.drala-jong.org/ For more interviews, videos, and more visit: - https://www.guruviking.com Music ‘Deva Dasi' by Steve James

When Shift Happens Podcast
E38: Mike Chang - Why being Fit, Rich and Successful won't make you happy

When Shift Happens Podcast

Play Episode Listen Later Aug 30, 2023 104:54


Mike Chang is a former internet celebrity and founder of one of the most popular fitness YouTube channels, Six Pack Shortcuts, which had more than 4.5 million subscribers in 2015. He turned his fitness passion into an 8-figure business but, despite living the American dream, left his business and totally disappeared from the internet after realizing he felt miserable. Today Mike is a new, wiser man, husband, and father with a greater purpose: helping people not only be fit but also feel happiness, peace, and joy. In this episode, he shares his journey to finding inner peace, including his accidental discovery of unlocking dormant energy through psychedelics. Mike explains why achieving big goals won't necessarily make you happier, the importance of controlling your attention, and how being authentic leads to purpose. KEY TOPICS: 00:00 Mike Chang - The Podcast Guest Who Disappeared 04:38 Mushrooms As Nootropics - Mike's Accidental Discovery 13:20 Unblocking Energy To Unlock The Mind 16:48 Why I Walked Away From Riches and Fame 19:31 My Journey From Misery To Happiness 26:14 The Hidden Keys To Fulfillment 29:17 The Illusion of Control - And Why Letting Go Sets You Free 34:27 What Drove Me To The Top - And Almost Drove Me Over The Edge 36:38 Staying Focused In The Storm Of Life 38:19 The #1 Obstacle To Inner Peace (And How To Overcome It) 41:18 The Simple (Yet Not Easy) Path To Acceptance 46:36 From Stuck To Serene - Where Do I Even Begin? 57:29 Accessing Genius By Silencing The Mind 01:02:01 The Surprising Power of "Unthinking" 01:04:57 Transform Your Meditation With This 5 Step Prep 01:15:40 Why Stress Ruins Everything (And How To Stop It) 01:20:41 Food As Medicine - Fueling Body, Mind and Spirit 01:24:00 Staying On Track In My Darkest Moments Follow Mike Chang: Website: https://www.flow60.com/ Instagram: https://www.instagram.com/mikechangofficial/ Youtube: https://www.youtube.com/c/MikeChangTraining Follow When Shift Happens: Website: https://www.when-shift-happens.co/ Twitter (X): https://twitter.com/yieldlabs Instagram: https://www.instagram.com/kevinffollonier/ Tiktok: https://www.tiktok.com/@kevinfollonier Linkedin: https://www.linkedin.com/in/kevinfollonier/

Inside Exercise
Speed-duration relationship across the animal kingdom with Dr Mark Burnley

Inside Exercise

Play Episode Listen Later May 21, 2023 112:30


Dr Glenn McConell chats with Dr Mark Burnley from Loughborough University in England. He is an expert on critical power (cycling) and critical speed (running). We compared and contrasted the speed-duration relationship across the animal kingdom. Mark is an absolute wealth of knowledge on exercise intensity domains, critical power, and comparative exercise physiology. Critical power is essentially one's aerobic capacity and W' is essentially one's work capacity. Twitter: @DrMarkBurnley 0:00. Introduction and Mark's entry into research 6:12. Exercise intensity domains / critical power 10:02. Single out exercise to determine critical power 11:55. Mark's students, supervisors, collaborators in the area etc 14:55. Critical power and W' (the work capacity above critical power) 18:15. Lactate threshold and critical power 23:42. Isometric exercise, resistance exercise and critical power 26:50. Is W' really only anaerobic work capacity? 28:36. Maximum accumulated oxygen deficit 29:56. Practical use of critical power/W' for training/racing 31:55. Critical speed/pacing 33:50. Zones of training and critical power 37:35. Mark feels that with exercise training all roads lead to Rome/Tokyo 38:15. Zone 2 confusion/evidence/lack of evidence etc 41:35. Animal athletes/critical power in animals 53:22. Dogs are too smart to do proper critical power measures 57:20. Desert iguanas' critical power 59:21. Crabs' critical power 1:02:00. How to compare critical speeds across the animal kingdom 1:04:46. Lungless salamanders' critical power 1:06:37. Fish and birds' critical power 1:16:35. Fibre type/capillaries/muscle mass and critical power 1:25:30. Humans are below average athletically 1:27:30. Migratory birds' energy expenditure etc 1:31:40. Monty Python 1:34:42. The speed-duration curve shape tends to be similar across species 1:35:03. Crabs and humans have similar critical speed relationships! 1:36:00. Recovery and critical speed. W' balance 1:39:02. Unthinking lactate use re best practice for LT determination 1:44:27. Takeaway messages 1:47:10. Zones and training 1:51:10. With exercise training all roads lead to Rome/Tokyo 1:52:21. Outro (9 seconds) Inside Exercise brings to you the who's who of research in exercise metabolism, exercise physiology and exercise's effects on health. With scientific rigor, these researchers discuss popular exercise topics while providing practical strategies for all. The interviewer, Emeritus Professor Glenn McConell, has an international research profile following 30 years of exercise metabolism research experience at The University of Melbourne, Ball State University, Monash University, the University of Copenhagen and Victoria University. He has published over 120 peer-reviewed journal articles and recently edited an Exercise Metabolism eBook written by world experts on 17 different topics (https://link.springer.com/book/10.1007/978-3-030-94305-9). Connect with Inside Exercise and Glenn McConell at: Twitter: @Inside_exercise and @GlennMcConell1 Instagram: insideexercise Facebook: Glenn McConell LinkedIn: Glenn McConell https://www.linkedin.com/in/glenn-mcconell-83475460 ResearchGate: Glenn McConell Email: glenn.mcconell@gmail.com Subscribe to Inside Exercise: Spotify: shorturl.at/tyGHL Apple Podcasts: shorturl.at/oFQRU YouTube: https://www.youtube.com/@insideexercise Anchor: https://anchor.fm/insideexercise Google Podcasts: shorturl.at/bfhHI Podcast Addict: https://podcastaddict.com/podcast/4025218 Not medical advice

Legacy Roadmap Podcast
Karen Roberts Practice of Unthinking

Legacy Roadmap Podcast

Play Episode Listen Later Feb 15, 2023 54:18


Karen Roberts shares her gritty, accidental entrepreneurial journey with Robert and Noelle. Like many others, she got it wrong more than she got it right, but kept moving forward and improving. It is important to say YES to things and then figure it out. The entrepreneurial journey is a crucible of personal growth, and she learned that no one is teaching coaches and therapists how to go from giving away their services to attracting paying clients who love them. Check out more of Karen: Website: karenrobertscoaching.com Facebook: /karenrobertscoaching /groups/6figuresandbeyondforcoaches Instagram: /karenroberts.tv/ LinkedIn: /in/karen-roberts-coaching/ Twitter: /krobertsfitness Youtube: /UCJQVgHgHOYR4aYcvncxoA7w Did you love the value that we are putting out in the show? LEAVE A REVIEW and tell us what you think about the episode so we can continue putting out great content just for you! Share this episode and help someone who wants to connect with world-class people. Get our free gift of 11 Hacks from Successful Entrepreneurs @ AddValue2Entrepreneurs.com. Do you struggle with procrastination? Sign up for a 5-day challenge to help you take more action and make more money in your business: AddValue2Life.com/action Need some hope? Get your copy of the Dose of Hope @ AddValue2Life.com/dose. Follow us at facebook.com/n2rpeterson, instagram.com/n2rpeterson, linkedin.com/in/robertav2l, youtube.com/channel/UCU1gxHrzesGKUPHJdKLUTLg

New Books in Asian American Studies
A. Carly Buxton, "Un-Thinking Collaboration: American Nisei in Transwar Japan" (U Hawaii Press, 2022)

New Books in Asian American Studies

Play Episode Listen Later Oct 26, 2022 61:24


Today I will be talking to Carly Buxton about her book Unthinking Collaboration: American Nisei in Transwar Japan, which came out this year [2022] with the University of Hawaiʻi Press. Unthinking Collaboration uncovers the little-known history of Japanese Americans who spent World War II in Japan. Japanese Americans who found themselves in Japan during the war could not leave, but also, unlike their compatriots, were not interned. But to survive, many had to serve the Japanese state and act as Japanese during the war. When the war ended, these same people were mobilized again, but now in the service of the American occupation. Weaving archival data with oral histories, personal narratives, material culture, and fiction, Unthinking Collaboration emphasizes the heterogeneity of Japanese immigrant experiences and sheds light on broader issues of identity, race, and performance among individuals growing up in a bicultural or multicultural context. By distancing “collaboration” from its default elision with moral judgment, and by incorporating contemporary findings from psychology and behavioral science about the power of the subconscious mind to influence human behavior, Carly Buxton offers an alternative approach to history, one that posits historical subjects as deeply embedded in the realities of their physical and discursive environment. Walking beside Nisei as they navigate their everyday lives in wartime and postwar Japan, readers are urged to “un-think” long-held assumptions about the actions and decisions of individuals as represented in history. Unthinking Collaboration is an ambitious historical study that relates to broader questions of race and trust, empire-building, World War II and its legacy on both the Western and Pacific fronts, as well as questions of loyalty, treason, assimilation, and collaboration. Ran Zwigenberg is an associate professor at Pennsylvania State University. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/asian-american-studies

The Pestle: In-depth Movie Talk, No Fluff | Film Review | Spoilers

We stay for Christopher Nolan’s “Interstellar” and discuss: Story & writing, adding texture, 90% honesty, emotional storytelling; Directing, moments of breathing, the scene with the emotional weight of the film; and other such stuff and things and stuff. “Unthinking respect for authority is the greatest enemy of truth.“ – Kip Thorne Notes & References: MacGuffin […] The post Ep 200: “Interstellar” part 3 appeared first on The Pestle.

Asia Rising
#187: Australia's Unthinking Alliance with America

Asia Rising

Play Episode Listen Later Jul 17, 2022 33:30


Australia has a strong alliance with America, one that has remained unwavering through changes of leadership and turbulent international developments. While agreements such as AUKUS and the Quad have strengthened our position in the region, this has come at the cost of relations with other states and could in the future draw us into conflict. Guest: Hugh White (Emeritus Professor of Strategic Studies at Australian National University). Hugh's new Quarterly Essay is Sleepwalk to War: Australia's unthinking alliance with America. Recorded on 30 June, 2022.

Will Moneymaker Photography Podcast
WM-326: The Unthinking Mind

Will Moneymaker Photography Podcast

Play Episode Listen Later Dec 17, 2021 3:41


Our unthinking minds may be sending us signals that we hardly register as we go about life—but we should pay attention to those signals as potential opportunities for art. Podcast Show Notes: https://moneymakerphotography.com/the-unthinking-mind/  Subscribe to this channel here: https://www.youtube.com/willmoneymaker Photography Clips Podcast: https://PhotographyClips.com Beautiful Postcard Giveaway: https://MoneymakerPhotography.com/Giveaways Free Photography e-Books: https://MoneymakerPhotography.com/ebooks #WillMoneymaker #PhotographyClips #Photography

Should We Keep This?
2005: Crash & American Idol

Should We Keep This?

Play Episode Listen Later Mar 17, 2021 102:45


“Unthinking” is the word of the week, with Crash (best picture) and American Idol (top-rated TV show).  Join Gina and Steven as we ask the tough questions: What would you do if you thought you were making “Do the Right Thing” but actually made “Driving Miss Daisy”? Was everyone a maxxinista in 2005? And who the #*$@ is Brian Dunkleman? Cingular Wireless customers can text “VOTE” to 1-866-SWKT. This podcast is produced by Rock Rising - check us out at rockrising.org & join our newsletter! Support this podcast

Conversations With Canadians
Dr. Irvin Studin: The Curse of Unthinking. Canada Needs to Think For Itself.

Conversations With Canadians

Play Episode Listen Later Jan 16, 2021 99:37


Irvin Studin is a Canadian academic, author, public intellectual, and former two-time All-Canadian soccer player who is the editor-in-chief of Global Brief Magazine and the President of the Institute for 21st Century Questions. He holds an undergraduate degree in Business Administration from the Schulich School of Business. He studied at the University of Oxford as a Rhodes Scholar, earning a Master of Arts degree in philosophy, politics, and economics. He studied International Relations at the London School of Economics and has a PhD in Constitutional Law from Osgoode Hall Law School, where he earned the Governor General's Gold Medal. We chat about Irvin's professional soccer experience and how that influenced his academic pursuits. We discuss a number of topics, such as the new strategic borders facing Canada in the 21st century and the importance of China; why Canada needs a population of 100 million; and the six major crises facing Canada in the post-pandemic world, with a particular focus on the disintegrating education system in Canada. We talk about the future of Canada in the 21st century, and Irvin discusses his thoughts on what being Canadian means to him. Irvin Studin can be reached on Facebook at https://www.facebook.com/irvin.studin, and on LinkedIn at https://www.linkedin.com/in/irvin-studin-28087b112/?originalSubdomain=ca. Global Brief Magazine - https://globalbrief.ca/ Institute For 21st Century Questions - https://www.i21cq.com/ I hope you enjoy the episode. If you like what you are hearing, please remember to hit the subscribe button. Please feel free to reach out to me and provide feedback at MikeRyanG1@gmail.com On Twitter @MikeRyanG On Instagram at https://www.instagram.com/mike_ryang/ @Mike_RyanG

The Adorable Boy Podcast
Who Wants to Be a COVID Governor?!

The Adorable Boy Podcast

Play Episode Listen Later Dec 22, 2020 77:21


Spud uses the Podcast War to offer Jim Cornette peace in the form of a poem. Vito the Vegan Guido stops by to show us how to cook vegan chicken parmigiana. Three inept governors vie to become the ultimate dingbat in "Who Wants to be a COVID Governor?!". The Unthinking mob leader calls in to satisfy his obsession, and Randy "The Rocket" Rosenthal calls to bring cheer, inspiration, and hope. Spud thinks the Tom Cruise rant is pure bologna and discusses the news. Music Credit (Artist - Song Title): Duncan Reid and the Big Heads - 77 Pudge - Sweetheart Forget the Whalers - I Know Where You've Been Kai Engel - Great Expectations Karsten Holy Moly - The Invisible Enemy --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

The Adorable Boy Podcast
The Adorable Boys Strike Back!

The Adorable Boy Podcast

Play Episode Listen Later Aug 20, 2020 78:38


Spud starts out the eighth episode of the podcast dealing with #podcastwar business. Cornette has one week to sign the treaty or the Boys are coming for him again. After episode 8, Low Pitch Tim and Joe the Camel Boy discuss their training regimens. Spud heroically talks Tim into doing the duel despite his broken wrist (which Spud is in no way at fault for). Spud does a favor for our generation as he whitewashes history. The Boys talk about the news and deal with the Unthinking mob. Follow us on Twitter @adorablepodcast --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

Mental Health Check-In Podcast
"An Unthinking Thing" - feat. Megan Fiscus

Mental Health Check-In Podcast

Play Episode Listen Later Aug 13, 2020 71:15


Guest Megan Fiscus (@megatron.v2.0 on Instagram, photographer/model and yoga instructor) talks about yoga, meditation, the body's connection to the mind, people and things as attachments, and how to respond in difficult conversations in relationships. Where to find Megan: https://www.instagram.com/megatron.v2.0/ https://www.modelmegatron.com/ https://onlyfans.com/modelmegatron https://www.patreon.com/modelmegatron Where to find MHCIP: https://www.instagram.com/mentalhealthcheckinpod/ https://twitter.com/checkinpod https://soundcloud.com/mental-health-check-in https://open.spotify.com/show/6mynlCxmWbOlLGG5WKPIwT https://podcasts.apple.com/us/podcast/mental-health-check-in-podcast/id1525485523 https://www.youtube.com/watch?v=GyAo-z6U7F4&t=1s --- Send in a voice message: https://anchor.fm/mental-health-check-in/message

The Adorable Boy Podcast
The Unthinking Mob

The Adorable Boy Podcast

Play Episode Listen Later Aug 13, 2020 68:53


Spud gets tough with Jim Cornette and implores him to sign the treaty. He tells the kowtowing podcaster that the war will resume if Cornette refuses to officially surrender. Pete Johnson Jr. finally got the phones working, so The Adorable Boys take some calls from fans; the results are... interesting. The ABN News Network sued Spud based on false allegations and used their pull in the legal system to have the courts brand our favorite podcaster a racist. As per court order, Spud must allow the leader of the unthinking mob to address the Adorable Boy audience. In true Spud fashion, he takes a devil-may-care attitude and defies the court and the mob. The consequences could be devastating, but to Spud, integrity is everything. With the first ever #podcastduel just weeks away, a new sponsor comes on board to make the duel a commercial-free broadcast. All this and more on this week's Adorable Boy Podcast. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

The Good Fail
Creatively Unthinking It

The Good Fail

Play Episode Listen Later Feb 10, 2020 48:45


Today we are asking how we can generate new ideas when we're in the grip of creative block or feeling stuck in our current life situation. After quickly catching up with our joint fails and successes of the week, we start by clarifying how idea generation can relate to our listeners. We're all asked to think creatively at some point, and sometimes the ideas just aren't coming and you feel stuck. DOES THAT FEEL LIKE YOU RIGHT NOW? We've definitely been there/are there right now! Here are our experiences of feeling stuck, both artistically and in life, and what we've learned from those experiences. SHOULD WE FEEL BAD ABOUT IT? No! We don't think so. Nor would your future self, who can only look back with gratitude for what you have achieved. SO WHAT SHOULD WE DO ABOUT IT? Acknowledge that you don't feel good, that it's okay, and even lean into the feeling a little. Just don't get stuck there. ALSO, DO YOU REMEMBER NIGREDO? Check out season 1 episode 2 to recap. Nigredo is exciting because it means new growth is possible. So when you find yourself feeling stuck, you should feel excited for the next bit too: generating ideas. WHAT'S A GOOD WAY OF GENERATING IDEAS? DO NOT focus on generating ideas. Instead, fill yourself to the brim with whatever you love doing. Then the ideas will creep in. We end with a quote from https://www.elizabethgilbert.com/books/big-magic/ (Big Magic by Elizabeth Gilbert) that gives a clear, visual image of what we're talking about. CONCLUSION: Idea generation when you're feeling stuck can, like a blank sheet of paper, feel scary and intimidating. But by not panicking and by absolutely not thinking about it, you can get pretty creative. WHAT'S OUR RECOMMENDATION FOR THE WEEK? https://www.bbcearth.com/podcast/# (BBC Earth Podcast) A great mix of stories about our planet to get you started on feeling inspired. SUPPORT THE SHOW You can become an official supporter of the show by joining our Good Fail club on Patreon. Visit: https://my.captivate.fm/www.patreon.com/thegoodfail (www.patreon.com/thegoodfail) FIND US ON SOCIAL MEDIA If you're not tired of us yet, you can also find us in these social places: Instagram - https://my.captivate.fm/dashboard/@thegood_fail (@thegood_fail) https://www.instagram.com/pretty_messy_official/ (@pretty_messy_official) https://www.instagram.com/merlemade_tales/ (@merlemade_tales) Facebook - https://www.facebook.com/TheGoodFailers/ (@thegoodfailers) https://www.facebook.com/merlemadetales/ (@merlemadetales) Twitter - https://my.captivate.fm/dashboard/@thegood_fail (@thegood_fail) Pinterest - https://www.pinterest.co.uk/merlehunt/ (@merlemadetales) The Web - http://merlemadetales.com/ (www.merlemadetales.com) MUSIC FOR THE GOOD FAIL BY: https://filmmusic.io "Dreamy Flashback" by Kevin MacLeod (https://incompetech.com) License: CC BY (http://creativecommons.org/licenses/by/4.0/) Support this podcast

Speakers Speak
Quote Analysis - "Words ought to be a little wild for they are the assaults of thought on the unthinking.” - John Maynard Keynes

Speakers Speak

Play Episode Listen Later Sep 21, 2019 5:30


In this episode of quote analysis, we go over "Words ought to be a little wild for they are the assaults of thought on the unthinking.” - John Maynard Keynes

TheThinkingAtheist
Unthinking Atheists: I Can't Believe I Did This When I Was Drunk!

TheThinkingAtheist

Play Episode Listen Later Aug 6, 2019 57:12


This is a lighthearted show filled with anecdotes (and cautionary tales) about alcohol-related antics. We also get a little bit into the science of inebriation, so we can better understand why those who are drunk aren't really "thunk."Support our sponsor, Blinkist, with a free 7-day trial at http://www.blinkist.com/seth

Bshani Radio
Grown Folks Talking Live - (S9 - Ep - 1) - Unthinking & Mind Blowing with Lavinia Jackson

Bshani Radio

Play Episode Listen Later Dec 4, 2018 62:41


Unthinking & Mind Blowing with Lavinia Jackson

Quranite Podcast
Cults Require Unthinking Conformity – Then They Will Disown You

Quranite Podcast

Play Episode Listen Later Dec 4, 2017 31:41


Cults share common characteristics: abdication of independent thought; deception; and, crucially, a lack of personal responsibility. "He promises them And arouses desires in them And the shayṭān promises them only deception. These: their habitation is Hell And they will find no refuge therefrom. But those who heed warning and do deeds of righteousness We …" The post Cults Require Unthinking Conformity – Then They Will Disown You appeared first on QuraniteCast.

805conversations
Undeveloping The Future - Adam Hall - EarthKeeper

805conversations

Play Episode Listen Later Aug 3, 2017 43:47


Adam Hall has a background in real estate development and investment banking. His focus now is un-doing: unthinking, undeveloping. It's turned into his life's mission. It's quite a turnaround in his thinking and has changed his life. He recently sat down with Mark and Patrick to talk about his background, the formation of the EarthKeeper Alliance, and what he's currently working (or unworking) on. Including: • His thoughts on land conservation, both on and offshore • How he defines 'undevelopment' • What it means to be an Impact Investor • Ironically, how to make sure that land conservation projects are themselves not undone • Adam explains how Land Trusts work and how they are proliferating across the US • Our audience knows we love to talk about Dragons; the one Adam deals with concerns Property Rights and Usage • He's encouraged by efforts in Santa Barbara like the SustainSB effort currently underway • Mark and Patrick love talking about the impact of our guests' ideas on Millennials, and Adam helped us understand where they fit in this 'undoing' world • He left us with this quote from Claude Debussy: "The music is the silence between the notes."

Latin American History Seminars
Unthinking the Canon: Latin America and the History of Historiography

Latin American History Seminars

Play Episode Listen Later Nov 16, 2015


Institute of Historical Research Unthinking the Canon: Latin America and the History of Historiography Mark Thurner (ILAS) In the late eighteenth century Peruvian intellectuals complained in print that their history "occupies only a diminutiv...

Small Business Hour
Harry Beckwith

Small Business Hour

Play Episode Listen Later Jan 19, 2015


Noted business author Harry Beckwith comes on the show to discuss his new book Unthinking. We also discuss Jeffrey Immelt’s influence on the Obama administration, the latest Discover Small Business Survey, and some ways to enact Breakthrough Innovation in your business.

Urantia Book
86 - Early Evolution of Religion

Urantia Book

Play Episode Listen Later Oct 4, 2014


Early Evolution of Religion

(950.1) 86:0.1 THE evolution of religion from the preceding and primitive worship urge is not dependent on revelation. The normal functioning of the human mind under the directive influence of the sixth and seventh mind-adjutants of universal spirit bestowal is wholly sufficient to insure such development.

(950.2) 86:0.2 Man’s earliest prereligious fear of the forces of nature gradually became religious as nature became personalized, spiritized, and eventually deified in human consciousness. Religion of a primitive type was therefore a natural biologic consequence of the psychologic inertia of evolving animal minds after such minds had once entertained concepts of the supernatural.

1. Chance: Good Luck and Bad Luck

(950.3) 86:1.1 Aside from the natural worship urge, early evolutionary religion had its roots of origin in the human experiences of chance — so-called luck, commonplace happenings. Primitive man was a food hunter. The results of hunting must ever vary, and this gives certain origin to those experiences which man interprets as good luck and bad luck. Mischance was a great factor in the lives of men and women who lived constantly on the ragged edge of a precarious and harassed existence.

(950.4) 86:1.2 The limited intellectual horizon of the savage so concentrates the attention upon chance that luck becomes a constant factor in his life. Primitive Urantians struggled for existence, not for a standard of living; they lived lives of peril in which chance played an important role. The constant dread of unknown and unseen calamity hung over these savages as a cloud of despair which effectively eclipsed every pleasure; they lived in constant dread of doing something that would bring bad luck. Superstitious savages always feared a run of good luck; they viewed such good fortune as a certain harbinger of calamity.

(950.5) 86:1.3 This ever-present dread of bad luck was paralyzing. Why work hard and reap bad luck — nothing for something — when one might drift along and encounter good luck — something for nothing? Unthinking men forget good luck — take it for granted — but they painfully remember bad luck.

(950.6) 86:1.4 Early man lived in uncertainty and in constant fear of chance — bad luck. Life was an exciting game of chance; existence was a gamble. It is no wonder that partially civilized people still believe in chance and evince lingering predispositions to gambling. Primitive man alternated between two potent interests: the passion of getting something for nothing and the fear of getting nothing for something. And this gamble of existence was the main interest and the supreme fascination of the early savage mind.

(951.1) 86:1.5 The later herders held the same views of chance and luck, while the still later agriculturists were increasingly conscious that crops were immediately influenced by many things over which man had little or no control. The farmer found himself the victim of drought, floods, hail, storms, pests, and plant diseases, as well as heat and cold. And as all of these natural influences affected individual prosperity, they were regarded as good luck or bad luck.

(951.2) 86:1.6 This notion of chance and luck strongly pervaded the philosophy of all ancient peoples. Even in recent times in the Wisdom of Solomon it is said: “I returned and saw that the race is not to the swift, nor the battle to the strong, neither bread to the wise, nor riches to men of understanding, nor favor to men of skill; but fate and chance befall them all. For man knows not his fate; as fishes are taken in an evil net, and as birds are caught in a snare, so are the sons of men snared in an evil time when it falls suddenly upon them.”

2. The Personification of Chance

(951.3) 86:2.1 Anxiety was a natural state of the savage mind. When men and women fall victims to excessive anxiety, they are simply reverting to the natural estate of their far-distant ancestors; and when anxiety becomes actually painful, it inhibits activity and unfailingly institutes evolutionary changes and biologic adaptations. Pain and suffering are essential to progressive evolution.

(951.4) 86:2.2 The struggle for life is so painful that certain backward tribes even yet howl and lament over each new sunrise. Primitive man constantly asked, “Who is tormenting me?” Not finding a material source for his miseries, he settled upon a spirit explanation. And so was religion born of the fear of the mysterious, the awe of the unseen, and the dread of the unknown. Nature fear thus became a factor in the struggle for existence first because of chance and then because of mystery.

(951.5) 86:2.3 The primitive mind was logical but contained few ideas for intelligent association; the savage mind was uneducated, wholly unsophisticated. If one event followed another, the savage considered them to be cause and effect. What civilized man regards as superstition was just plain ignorance in the savage. Mankind has been slow to learn that there is not necessarily any relationship between purposes and results. Human beings are only just beginning to realize that the reactions of existence appear between acts and their consequences. The savage strives to personalize everything intangible and abstract, and thus both nature and chance become personalized as ghosts — spirits — and later on as gods.

(951.6) 86:2.4 Man naturally tends to believe that which he deems best for him, that which is in his immediate or remote interest; self-interest largely obscures logic. The difference between the minds of savage and civilized men is more one of content than of nature, of degree rather than of quality.

(951.7) 86:2.5 But to continue to ascribe things difficult of comprehension to supernatural causes is nothing less than a lazy and convenient way of avoiding all forms of intellectual hard work. Luck is merely a term coined to cover the inexplicable in any age of human existence; it designates those phenomena which men are unable or unwilling to penetrate. Chance is a word which signifies that man is too ignorant or too indolent to determine causes. Men regard a natural occurrence as an accident or as bad luck only when they are destitute of curiosity and imagination, when the races lack initiative and adventure. Exploration of the phenomena of life sooner or later destroys man’s belief in chance, luck, and so-called accidents, substituting therefor a universe of law and order wherein all effects are preceded by definite causes. Thus is the fear of existence replaced by the joy of living.

(952.1) 86:2.6 The savage looked upon all nature as alive, as possessed by something. Civilized man still kicks and curses those inanimate objects which get in his way and bump him. Primitive man never regarded anything as accidental; always was everything intentional. To primitive man the domain of fate, the function of luck, the spirit world, was just as unorganized and haphazard as was primitive society. Luck was looked upon as the whimsical and temperamental reaction of the spirit world; later on, as the humor of the gods.

(952.2) 86:2.7 But all religions did not develop from animism. Other concepts of the supernatural were contemporaneous with animism, and these beliefs also led to worship. Naturalism is not a religion — it is the offspring of religion.

3. Death — The Inexplicable

(952.3) 86:3.1 Death was the supreme shock to evolving man, the most perplexing combination of chance and mystery. Not the sanctity of life but the shock of death inspired fear and thus effectively fostered religion. Among savage peoples death was ordinarily due to violence, so that nonviolent death became increasingly mysterious. Death as a natural and expected end of life was not clear to the consciousness of primitive people, and it has required age upon age for man to realize its inevitability.

(952.4) 86:3.2 Early man accepted life as a fact, while he regarded death as a visitation of some sort. All races have their legends of men who did not die, vestigial traditions of the early attitude toward death. Already in the human mind there existed the nebulous concept of a hazy and unorganized spirit world, a domain whence came all that is inexplicable in human life, and death was added to this long list of unexplained phenomena.

(952.5) 86:3.3 All human disease and natural death was at first believed to be due to spirit influence. Even at the present time some civilized races regard disease as having been produced by “the enemy” and depend upon religious ceremonies to effect healing. Later and more complex systems of theology still ascribe death to the action of the spirit world, all of which has led to such doctrines as original sin and the fall of man.

(952.6) 86:3.4 It was the realization of impotency before the mighty forces of nature, together with the recognition of human weakness before the visitations of sickness and death, that impelled the savage to seek for help from the supermaterial world, which he vaguely visualized as the source of these mysterious vicissitudes of life.

4. The Death-Survival Concept

(952.7) 86:4.1 The concept of a supermaterial phase of mortal personality was born of the unconscious and purely accidental association of the occurrences of everyday life plus the ghost dream. The simultaneous dreaming about a departed chief by several members of his tribe seemed to constitute convincing evidence that the old chief had really returned in some form. It was all very real to the savage who would awaken from such dreams reeking with sweat, trembling, and screaming.

(953.1) 86:4.2 The dream origin of the belief in a future existence explains the tendency always to imagine unseen things in the terms of things seen. And presently this new dream-ghost-future-life concept began effectively to antidote the death fear associated with the biologic instinct of self-preservation.

(953.2) 86:4.3 Early man was also much concerned about his breath, especially in cold climates, where it appeared as a cloud when exhaled. The breath of life was regarded as the one phenomenon which differentiated the living and the dead. He knew the breath could leave the body, and his dreams of doing all sorts of queer things while asleep convinced him that there was something immaterial about a human being. The most primitive idea of the human soul, the ghost, was derived from the breath-dream idea-system.

(953.3) 86:4.4 Eventually the savage conceived of himself as a double — body and breath. The breath minus the body equaled a spirit, a ghost. While having a very definite human origin, ghosts, or spirits, were regarded as superhuman. And this belief in the existence of disembodied spirits seemed to explain the occurrence of the unusual, the extraordinary, the infrequent, and the inexplicable.

(953.4) 86:4.5 The primitive doctrine of survival after death was not necessarily a belief in immortality. Beings who could not count over twenty could hardly conceive of infinity and eternity; they rather thought of recurring incarnations.

(953.5) 86:4.6 The orange race was especially given to belief in transmigration and reincarnation. This idea of reincarnation originated in the observance of hereditary and trait resemblance of offspring to ancestors. The custom of naming children after grandparents and other ancestors was due to belief in reincarnation. Some later-day races believed that man died from three to seven times. This belief (residual from the teachings of Adam about the mansion worlds), and many other remnants of revealed religion, can be found among the otherwise absurd doctrines of twentieth-century barbarians.

(953.6) 86:4.7 Early man entertained no ideas of hell or future punishment. The savage looked upon the future life as just like this one, minus all ill luck. Later on, a separate destiny for good ghosts and bad ghosts — heaven and hell — was conceived. But since many primitive races believed that man entered the next life just as he left this one, they did not relish the idea of becoming old and decrepit. The aged much preferred to be killed before becoming too infirm.

(953.7) 86:4.8 Almost every group had a different idea regarding the destiny of the ghost soul. The Greeks believed that weak men must have weak souls; so they invented Hades as a fit place for the reception of such anemic souls; these unrobust specimens were also supposed to have shorter shadows. The early Andites thought their ghosts returned to the ancestral homelands. The Chinese and Egyptians once believed that soul and body remained together. Among the Egyptians this led to careful tomb construction and efforts at body preservation. Even modern peoples seek to arrest the decay of the dead. The Hebrews conceived that a phantom replica of the individual went down to Sheol; it could not return to the land of the living. They did make that important advance in the doctrine of the evolution of the soul.

5. The Ghost-Soul Concept

(953.8) 86:5.1 The nonmaterial part of man has been variously termed ghost, spirit, shade, phantom, specter, and latterly soul. The soul was early man’s dream double; it was in every way exactly like the mortal himself except that it was not responsive to touch. The belief in dream doubles led directly to the notion that all things animate and inanimate had souls as well as men. This concept tended long to perpetuate the nature-spirit beliefs; the Eskimos still conceive that everything in nature has a spirit.

(954.1) 86:5.2 The ghost soul could be heard and seen, but not touched. Gradually the dream life of the race so developed and expanded the activities of this evolving spirit world that death was finally regarded as “giving up the ghost.” All primitive tribes, except those little above animals, have developed some concept of the soul. As civilization advances, this superstitious concept of the soul is destroyed, and man is wholly dependent on revelation and personal religious experience for his new idea of the soul as the joint creation of the God-knowing mortal mind and its indwelling divine spirit, the Thought Adjuster.

(954.2) 86:5.3 Early mortals usually failed to differentiate the concepts of an indwelling spirit and a soul of evolutionary nature. The savage was much confused as to whether the ghost soul was native to the body or was an external agency in possession of the body. The absence of reasoned thought in the presence of perplexity explains the gross inconsistencies of the savage view of souls, ghosts, and spirits.

(954.3) 86:5.4 The soul was thought of as being related to the body as the perfume to the flower. The ancients believed that the soul could leave the body in various ways, as in:

(954.4) 86:5.5 1. Ordinary and transient fainting.

(954.5) 86:5.6 2. Sleeping, natural dreaming.

(954.6) 86:5.7 3. Coma and unconsciousness associated with disease and accidents.

(954.7) 86:5.8 4. Death, permanent departure.

(954.8) 86:5.9 The savage looked upon sneezing as an abortive attempt of the soul to escape from the body. Being awake and on guard, the body was able to thwart the soul’s attempted escape. Later on, sneezing was always accompanied by some religious expression, such as “God bless you!”

(954.9) 86:5.10 Early in evolution sleep was regarded as proving that the ghost soul could be absent from the body, and it was believed that it could be called back by speaking or shouting the sleeper’s name. In other forms of unconsciousness the soul was thought to be farther away, perhaps trying to escape for good — impending death. Dreams were looked upon as the experiences of the soul during sleep while temporarily absent from the body. The savage believes his dreams to be just as real as any part of his waking experience. The ancients made a practice of awaking sleepers gradually so that the soul might have time to get back into the body.

(954.10) 86:5.11 All down through the ages men have stood in awe of the apparitions of the night season, and the Hebrews were no exception. They truly believed that God spoke to them in dreams, despite the injunctions of Moses against this idea. And Moses was right, for ordinary dreams are not the methods employed by the personalities of the spiritual world when they seek to communicate with material beings.

(954.11) 86:5.12 The ancients believed that souls could enter animals or even inanimate objects. This culminated in the werewolf ideas of animal identification. A person could be a law-abiding citizen by day, but when he fell asleep, his soul could enter a wolf or some other animal to prowl about on nocturnal depredations.

(955.1) 86:5.13 Primitive men thought that the soul was associated with the breath, and that its qualities could be imparted or transferred by the breath. The brave chief would breathe upon the newborn child, thereby imparting courage. Among early Christians the ceremony of bestowing the Holy Spirit was accompanied by breathing on the candidates. Said the Psalmist: “By the word of the Lord were the heavens made and all the host of them by the breath of his mouth.” It was long the custom of the eldest son to try to catch the last breath of his dying father.

(955.2) 86:5.14 The shadow came, later on, to be feared and revered equally with the breath. The reflection of oneself in the water was also sometimes looked upon as proof of the double self, and mirrors were regarded with superstitious awe. Even now many civilized persons turn the mirror to the wall in the event of death. Some backward tribes still believe that the making of pictures, drawings, models, or images removes all or a part of the soul from the body; hence such are forbidden.

(955.3) 86:5.15 The soul was generally thought of as being identified with the breath, but it was also located by various peoples in the head, hair, heart, liver, blood, and fat. The “crying out of Abel’s blood from the ground” is expressive of the onetime belief in the presence of the ghost in the blood. The Semites taught that the soul resided in the bodily fat, and among many the eating of animal fat was taboo. Head hunting was a method of capturing an enemy’s soul, as was scalping. In recent times the eyes have been regarded as the windows of the soul.

(955.4) 86:5.16 Those who held the doctrine of three or four souls believed that the loss of one soul meant discomfort, two illness, three death. One soul lived in the breath, one in the head, one in the hair, one in the heart. The sick were advised to stroll about in the open air with the hope of recapturing their strayed souls. The greatest of the medicine men were supposed to exchange the sick soul of a diseased person for a new one, the “new birth.”

(955.5) 86:5.17 The children of Badonan developed a belief in two souls, the breath and the shadow. The early Nodite races regarded man as consisting of two persons, soul and body. This philosophy of human existence was later reflected in the Greek viewpoint. The Greeks themselves believed in three souls; the vegetative resided in the stomach, the animal in the heart, the intellectual in the head. The Eskimos believe that man has three parts: body, soul, and name.

6. The Ghost-Spirit Environment

(955.6) 86:6.1 Man inherited a natural environment, acquired a social environment, and imagined a ghost environment. The state is man’s reaction to his natural environment, the home to his social environment, the church to his illusory ghost environment.

(955.7) 86:6.2 Very early in the history of mankind the realities of the imaginary world of ghosts and spirits became universally believed, and this newly imagined spirit world became a power in primitive society. The mental and moral life of all mankind was modified for all time by the appearance of this new factor in human thinking and acting.

(955.8) 86:6.3 Into this major premise of illusion and ignorance, mortal fear has packed all of the subsequent superstition and religion of primitive peoples. This was man’s only religion up to the times of revelation, and today many of the world’s races have only this crude religion of evolution.

(955.9) 86:6.4 As evolution progressed, good luck became associated with good spirits and bad luck with bad spirits. The discomfort of enforced adaptation to a changing environment was regarded as ill luck, the displeasure of the spirit ghosts. Primitive man slowly evolved religion out of his innate worship urge and his misconception of chance. Civilized man provides schemes of insurance to overcome these chance occurrences; modern science puts an actuary with mathematical reckoning in the place of fictitious spirits and whimsical gods.

(956.1) 86:6.5 Each passing generation smiles at the foolish superstitions of its ancestors while it goes on entertaining those fallacies of thought and worship which will give cause for further smiling on the part of enlightened posterity.

(956.2) 86:6.6 But at last the mind of primitive man was occupied with thoughts which transcended all of his inherent biologic urges; at last man was about to evolve an art of living based on something more than response to material stimuli. The beginnings of a primitive philosophic life policy were emerging. A supernatural standard of living was about to appear, for, if the spirit ghost in anger visits ill luck and in pleasure good fortune, then must human conduct be regulated accordingly. The concept of right and wrong had at last evolved; and all of this long before the times of any revelation on earth.

(956.3) 86:6.7 With the emergence of these concepts, there was initiated the long and wasteful struggle to appease the ever-displeased spirits, the slavish bondage to evolutionary religious fear, that long waste of human effort upon tombs, temples, sacrifices, and priesthoods. It was a terrible and frightful price to pay, but it was worth all it cost, for man therein achieved a natural consciousness of relative right and wrong; human ethics was born!

7. The Function of Primitive Religion

(956.4) 86:7.1 The savage felt the need of insurance, and he therefore willingly paid his burdensome premiums of fear, superstition, dread, and priest gifts toward his policy of magic insurance against ill luck. Primitive religion was simply the payment of premiums on insurance against the perils of the forests; civilized man pays material premiums against the accidents of industry and the exigencies of modern modes of living.

(956.5) 86:7.2 Modern society is removing the business of insurance from the realm of priests and religion, placing it in the domain of economics. Religion is concerning itself increasingly with the insurance of life beyond the grave. Modern men, at least those who think, no longer pay wasteful premiums to control luck. Religion is slowly ascending to higher philosophic levels in contrast with its former function as a scheme of insurance against bad luck.

(956.6) 86:7.3 But these ancient ideas of religion prevented men from becoming fatalistic and hopelessly pessimistic; they believed they could at least do something to influence fate. The religion of ghost fear impressed upon men that they must regulate their conduct, that there was a supermaterial world which was in control of human destiny.

(956.7) 86:7.4 Modern civilized races are just emerging from ghost fear as an explanation of luck and the commonplace inequalities of existence. Mankind is achieving emancipation from the bondage of the ghost-spirit explanation of ill luck. But while men are giving up the erroneous doctrine of a spirit cause of the vicissitudes of life, they exhibit a surprising willingness to accept an almost equally fallacious teaching which bids them attribute all human inequalities to political misadaptation, social injustice, and industrial competition. But new legislation, increasing philanthropy, and more industrial reorganization, however good in and of themselves, will not remedy the facts of birth and the accidents of living. Only comprehension of facts and wise manipulation within the laws of nature will enable man to get what he wants and to avoid what he does not want. Scientific knowledge, leading to scientific action, is the only antidote for so-called accidental ills.

(957.1) 86:7.5 Industry, war, slavery, and civil government arose in response to the social evolution of man in his natural environment; religion similarly arose as his response to the illusory environment of the imaginary ghost world. Religion was an evolutionary development of self-maintenance, and it has worked, notwithstanding that it was originally erroneous in concept and utterly illogical.

(957.2) 86:7.6 Primitive religion prepared the soil of the human mind, by the powerful and awesome force of false fear, for the bestowal of a bona fide spiritual force of supernatural origin, the Thought Adjuster. And the divine Adjusters have ever since labored to transmute God-fear into God-love. Evolution may be slow, but it is unerringly effective.

(957.3) 86:7.7 [Presented by an Evening Star of Nebadon.]

Spiritual Living Podcast
Changing Our B.S.

Spiritual Living Podcast

Play Episode Listen Later Oct 28, 2012 25:29


The road to hell is paved with our B.S., our belief system. Intentions are always good at their source, and they start at the centre of our being. But they go bad when they are filtered through our false beliefs and our hidden fears. Unthinking those false beliefs takes committed practice, and this prayer works: practice is cultivating the soil of silence so that God can grow through us.

Sam & Kara in the Morning
Thursday, February 3, 2011

Sam & Kara in the Morning

Play Episode Listen Later Feb 3, 2011 50:00


We prepare for the station's 3rd anniversary, and Sam recounts how the station began and how it has evolved. We continue discussing the developments in Egypt and the growing violence there. Bestselling author Harry Beckwith joins us to talk about how we make choices in life and his new book "Unthinking." We compare different climates and wonder how people can live in such cold weather. More about living in LA, and Kara starts making generalizations. Arguments ensue.

sex funny arguments talk radio morning radio harry beckwith unthinking sam in the morning
Lucky Rock Comedy Show
Lucky Rant 122 - Old Dogs, Old Tricks

Lucky Rock Comedy Show

Play Episode Listen Later Apr 15, 2007 2:53


Hillary's resurrected her talk of a "vast, right-wing conspiracy." This inspires us to reach back into the past for some tried and true insults of our own. Like this Lucky Rant? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
Lucky Rant 123 - True Political Ads

Lucky Rock Comedy Show

Play Episode Listen Later Apr 15, 2007 3:08


Hillary Clinton was furious over a recent ad by an Obama friend, which dared to tell the truth about Hillary's "ambitions." This inspired us to think up a few true political ads of our own. Enjoy! Like this Lucky Rant? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
British Accent in a Bottle

Lucky Rock Comedy Show

Play Episode Listen Later Apr 13, 2007 0:29


Now you can sound brilliant, no matter who you are or what you believe! Like British Accent in a Bottle? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
Gym Rats Interview

Lucky Rock Comedy Show

Play Episode Listen Later Apr 13, 2007 3:27


Meet Bonnie and Donnie...the coolest people at your gym. You know it! Like Gym Rats? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
HypoChristian Interview

Lucky Rock Comedy Show

Play Episode Listen Later Apr 13, 2007 5:03


Ever meet a hypocritical Christian? Ever wish they'd get a clue? Then you'll love Todd Powers, a.k.a. HypoChristian. Like HypoChristian? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
Share My Pain - Ordering at McDonald's

Lucky Rock Comedy Show

Play Episode Listen Later Apr 13, 2007 2:02


Share my pain as I retell yet another awful McDonald's experience. Like Share My Pain? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.

Lucky Rock Comedy Show
No Think Powder Ad

Lucky Rock Comedy Show

Play Episode Listen Later Apr 10, 2007 0:29


Iraq? Politics? Religion? Don't worry about it. No Think Powder is here! Like our No Think Powder ad? Forward it to your friends and tell them to check out myluckyrock.com and the Lucky Rock blog.