Please welcome back Grant Newsham, retired Marine colonel and author of When China Attacks, A Warning to America. Grant came on the show to discuss the state of the Japan Defense Forces and the PRC threat. This is a two-part episode. Grant's biography: https://centerforsecuritypolicy.org/author/grant-newsham/ Book link: https://www.regnery.com/9781684513659/when-china-attacks/ A recent article: https://andmagazine.substack.com/p/the-us-in-the-pacific-getting-the?utm_source=substack&publication_id=746580&post_id=151553726&utm_medium=email&utm_content=share&utm_campaign=email-share&triggerShare=true&isFreemail=true&r=ercjf&triedRedirect=true --- One CA is a product of the Civil Affairs Association and brings in people who are current or former military, diplomats, development officers, and field agents to discuss their experiences on the ground with a partner nation's people and leadership. We aim to inspire anyone interested in working in the "last three feet" of U.S. foreign relations. To contact the show, email us at CApodcasting@gmail.com or look us up on the Civil Affairs Association website at www.civilaffairsassoc.org --- Special thanks to the site Cool Jazz Hot Bossa for the sample of Cool Jazz Hot Bossa. (59:00). Retrieved from: https://www.youtube.com/watch?v=bdWUj2NYDYQ --- Transcript: (Part I) 00:00:05 JACK GAINES Welcome to the 1CA Podcast. This is your host, Jack Gaines. 1CA is a product of the Civil Affairs Association and brings in people who are current or former military, diplomats, development officers, and field agents to discuss their experiences on the ground with the partner nation's people and leadership. Our goal is to inspire anyone interested in working the last three feet of foreign relations. To contact the show, email us at capodcasting@gmail.com, or look us up on the Civil Affairs Association website at www.civilaffairsassoc.org. I'll have those in the show notes. Please welcome Grant Newsham, retired Marine colonel and author of When China Attacks, A Warning to America. Grant came on the show to discuss the state of the Japan Defense Forces and the PRC threat. This is the first of a two-part episode, so let's get started. 00:00:56 GRANT NEWSHAM I was effectively MarForPac's guy in Asia for a number of years, which worked well in both directions. So I was obviously in Japan, but also did a lot of work for them throughout the region, Southeast Asia as well, Taiwan even, which was a lot of fun. 00:01:13 JACK GAINES Yeah. And you've become a foreign policy advocate in the area. 00:01:16 GRANT NEWSHAM Yeah. At some point, maybe seven or eight years ago, I figured I'd actually done enough stuff to maybe have a few ideas. So I started writing, and I speak a lot as well. So I guess I'm part of the commentariat. But I seem to write about once a week on some topic related to, often, Asian defense, but sometimes economics, politics, sometimes organized crime. And I do get invited to speak here and there and seem to get a number of television or radio interviews as well. That's really cool. I didn't say I get invited to good things, but I do get the occasional invitation. I used to think it was because I had such insight. Someone told me not all that long ago that actually, if you'll say yes to an interview, you're likely to get more of them. Because the people who book them, they just want to get somebody on. And I thought it was because of my particular wisdom. 00:02:09 GRANT NEWSHAM I'm joking a little bit.
But obviously, you must have something useful to say. But it is funny. There's one place in Singapore that calls me a lot. It's like their CNN. And they've been calling me probably eight years at least, and almost every time, I'll tell the presenters that basically they don't know what they're talking about. And I always think, well, this is the last one, but they keep calling me up. 00:02:34 JACK GAINES They must like you because you're the contrarian. 00:02:36 GRANT NEWSHAM Oh, I can frame things in a way that sort of suits broadcast and that sort of regular people can understand, you know, being a regular person myself. 00:02:47 JACK GAINES Yeah, you learn to disagree without offending. 00:02:49 GRANT NEWSHAM Usually. And it's always sort of a relief, actually, when you can have a different look at things. 00:02:56 JACK GAINES That's good. I always thought you were going to say it is a relief sometimes when you just peel the coat off and then yell at them. 00:03:02 GRANT NEWSHAM The facts speak for themselves. Right. And if it's a presenter, their role is different, and they will generally not have the substantive knowledge that most of the people on the show will have. Right. And so much of what I have to say is often not... in line with accepted wisdom, particularly when it comes to Japan. Sure. So it's often that I'll have to present a different take on things, but they don't seem to be offended. 00:03:27 JACK GAINES Right. You mostly talk about Japan in its current defense fashion or in its foreign policy actions. 00:03:33 GRANT NEWSHAM A lot of that, because people have a perception of Japan, for example, as a pacifist country. It cannot fight. It's peace-loving. Right. Etc. They have a nuclear allergy. You know, just the idea of nuclear weapons in Japan is out of the question. You often hear, well, their constitution won't let them fight. And none of those things are actually true. But it's the received wisdom. It's what people think. And when you simply point out the realities of Japan, that ultimately, at the end of the day, it's a country just like every other. And that the stereotypes about it really aren't correct when it comes to defense and security. In fact, they say that the Constitution won't let them have a military. You probably heard it. Yeah. That's the idea. And they don't even call it a military. But the fact is they've got a military, which, according to some ratings, is the fifth most powerful in the world. It depends on how you calculate it, of course. But they call it something else. And what is the actual distinction between offensive and defensive weapons? 00:04:35 JACK GAINES It's usually the strike space. If it's inside your own country defending, then it's a defense space. Once you go out and start taking out other people's cities and moving forces in. 00:04:44 GRANT NEWSHAM Well, for example, they don't have much of what you'd call power projection capability very far off their borders. But they do have a submarine fleet, say over 20 submarines. There's no reason you couldn't send them to the coast of China and start sinking ships. 00:04:59 JACK GAINES True. 00:05:00 GRANT NEWSHAM They've got F-16s. You can put long-range missiles on them and you can fly out a ways and cause people a lot of trouble. But their military really, I would say, is not so good at offense. It's not so good at defense either.
And that's something that comes as a surprise to a lot of people. 00:05:15 JACK GAINES Well, do they exercise defense and offense? 00:05:18 GRANT NEWSHAM Oh, they have exercises, training, and they put on a pretty good show, particularly when they have visitors come. But they really, until very recently, and even now, they can't do joint operations, which means the air, sea, and ground forces can't operate together. In fact, they don't even have a radio with which they can communicate easily. They have to jury-rig these connections. And that's something most people don't understand, because you look at it on paper: Japan has 250,000 people in its military, and it's got ships, aircraft, all of it modern and good stuff. 150,000 people in its ground self-defense force, their army. But the whole is not even the sum of its parts. If you imagine each of your limbs, your arms and your legs, each doing whatever it wants without the coordinating function provided by a brain. 00:06:10 JACK GAINES Sounds like me dancing. 00:06:12 GRANT NEWSHAM It would be, yeah. I think I can picture that, whereas I'm more of an Arthur Murray kind of guy. But it's like that. And nobody can believe that, because they think, well, this is the Japanese. It's this advanced modern country, big military, the rich country. And I mean, they can't even do these simple things. Right. The short answer is no, except in some limited circumstances. After 60 years of the U.S.-Japan defense relationship, 80 years after World War II, they still cannot do some of the basic things that a military needs to do, or do them very well, put it that way. But they do train, they exercise, the personnel quality is excellent. You know, we tend to say, well, we've got Japan as our ally, Japan has a military. But the reality is that the U.S. and Japanese forces cannot work very well together. There's one exception, and that's the two navies. The U.S. Navy and the Japanese Navy, called the Maritime Self-Defense Force, they actually do work well. And they show what's doable. 00:07:15 JACK GAINES They probably do dynamic exercises as well as structured ones, so they have to change, have to practice new orders and maneuvers. 00:07:22 GRANT NEWSHAM Well, the nature of naval operations is you can go out... into the sea, and you have more freedom to actually do stuff. But part of it actually was when Admiral Arleigh Burke, who was later chief of naval operations for many years, he was in charge in Japan. He basically laid down the ground rules, which were that the American Navy was going to treat the Japanese like friends, like allies. And that set the tone for everything. So they had more of a relationship of equals, people who wanted to operate together. And that is why they have a good relationship today, in my opinion. So as a result, after all these decades, the two militaries are not really very good at operating together. There's no joint headquarters. There never has been in Japan. At best, they've operated in isolation. Do they recognize they don't have a joint access? Oh, they know. The Japanese military knows this. And US Indo-PACOM has not pushed the issue. And then you had... the State Department side, on the civilian side, people saying, well, if we ask the Japanese to get better at defense matters, well, they'll get angry. And if they do, then the Chinese will be mad. So on the U.S. side, we're thinking of at least 10 reasons why Japan cannot improve its defenses. That's changed enough in recent years.
But you see how many decades we've lost. 00:08:51 JACK GAINES Right. I can see part of what the State Department is saying, in that a lot of those countries along the Asian coast were under Japanese rule during World War II. They're concerned that showing favor and coordinating with them on defense might offend places like the Philippines or Korea. It is a concern to be weighed, but I don't know how much weight you would put on it. 00:09:14 GRANT NEWSHAM I wouldn't give it hardly any. With the Japanese, when you actually think about it, I would say within 30 years of the end of the war, but certainly today, and for at least the last 20 years. The new century. Even before that. The Japanese and World War II is not really an issue in almost all of Asia. The Chinese, of course... Play it up. That's a good way to put it. Of course, they do remember what the Japanese did, and it was barbaric. Although the Chinese Communist Party afterwards killed 50 million Chinese in peacetime and good weather, which the Imperial Japanese Army couldn't have dreamed of doing. But World War II is an issue in China. Korea as well, the relationship is dicey. Up to a point. 00:10:05 JACK GAINES I mean, little old ladies go and sit in front of the embassy still. 00:10:06 GRANT NEWSHAM There are, and then you just had a South Korean amphibious ship come to Yokosuka in Tokyo on a visit. In Korea, there's a fundamental sort of suspicion of the Japanese. Sometimes it is a real dislike. But for most people, it's not a big issue. But except for those two countries, you go down the list in Asia, and there is no aftereffect of World War II. I find the Filipinos get along very well with the Japanese. The Indonesians do. They, in fact, see the Japanese as being the people who freed them from the colonial yoke. Okay. The Malays, they actually didn't have that bad a time during the occupation. The Chinese in Malaysia did. So the Malaysians don't have any really hard feelings against the Japanese. Taiwan, same thing. They've got a very good relationship. And then there's one-plus billion Indians who actually have an excellent relationship with Japan and see Japan as real friends, and vice versa. So you're starting to get a good chunk of Asia which, as you can see, actually sees Japan as a good country, useful economically. It's been very generous. And they like to see a Japanese military that's strong enough, allied with the United States, able to deal with China. 00:11:27 JACK GAINES Right. And why would we have such a different balance as we do with Germany and Europe? Because no one's questioning this in Holland or in France. That's just another country. They freely trade, they freely access each other. So maybe the mindset just needs to shift to say the reform of Japan is just like Germany's, and we need to start treating them and partner nations the same and start advocating for a joint staff. 00:11:52 GRANT NEWSHAM And you could do that in an afternoon, but the Japanese will not speak up for themselves. An old New York Times reporter, Richard Halloran, I remember him telling me once that of all the people he ever dealt with in the world, the Japanese were the worst at explaining themselves. And there's a reticence which slows them down. But also the Americans are afraid to tell them what we need. And that is a huge problem, because if we don't tell them, the Japanese are not mind readers, and they won't do what we think we'd like them to do, but we're afraid to ask.
And in fact, one of the Japanese prime ministers in 1970, so 50-some years ago, gave some very good advice to the Americans. It was at the time the Americans were trying to put an aircraft carrier into Yokosuka, the naval base near Tokyo. They wanted to assign it there permanently. And the U.S. side was thinking of excuses why it was too hard for the Japanese. It'll cause political difficulties. The Japanese have an election coming up. The timing just isn't right. And finally, the Japanese side sent a message to the Americans saying, tell us what you need. And don't back down. And they said it out of exasperation, really. And it was the best advice the Americans have ever been given. And we've refused to follow it ever since then. And really, it's almost a cultural trait, sort of a Confucian system. They actually are happy to have experts tell them what they ought to do. Sure. Whereas we are more of the Socratic method. And it doesn't, it just doesn't work. That's why after all these years, the American and the Japanese forces, except for the navies, and except for missile defense, we really don't operate together anywhere near where we need to be. We're not even close. And another very interesting fact a lot of people don't know is the Japanese military missed its recruitment targets by about 50% last year. 00:13:50 JACK GAINES 5-0? 00:13:50 GRANT NEWSHAM 5-0. And it routinely misses them by 20-25%. So this, you can see, is a problem. It's now an older force, doesn't have enough people. In order to fulfill its missions, it would probably have to be twice as big, both personnel-wise and in terms of ships and hardware. Its war stocks are basically non-existent, and it doesn't know anything really about casualty care, combat casualty replacements, logistics. 00:14:20 JACK GAINES Well, if the media looks down on it and the political class looks down on it, it's not going to get a lot of positivity in the public mindset. So that's got to be part of it. It's not a vote-getter to push for a strong defense. 00:14:31 GRANT NEWSHAM I mean, if you're a politician, no one's going to say, he's the defense guy, let's give him our vote. People vote for other reasons. But after that horrific experience in World War II, for decades people didn't want to really think about defense, and they were glad to have the Americans around to handle it, particularly when it seemed like there wasn't any real threat anywhere. People were happy with that, and even the U.S. side didn't mind it as well. But I'd say it should have started to change at least 20 years ago. And it didn't until maybe four or five years ago. 00:15:10 JACK GAINES Well, when did the risk indicators really start popping up with China? 00:15:14 GRANT NEWSHAM I think by... 00:15:15 JACK GAINES It can't be back when Nixon went. 00:15:16 GRANT NEWSHAM Well, it should have, you know, I think. But about 2005 is when it was obvious what was coming. And even before that, if you knew what to look for. But as I said, some of us knew what needed to be done and what the problems were. And there were Japanese who did too. And that's why, when we put together their amphibious force, it was sort of an effort to address the shortcomings in Japan's self-defense force.
Also to improve the overall U.S.-Japan relationship, because it was so imbalanced. Right. Where the Japanese weren't doing anything near enough to defend themselves. And that over time creates a lot of friction in a relationship. So we were trying to address that with the amphibious force, and that was 2011, and we were pretty successful at that because we didn't ask permission from anybody. 00:16:10 JACK GAINES I was going to say, if you were successful, did you get fired? 00:16:11 GRANT NEWSHAM Well, it's not that people didn't try. 00:16:11 JACK GAINES Sorry, that was sarcastic. 00:16:15 GRANT NEWSHAM I was a reservist, so they couldn't quite get a bead on me. Yeah. And they didn't quite know what we were doing. And also you had people like General Gregson, who was then at... Department of Defense, who had been in Japan many years, and he knew the importance of it all. So he would provide some cover. But the real success there was because the Japanese side took the ideas and ran with it. And the Americans provided some cover and some know-how and some advice. But it was the Japanese who did that. Once the Japanese took on the mission, well, what are the Americans going to say? But I was even told that at Indo-PACOM there were people who had gotten wind of this and were very much opposed, because the idea was that the Japanese having an amphibious force was provocative. Not just provocative, but it was going to cause the Japanese to go on the rampage again, like in 1941. I'm not making this up. 00:17:11 JACK GAINES So when Germany starts building the Leopard 2, were they expected to go on a rampage too? 00:17:17 GRANT NEWSHAM No, those are Europeans. 00:17:19 JACK GAINES Oh, okay. You know how the Europeans are. 00:17:21 GRANT NEWSHAM But the fact that the Germans have been allowed back into polite society tells you something, and the Japanese are just as deserving of it as well. 00:17:30 JACK GAINES Did you see the movie Godzilla Minus One? No. It's an interesting portrayal of post-World War II Japan. And Godzilla, which is this giant monster, comes out of the sea, tears up Japan, and has an atomic breath that shoots off nuclear explosions, which sounds a lot like the United States in a mythological way. One thing that... the show did that was interesting is it kind of engaged the post-military era and talked about it. And it seemed like it was trying to reconcile the past with now and build out a notion that the military is okay, that after the war, there were good things that happened and that we should embrace a military in the future. So there might be some societal impulses out there that are promoting and supporting a more built-up military in Japan. 00:18:24 GRANT NEWSHAM Well, you're actually right. The public at large has always been pretty supportive of the military. For example, when they have open base days, when they put on so-called firepower demonstrations, which is like an exercise you can watch where they shoot off stuff, they're always oversubscribed. And people just pour into these things because they're interested. And the central government, or say the ruling class, are the ones who are gun-shy or really hesitant. But the public at large, you know, when you ask them, should Japan have a normal military? The replies to that are like 85%: well, yes, of course.
And I think they would be horrified if they knew the actual state of the Japanese military. I mentioned this to a Japanese politician last year, and he was horrified at the idea. And the public as well would have a similar reaction. Regular Japanese people, I'd say, have a pretty good understanding of what Japan needs to do to defend itself and of the importance of having a national defense, but the government doesn't explain it very well. When they do, the reaction... there's a Japanese expression, it's called, like, atarimae. And it means, like, well, yeah. It's like, duh. 00:19:42 JACK GAINES Ah, naturally. 00:19:42 GRANT NEWSHAM And that's what it means. Should Japan have a good defense? Atarimae. And yeah, what's the question here? But if you ask that question in the political world, then you'll get all sorts of hemming and hawing. They wanted nothing of that. By the late 70s, certainly by the 90s, they had sort of outlived that. But it was comfortable to continue with it, particularly if you're the government, because you don't have to spend money on defense. And the Americans are covering that. So it was as if the Americans were giving, I'd say, at least $50 billion a year in free defense coverage, probably more. And, you know, if you're a government, you think, well, why should we do anything different? And so they got used to that. We got used to it. And then at some point, the friction builds up where you just can't do that. And the Japanese themselves start to be resentful. 00:20:37 JACK GAINES Right. Keeping them handicapped, probably. 00:20:40 GRANT NEWSHAM Yeah. You know, they're not letting us be self-fulfilled. I think that's sort of the marriage counselor's analysis. And so that imbalance was such that it was creating huge problems in the relationship. But the defense relationship, you know, pointing out, well, you know, you guys really aren't very good, except for the Navy. You know, and we can't work with you very well, except for the navies. And as a result, that's why we are where we are today. By now, if we had a more sort of capable U.S.-Japan defense relationship, where the two services could... operate together, and we're conducting a joint defense of Japan and the surrounding areas, which includes, say, Taiwan even, that would have, I think, deterred a lot of the problems that we're having. But by pretending everything was okay, we've gotten ourselves in a position where we now face a real threat out there. And we're trying to make up for lost time. And I don't know which side I would bet on. I'd bet on ours because I'm an American. But that's how out of whack it has gotten. It used to be, maybe till 20 years ago, we were in pretty good shape. But you can see that advantage eroding. And nowadays, depending on how a fight were to take place, if it does take place, it would be less of a sure thing than it once was. And that's, I think, putting it very nicely. 00:22:04 JACK GAINES Well, tell me about the threat. What are you seeing? 00:22:05 GRANT NEWSHAM It's China, led by the Chinese Communist Party. (Part II) 00:00:02 JACK GAINES Welcome to the 1CA Podcast. This is your host, Jack Gaines. 1CA is a product of the Civil Affairs Association and brings in people who are current or former military, diplomats, development officers, and field agents to discuss their experiences on the ground with the partner nation's people and leadership.
Our goal is to inspire anyone interested in working the last three feet of foreign relations. To contact the show, email us at capodcasting@gmail.com, or look us up on the Civil Affairs Association website at www.civilaffairsassoc.org. I'll have those in the show notes. Please welcome back Grant Newsham, retired Marine colonel and author of When China Attacks, A Warning to America. Grant came on the show to discuss the state of the Japanese defense forces and the PRC threat. This is the second in a two-part episode, so let's get started. 00:00:56 GRANT NEWSHAM It's China, led by the Chinese Communist Party. They've built up a military which is just gradually but steadily expanding its reach and its coverage. Compared to, say, 2020, now instead of just being able to operate a little bit off their coast, they can reach Guam, Hawaii, and onwards. The Chinese military intends to develop into a force able to operate worldwide, just like the U.S. can. And their ship numbers. They've got more than we do. Something like 350 versus our 290. 00:01:37 GRANT NEWSHAM Well, unfortunately, in terms of quality, they're pretty good. And they know what they need to do, and they're getting better. For some things like carrier operations, they're not at our level yet. But if you look at the speed at which they have developed, they're in pretty good shape. But let's just say the South China Sea, which is one and a half times the size of the Mediterranean. Whenever U.S. ships go in there, and we do publicize our transits and operations and exercises, for every ship we put in, the Chinese can match it with at least 10. And that doesn't include ground-based and air-launched anti-ship missiles, for example. 00:02:16 GRANT NEWSHAM So if the Chinese pick their spot, pick their timing, I wouldn't want to be the destroyer skipper who's got 20 anti-ship missiles coming at him. And he's got eight seconds to figure out what to do. The point is they have had de facto control of the South China Sea since about seven, eight years ago. And yes, we can go in there. But once we're gone, the Chinese close back up and they've pretty much got it. Beyond that, it's harder for them, but they're steadily expanding their capability to conduct operations.
It's a military that has its problems, like every military, but they are trying to correct them. They are building a military which they want to be able to defeat a country that has aircraft carriers, which is us. In many respects, they are our equals. Have you ever heard a Korean War veteran who said he wanted to fight the Chinese again? And those were the Chinese of the 1950s. It's a very different place today. And I'm not saying that they can't be defeated, but they're an adversary that could give us a lot of trouble. Their intentions are to first dominate regionally and locally, and then push that farther afield to all the Pacific and beyond. And they're setting up the infrastructure worldwide, with ports and airfields, to do that. They're investing in long-range transports, these naval replenishment ships that you need to be able to operate the way we do, and that's their mission. And we have pretended until about 2017 that this wasn't the case. In fact, you couldn't even say China was an adversary. And guys who did, like Captain James Fennell, who was the head of intelligence at PacFleet, he was cashiered. He was forced to retire. The then administration hated him and got rid of him. And that's how bad it was. And I saw this all firsthand. Experienced some of it, not as bad as Captain Fennell did. So we've allowed them to build up into a military that we had better take very seriously. And the Chinese do see this as a tool for their...
00:04:30 GRANT NEWSHAM The idea is if you have a powerful military, well, that's when you can lean on people. That's when you can intimidate people. You can dominate them. And they're happy with the psychological domination, political domination. It doesn't have to be occupying, but dominating. And they're in every field, from outer space, long-range missiles, undersea warfare, really putting a lot of effort into it. And there is a certain sort of ingenuity that goes into their operations. Well, they can't invent things. They don't develop things on their own. They just steal things. So they reverse engineer things. 00:05:09 GRANT NEWSHAM Well, it... You know, it's kind of true up to a point, but look at us. The Yankee ingenuity was taking stolen British technology and making it better. And so the fact they may not be as innovative as us, well, sometimes it just has to be good enough. So they've now got a military to combine with this desire for political domination, as well as considering their economic power as just as important as the military. And you see how successful that has been. When you have U.S. business leaders giving Xi Jinping two standing ovations last November in San Francisco, that tells you how successful they've been on the economic front. And the Japanese know they have a huge problem. You would often hear the Japanese military saying that Taiwan's defense is Japan's defense. But I've even seen the calculations they did, like at which point the Japanese Navy would be outmatched by the Chinese Navy. And they had the date almost down to when it was. And our side, we were late recognizing this. We refused to.
00:07:18 GRANT NEWSHAM Yeah, as he described it well. Ultimately, the military part of the fight is extremely important. But it's almost a sideshow to the other activities, the other fight that China's been waging for the last 30, 40 years, almost ever since we opened up to them. And that has been generally referred to as political warfare, with components being economic warfare, financial warfare, drug warfare, which is the word the Chinese use. So all this fentanyl that's been pumped into America for the last decade, that's killed up towards a million Americans, almost all of it comes from China. And they know exactly what they're doing. And so every year they're taking like the equivalent of two or three divisions off the battlefield. You've destroyed neighborhoods. That's successful economic warfare. Drive 30 miles up the road to Baltimore. Go to Sparrows Point, Baltimore, where there used to be steel mills. And now you have Amazon fulfillment centers at best. But you've seen just the gutting of American society, the so-called working class, the Rust Belt. And this was done intentionally. In large part, Chinese economic warfare directed at the United States. And then you have cyber warfare as well. You have cyber espionage. Well beyond what countries normally do. But they have used it very effectively. And the Chinese just... recently put out their new fighter. That's called the J-35. That is a dig at the Americans. Because it is based on stolen blueprints for the F-35. 00:08:55 GRANT NEWSHAM I don't know. It's been a while. 00:09:02 GRANT NEWSHAM Unfortunately, copying is leapfrogging over stages. Yes, it may take them a little longer, but they will hopscotch through it. And so... So I take it pretty seriously. Their Y-20, their long-range transport, is basically the C-17. And they've just been immensely successful at this sort of espionage.
And at the same time, we've done nothing to push back on them. Then there's the propaganda angle of this, which, in the good old Jesuit meaning of the word, just means to explain yourself or articulate your position so people understand it. And they've been very successful in getting Americans to buy the Chinese line. China's rise is peaceful. China's never attacked anybody. It's not true. All great nations do this. So who are we to complain? America has its problems, too. Who are we to complain about the Chinese taking live organs out of Uyghurs and prisoners of conscience? And we've been able to convince ourselves that not only can we not do anything, we shouldn't do anything. This is changing. But you can see we were very late getting started. And this has all been done without firing a shot. Chinese economic inroads, which lead to political influence, are in, for example, South America and Africa. Just immense how fast that has come, how solid it is. In the Pacific Islands, something similar is going on. Look at the difficulties the Germans have had weaning themselves off of this Chinese addiction. And as a result, they have been able to improve their position politically, psychologically, economically, and they've been able to do this globally without having to use their military. 00:10:51 GRANT NEWSHAM Yeah, that's the idea. You don't want to. So our view of warfare is like a hundred-yard dash.
Wherever the two sides come to the line, shake loose, and then someone fires a gun, and then it's game on. To the Chinese, the war started long ago. And you're wearing down your opponent. You're weakening his ability to resist. You're creating chaos in his own country. There's a word called entropy, which is just breaking down. Entropic warfare is a word that sometimes gets used, for you're breaking down his ability to resist. And at the same time, of course, the Chinese are building up a military, which is very serious. Yes, it's not showing up off of San Diego just yet. But places closer to China, it's much more of an issue. Japan knows the problem they have with the People's Liberation Army. The Pacific Islands, Southeast Asia. You are seeing more of a Chinese presence. And the point is, when the time comes, you may not even be able to resist if the Chinese have done this other sort of warfare. 00:12:31 GRANT NEWSHAM That's exactly what it is. It's mental warfare. You're attacking the mind. You're attacking how people think about things. Some people use the word cognitive warfare. That's the popular word. Yeah, you're attacking the mind. And so you can see how well it worked. And the Russians had a much poorer hand to play than the Chinese do. Because we do so much business with China. And you see how hard it is to do things like ban TikTok. We can't even get that done. 00:13:03 GRANT NEWSHAM Look, 72 hours, if that. The Indians did it; we can do it. And you see how the Chinese successfully use what they call lawfare, which is using our own legal system. And the idea is that you get proxies, influential foreigners in your target country, to actually do your bidding for you. The Chinese have like five aces to play. The Russians might have one, but you can see how successful the Russians have been just with that.
00:13:41 GRANT NEWSHAM Uh-huh. 00:13:46 GRANT NEWSHAM Well, you're right about the Russians, but the Chinese understand that the term gray zone paralyzes Americans. We have no idea what to do, because our view of warfare is that until the shooting starts, we're not really at war. There's still hope of working something out. 00:14:05 GRANT NEWSHAM That has been our rote response for all these years: don't get the Chinese mad, don't provoke them. And we have convinced ourselves that we have to have Chinese help with fill-in-the-blank: North Korea, transnational crime, nuclear weapons proliferation, climate change, and therefore we cannot challenge the PRC because we won't get their cooperation. That's how we've effectively handcuffed ourselves. But when it comes to that so-called hybrid warfare, it's not all that complicated if you recognize what it is and how it fits into China's behavior, its strategy. But you also would do well to attack from other directions, where they're particularly vulnerable. And that is where you take advantage of the fact, for example, that the Chinese currency is not freely convertible, which means that outside of China, nobody really wants Chinese money. It's like the scrip where you can use it to buy caramel corn and go on the rides. 00:15:06 GRANT NEWSHAM That's it. Nobody wants it. So choke that off and China's got some real problems. Another is the just thoroughgoing corruption of China's ruling class. And most of them have wealth overseas, foreign bank accounts, relatives with green cards, some operate businesses overseas. And this is illegal.
00:15:31 GRANT NEWSHAM And this is what really scares them. Because in 2011 or 2012, the New York Times and Bloomberg actually put out some good stories about the overseas wealth of China's top people, including Xi Jinping's family. I've never seen a reaction from the Chinese like that one. This bothered them. 00:15:53 GRANT NEWSHAM More than anything else we've ever done. That's... 00:16:14 GRANT NEWSHAM One way to do it. Another way to do it, that would be a tactical thing. Say you were to release, say, every Friday, at 1 a.m., 1 o'clock or whenever, which of the top 50 Chinese Communist Party officials... And make sure it reached everywhere in China. The thing that the public really hates is this corruption, and by the top dogs. And that is something that really bothers them. And you note that the Chinese leadership is very willing to have the average Chinese citizen absorb any amount of punishment. And they even talk about it. 00:16:51 GRANT NEWSHAM But when it's personal, then they see it very differently. And this is one of the few ways to really make it personal for them: to capitalize on this corruption. So when we talk about... dealing with gray zone operations, we're probably not going to be all that successful, because they have more ships, they can be in more places. 00:17:14 GRANT NEWSHAM But expose that. They can do that. Have we made a concerted effort to expose Chinese bribery, the illicit payments, the corruption that they put into everywhere they go? Everywhere there's a Chinese presence, you have corruption of the society, the political class as well. And do we ever target that? Do we consider it a priority effort? I don't even think we consider it an effort at all. Exposure is the one thing that has a huge effect. This is why investigative journalists get big. It's why, like, Irish gangsters try to murder them, in Malta they get blown up, because they're effective, which is the thing that makes it very hard for corruption to work. And that's where I think
we have some real opportunities to make it very clear what's being done. And this is something that, if you expose it, you can really capitalize on it. Just make it too hard to do this. And it also gives oxygen to the honest people in a country. It gives them something to work with, to take on these repressive regimes, these corrupt regimes, administrations, and get rid of them and replace them with honest people. I've never met anywhere, anywhere I've been over the years, where people like to be cheaters, where they like their leaders to be corrupt. I just haven't met it, anywhere I've been. It's just nothing you can do. But it really has an effect. And that's where I think government resources could be effectively devoted. And particularly once you get local reporters in on it, once you get the honest locals in on it. And that's where I think we could be very effective. Corruption, as you've mentioned, that really is the grease to everything the Chinese communists do globally. Take it away, and then take away their access to dollars, convertible currency, and they've really got some problems. But they have played their hand very well to date. But in some ways it's a house of cards. I don't think it's that hard to take on. But the longer you wait, the harder it gets.
00:19:28 GRANT NEWSHAM In regards to U.S. policy, there really is a desire that the United States stays around in Asia, that it maintains its military might, and that it is able to effectively safeguard what you'd call freedom and consensual government. Because if you go around the region, nobody wants to be dominated by the PRC. But they do have a huge advantage, particularly economically, in that they're seen by leaders and business people in a lot of these countries as really the source of... some wealth, some prosperity. And we would do well, for example, to see the fight as just as much an economic one as a military one. Because we could build up our military, rebuild it, and we could have 800 ships in the Navy, and still lose. If we don't fight on these other fronts, it'll be: we don't want you here, because we're doing too much business with China. And that's where the U.S., along with its friends, the Japanese, the Koreans, the Indians, the Australians, we would do well to operate together more and to see the economic front and the political warfare front as a priority effort as much as, if not more than, the military. 00:20:45 GRANT NEWSHAM They have a role to play if they're properly harnessed. But you do know that these days you don't see the Yankee trader that used to exist. You'd run into Americans everywhere trying to sell something, to do business. Not so much these days. And we've almost ceded the far-flung parts of the world. Because, well, the return on investment isn't enough. That's not an attractive enough proposition. Well, then let's make it one. Plus, you do have, say, the Japanese, the Indians, who are much better at operating in these places, to put it together into a coherent plan. Understand what it is, political warfare, and not just block the Chinese political warfare effort, but actually have our own campaign. And it really is worth doing some homework, I think, for a lot of people, into what political warfare is. One sees the opportunities, but it takes a certain type of person who's good at it, versus a civil affairs guy. He's going to see parts of the battlefield in a different way. Yes, sometimes you want the tank. But then there's this other part of it all.
But obviously, you must have something useful to say. But it is funny. There's one place in Singapore that calls me a lot. It's like their CNN. And they've been calling me probably eight years at least, and almost every time I'll tell the presenters that basically they don't know what they're talking about. And I always think, well, this is the last one, but they keep calling me up. 00:02:34 JACK GAINES They must like you because you're the contrarian. 00:02:36 GRANT NEWSHAM Oh, I can frame things in a way that sort of suits broadcast and that regular people can understand, you know, being a regular person myself. 00:02:47 JACK GAINES Yeah, you learn to disagree without offending. 00:02:49 GRANT NEWSHAM Usually. And it's always sort of a relief, actually, when you can have a different look at things. 00:02:56 JACK GAINES That's good. I always thought you were going to say it is a relief sometimes when you just peel the coat off and then yell at them. 00:03:02 GRANT NEWSHAM The facts speak for themselves. And if it's a presenter, their role is different, and they will generally not have the substantive knowledge that most of the people on the show will have. So much of what I have to say is often not in line with accepted wisdom, particularly when it comes to Japan. So it's often that I'll have to present a different take on things, but they don't seem to be offended. 00:03:27 JACK GAINES Right. You mostly talk about Japan in its current defense fashion or in its foreign policy actions. 00:03:33 GRANT NEWSHAM A lot of that, because people have a perception of Japan, for example, as a pacifist country. It cannot fight. It's peace-loving. They have a nuclear allergy; just the idea of nuclear weapons in Japan is out of the question. You often hear, well, their constitution won't let them fight. And none of those things are actually true. But it's the received wisdom. It's what people think. And when you simply point out the realities of Japan, that ultimately, at the end of the day, it's a country just like every other, the stereotypes about it really aren't correct when it comes to defense and security. In fact, they use the idea that the Constitution won't let them have a military. You've probably heard it. Yeah, that's the idea. And they don't even call it a military. But the fact is they've got a military which, according to some ratings, is the fifth most powerful in the world. It depends on how you calculate it, of course. But they call it something else. And what is the actual distinction between offensive and defensive weapons? 00:04:35 JACK GAINES It's usually the strike space. If it's inside your own country defending, then it's a defense space. Once you go out and start taking out other people's cities and moving forces in. 00:04:44 GRANT NEWSHAM Well, for example, they don't have much of what you'd call power projection capability very far off their borders. But they do have a submarine fleet, say over 20 submarines. There's no reason you couldn't send them to the coast of China and start sinking ships. 00:04:59 JACK GAINES True. 00:05:00 GRANT NEWSHAM They've got F-16s. You can put long-range missiles on them and fly out a ways and cause people a lot of trouble. But their military really, I would say, is not so good at offense. It's not so good at defense either.
And that's something that comes as a surprise to a lot of people. 00:05:15 JACK GAINES Well, do they exercise defense and offense? 00:05:18 GRANT NEWSHAM Oh, they have exercises, training, and they put on a pretty good show, particularly when they have visitors come. But really, until very recently, and even now, they can't do joint operations, which means the air, sea, and ground forces can't operate together. In fact, they don't even have a radio with which they can communicate easily. They have to jury-rig these connections. And that's something most people don't understand, because you look at it on paper: Japan has 250,000 people in its military, and it's got ships and aircraft, all of it modern and good stuff, 150,000 people in its ground self-defense force, their army. But it's not even the sum of its parts. Imagine each of your limbs, your arms and your legs, each doing whatever it wants without the coordinating function provided by a brain. 00:06:10 JACK GAINES Sounds like me dancing. 00:06:12 GRANT NEWSHAM It would be, yeah. I can picture that, whereas I'm more of an Arthur Murray kind of guy. But it's like that. And nobody can believe it, because they think, well, this is the Japanese, this advanced, modern, rich country with a big military. Can they really not do these simple things? The short answer is no, except in some limited circumstances. After 60 years of the U.S.-Japan defense relationship, 80 years after World War II, they still cannot do some of the basic things that a military needs to do, or do them very well, put it that way. But they do train, they exercise, and the personnel quality is excellent. You know, we tend to say, well, we've got Japan as our ally, Japan has a military. But the reality is that the U.S. and Japanese forces cannot work very well together. There's one exception, and that's the two navies. The U.S. Navy and the Japanese Navy, called the Maritime Self-Defense Force, actually do work well together. And they show what's doable. 00:07:15 JACK GAINES They probably do dynamic exercises as well as structured ones, so they have to change, have to practice new orders and maneuvers. 00:07:22 GRANT NEWSHAM Well, the nature of naval operations is that you can go out into the sea, and you have more freedom to actually do stuff. But part of it actually was that when Admiral Arleigh Burke, who was later chief of naval operations for many years, was in charge in Japan, he basically laid down the ground rules, which were that the American Navy was going to treat the Japanese like friends, like allies. And that set the tone for everything. So they had more of a relationship of equals, people who wanted to operate together. And that is why they have a good relationship today, in my opinion. So as a result, after all these decades, the two militaries are not really very good at operating together. There's no joint headquarters. There never has been in Japan. At best, they've operated in isolation. Do they recognize they don't have a joint headquarters? Oh, they know. The Japanese military knows this. And US Indo-PACOM has not pushed the issue. And then you had the State Department side, the civilian side, people saying, well, if we ask the Japanese to get better at defense matters, they'll get angry. And if they do, then the Chinese will be mad. So on the U.S. side, we're thinking of at least 10 reasons why Japan cannot improve its defenses. That's changed in recent years.
But you see how many decades we've lost. 00:08:51 JACK GAINES Right. I can see part of what the State Department is saying, in that a lot of those countries along the Asian coast were under Japanese rule during World War II. They're concerned that showing favor and coordinating with Japan on defense might offend places like the Philippines or Korea. It is a concern to be weighed, but I don't know how much weight you would put on it. 00:09:14 GRANT NEWSHAM I wouldn't give it hardly any. With the Japanese, when you actually think about it, I would say within 30 years of the end of the war, but certainly today, and for at least the last 20 years. The new century. Even before that. The Japanese and World War II are not really an issue in almost all of Asia. The Chinese, of course... Play it up. That's a good way to put it. Of course, they do remember what the Japanese did, and it was barbaric. Although the Chinese Communist Party afterwards killed 50 million Chinese in peacetime and good weather, which the Imperial Japanese Army couldn't have dreamed of doing. But World War II is an issue in China. Korea as well; the relationship is dicey, up to a point. 00:10:05 JACK GAINES I mean, little old ladies still go and sit in front of the embassy. 00:10:06 GRANT NEWSHAM There are, and then you just had a South Korean amphibious ship come to Yokosuka, near Tokyo, on a visit. In Korea, there's a fundamental sort of suspicion of the Japanese. Sometimes it is a real dislike. But for most people, it's not a big issue. Except for those two countries, you go down the list in Asia, and there is no after-effect of World War II. I find the Filipinos get along very well with the Japanese. The Indonesians do; they, in fact, see the Japanese as the people who freed them from the colonial yoke. The Malays actually didn't have that bad a time during the occupation. The Chinese in Malaysia did. So the Malaysians don't have any really hard feelings against the Japanese. Taiwan, same thing; they've got a very good relationship. And then there's one-plus billion Indians who actually have an excellent relationship with Japan and see Japan as real friends, and vice versa. So you're starting to get a good chunk of Asia which, as you can see, actually sees Japan as a good country, useful economically. It's been very generous. And they like to see a Japanese military that's strong enough, allied with the United States, able to deal with China. 00:11:27 JACK GAINES Right. And why would we have such a different balance than we do with Germany in Europe? No one's questioning this in Holland or in France. That's just another country. They freely trade, they freely access each other. So maybe the mindset just needs to shift to say the reform of Japan is just like Germany's, and we need to start treating them and partner nations the same and start advocating for a joint staff. 00:11:52 GRANT NEWSHAM And you could do that in an afternoon, but the Japanese will not speak up for themselves. An old New York Times reporter, Richard Halloran, told me once that of all the people he ever dealt with in the world, the Japanese were the worst at explaining themselves. There's a reticence which slows them down. But also the Americans are afraid to tell them what we need. And that is a huge problem, because if we don't tell them, the Japanese are not mind readers; they won't do what we think we'd like them to do, and we're afraid to ask.
And in fact, one of the Japanese prime ministers in 1970, so 50-some years ago, gave some very good advice to the Americans. It was at the time the Americans were trying to put an aircraft carrier into Yokosuka, the naval base near Tokyo. They wanted to assign it there permanently. And the U.S. side was thinking of excuses why it was too hard for the Japanese: it'll cause political difficulties, the Japanese have an election coming up, the timing just isn't right. And finally, the Japanese side sent a message to the Americans saying, tell us what you need, and don't back down. And they said it out of exasperation, really. And it was the best advice the Americans have ever been given. And we've refused to follow it ever since. And really, it's almost a cultural trait, sort of a Confucian system; they actually are happy to have experts tell them what they ought to do. Whereas we are more of the Socratic method. And it just doesn't work. That's why after all these years, the American and Japanese forces, except for the navies, and except for missile defense, really don't operate together anywhere near where we need to be. We're not even close. And another very interesting fact a lot of people don't know: the Japanese military missed its recruitment targets by about 50% last year. 00:13:50 JACK GAINES 5-0? 00:13:50 GRANT NEWSHAM 5-0. And it routinely misses them by 20-25%. So this, you can see, is a problem. It's now an older force that doesn't have enough people. In order to fulfill its missions, it would probably have to be twice as big, both personnel-wise and in terms of ships and hardware. Its war stocks are basically non-existent, and it doesn't really know anything about casualty care, combat casualty replacements, or logistics. 00:14:20 JACK GAINES Well, if the media looks down on it and the political class looks down on it, it's not going to get a lot of positivity in the public mindset. So that's got to be part of it. It's not a vote-getter to push for a strong defense. 00:14:31 GRANT NEWSHAM I mean, if you're a politician, no one's going to say, he's the defense guy, let's give him our vote. But people vote for other reasons. And you do understand that, after that horrific experience in World War II, for decades people didn't really want to think about defense, and they were glad to have the Americans around to handle it, particularly when it seemed like there wasn't any real threat anywhere. People were happy with that, and even the U.S. side didn't mind it. But I'd say it should have started to change at least 20 years ago. And it didn't until maybe four or five years ago. 00:15:10 JACK GAINES Well, when did the risk indicators really start popping up with China? 00:15:14 GRANT NEWSHAM I think by... 00:15:15 JACK GAINES It can't be back when Nixon went. 00:15:16 GRANT NEWSHAM Well, it should have, you know, I think. But about 2005 is when it was obvious what was coming. And even before that, if you knew what to look for. But as I said, some of us knew what needed to be done and what the problems were. And there were Japanese who did too. And that's why, when we put together their amphibious force, it was sort of an effort to address the shortcomings in Japan's self-defense force.
Also to improve the overall U.S.-Japan relationship, because it was so imbalanced, with the Japanese not doing anywhere near enough to defend themselves. And that over time creates a lot of friction in a relationship. So we were trying to address that with the amphibious force, and that was 2011. We were pretty successful at that because we didn't ask permission from anybody. 00:16:10 JACK GAINES I was going to say, if you were successful, did you get fired? 00:16:11 GRANT NEWSHAM Well, it's not that people didn't try. 00:16:11 JACK GAINES Sorry, that was sarcastic. 00:16:15 GRANT NEWSHAM But I was a reservist, so they couldn't quite get a bead on me. And they didn't quite know what we were doing. And also you had people like General Gregson, who was then at... Department of Defense, who had been in Japan many years, and he knew the importance of it all. So he would provide some cover. But the real success there was because the Japanese side took the ideas and ran with them. The Americans provided some cover and some know-how and some advice. But it was the Japanese who did it. Once the Japanese took on the mission, well, what are the Americans going to say? But I was even told that at Indo-PACOM there were people who had gotten wind of this and were very much opposed, because the idea of the Japanese having an amphibious force was provocative. Not just provocative, but it was going to cause the Japanese to go on the rampage again, like in 1941. I'm not making this up. 00:17:11 JACK GAINES So when Germany starts building the Leopard 2, were they expected to go on a rampage too? 00:17:17 GRANT NEWSHAM No, those are Europeans. 00:17:19 JACK GAINES Oh, okay. You know how the Europeans are. 00:17:21 GRANT NEWSHAM But the fact that Germans have been allowed back into polite society tells you something, and the Japanese are just as deserving of it as well. 00:17:30 JACK GAINES Did you see the movie Godzilla Minus One? No. It's an interesting portrayal of post-World War II Japan. Godzilla, this giant monster, comes out of the sea, tears up Japan, and has an atomic breath that sets off nuclear explosions, which sounds a lot like the United States in a mythological way. One thing the show did that was interesting is that it kind of engaged the post-military era and talked about it. It seemed like it was trying to reconcile the past with now and build out a notion that the military is okay, that after the war there were good things that happened, and that Japan should embrace a military in the future. So there might be some societal impulses out there promoting and supporting a more built-up military in Japan. 00:18:24 GRANT NEWSHAM Well, you're actually right. The public at large has always been pretty supportive of the military. For example, when they have open base days, when they put on so-called firepower demonstrations, which are like an exercise you can watch where they shoot off stuff, they're always oversubscribed. People just pour into these things because they're interested. It's the central government, or say the ruling class, who are gun-shy or really hesitant. But the public at large, when you ask them, should Japan have a normal military? The replies to that are like 85%: well, yes, of course.
And I think they would be horrified if they knew the actual state of the Japanese military. I mentioned this to a Japanese politician last year, and he was horrified at the idea. And the public would have a similar reaction. Regular Japanese people, I'd say, have a pretty good understanding of what Japan needs to do to defend itself and of the importance of having a national defense, but the government doesn't explain it very well. When it does, the reaction, there's a Japanese expression for it, atarimae. It means, like, well, yeah. It's like, duh. 00:19:42 JACK GAINES Ah, naturally. 00:19:42 GRANT NEWSHAM And that's what it means. Should Japan have a good defense? Atarimae. What's the question here? But if you ask that question in the political world, you'll get all sorts of hemming and hawing. They want nothing of that. By the late 70s, certainly by the 90s, they had sort of outlived that, but it was comfortable to continue with it, particularly if you're the government, because you don't have to spend money on defense; the Americans are covering that. So it was as if the Americans were giving, I'd say, at least $50 billion a year in free defense coverage, at least, probably more. And, you know, if you're a government, you think, well, why should we do anything different? And so they got used to that. We got used to it. And then at some point, the friction builds up to where you just can't do that, and the Japanese themselves start to be resentful. 00:20:37 JACK GAINES Right. Keeping them handicapped, probably. 00:20:40 GRANT NEWSHAM Yeah. You know, they're not letting us be self-fulfilled. I think that's sort of the marriage counselor's analysis. And so that imbalance was such that it was creating huge problems in the relationship. But the defense relationship, you know, pointing out, well, you guys really aren't very good, except for the Navy, and we can't work with you very well, except for the navies. And as a result, that's why we are where we are today. By now, if we had a more capable U.S.-Japan defense relationship, where the two services could operate together, and we were conducting a joint defense of Japan and the surrounding areas, which includes, say, Taiwan even, that would have, I think, deterred a lot of the problems that we're having. But by pretending everything was okay, we've gotten ourselves into a position where we now face a real threat out there, and we're trying to make up for lost time. And I don't know which side I would bet on. I'd bet on ours because I'm an American. But that's how out of whack it has gotten. It used to be, maybe until 20 years ago, we were in pretty good shape. But you can see that advantage eroding. And nowadays, depending on how a fight were to take place, if it does take place, it would be less of a sure thing than it once was. And that's, I think, putting it very nicely. 00:22:04 JACK GAINES Well, tell me about the threat. What are you seeing? 00:22:05 GRANT NEWSHAM It's China, led by the Chinese Communist Party. (Part II) 00:00:02 JACK GAINES Welcome to the 1CA Podcast. This is your host, Jack Gaines. 1CA is a product of the Civil Affairs Association and brings in people who are current or former military, diplomats, development officers, and field agents to discuss their experiences on the ground with the partner nation's people and leadership.
Our goal is to inspire anyone interested in working the last three feet of foreign relations. To contact the show, email us at capodcasting@gmail.com, or look us up on the Civil Affairs Association website at www.civilaffairsassoc.org. I'll have those in the show notes. Please welcome back Grant Newsham, retired Marine colonel and author of When China Attacks, A Warning to America. Grant came on the show to discuss the state of the Japanese defense forces and the PRC threat. This is the second of a two-part episode, so let's get started. 00:00:56 GRANT NEWSHAM It's China, led by the Chinese Communist Party. They've built up a military which is just gradually but steadily expanding its reach and its coverage. Compared to, say, 2020, instead of just being able to operate a little bit off their coast, they can now reach Guam, Hawaii, and onwards. The Chinese military intends to develop into a force able to operate worldwide, just like the U.S. can. And their ship numbers: they've got more than we do, something like 350 versus our 290. 00:01:37 GRANT NEWSHAM Unfortunately, in terms of quality, they're pretty good. And they know what they need to do, and they're getting better. For some things, like carrier operations, they're not at our level yet. But if you look at the speed at which they have developed, they're in pretty good shape. Let's just take the South China Sea, which is one and a half times the size of the Mediterranean. Whenever U.S. ships go in there, and we do publicize our transits and operations and exercises, for every ship we put in, the Chinese can match it with at least 10. And that doesn't include ground-based and air-launched anti-ship missiles, for example. So if the Chinese pick their spot, pick their timing, I wouldn't want to be the destroyer skipper who's got 20 anti-ship missiles coming at him and eight seconds to figure out what to do. The point is they have had de facto control of the South China Sea since about seven or eight years ago. And yes, we can go in there. But once we're gone, the Chinese close back up, and they've pretty much got it. Beyond that, it's harder for them, but they're steadily expanding their capability to conduct operations.
It's a military that has its problems, like every military, but they are trying to correct them. They are building a military which they want to be able to defeat a country that has aircraft carriers, which is us. In many respects, they are our equals. Have you ever heard a Korean War veteran who said he wanted to fight the Chinese again? And those were the Chinese of the 1950s. It's a very different place today. I'm not saying that they can't be defeated, but they're an adversary that could give us a lot of trouble. Their intentions are to first dominate regionally and locally, and then push that farther afield to all of the Pacific and beyond. And they're setting up the infrastructure worldwide, with ports and airfields, to do that. They're investing in long-range transports and the naval replenishment ships that you need to be able to operate the way we do, and that's their mission. And we pretended until about 2017 that this wasn't the case. In fact, you couldn't even say China was an adversary. And guys who did, like Captain James Fanell, who was the head of intelligence at Pacific Fleet, he was cashiered. He was forced to retire. The then administration hated him and got rid of him. And that's how bad it was. And I saw this all firsthand, and experienced some of it, not as badly as Captain Fanell did. So we've allowed them to build up into a military that we had better take very seriously. And the Chinese do see this as a tool for their...
00:04:30 GRANT NEWSHAM The idea is if you have a powerful military, well, that's when you can lean on people. That's when you can intimidate people. You can dominate them. And they're happy with psychological domination, political domination. It doesn't have to be occupying, but dominating. And they're in every field, from outer space to long-range missiles to undersea warfare, really putting a lot of effort into it. And there is a certain sort of ingenuity that goes into their operations. Well, they can't invent things. They don't develop things on their own. They just steal things and reverse engineer them. 00:05:09 GRANT NEWSHAM Well, it's kind of true up to a point, but look at us. The Yankee ingenuity was taking stolen British technology and making it better. So the fact that they may not be as innovative as us, well, sometimes it just has to be good enough. So they've now got a military to combine with this desire for political domination, and they consider their economic power just as important as the military. And you see how successful that has been. When you have U.S. business leaders giving Xi Jinping two standing ovations last November in San Francisco, that tells you how successful they've been on the economic front. And the Japanese know they have a huge problem. You would often hear the Japanese military saying that Taiwan's defense is Japan's defense. I've even seen the calculations they did, like at which point the Japanese Navy would be outmatched by the Chinese Navy. And they had the date almost down to when it was. And our side, we were late recognizing this. We refused to.
00:07:18 GRANT NEWSHAM Yeah, as he described it well. Ultimately, the military part of the fight is extremely important, but it's almost a sideshow to the other activities, the other fight that China's been waging for the last 30, 40 years, almost ever since we opened up to them. And that has been generally referred to as political warfare, with components being economic warfare, financial warfare, drug warfare, which is the word the Chinese use. So all this fentanyl that's been pumped into America for the last decade, which has killed upwards of a million Americans, almost all of it comes from China. And they know exactly what they're doing. Every year they're taking the equivalent of two or three divisions off the battlefield. You've destroyed neighborhoods. That's successful economic warfare. Drive 30 miles up the road to Baltimore. Go to Sparrows Point, Baltimore, where there used to be steel mills, and now you have Amazon fulfillment centers at best. You've seen just the gutting of American society, the so-called working class, the Rust Belt. And this was done intentionally, in large part by Chinese economic warfare directed at the United States. And then you have cyber warfare as well. You have cyber espionage, well beyond what countries normally do. And they have used it very effectively. The Chinese just recently put out their new fighter, called the J-35. That is a dig at the Americans, because it is based on stolen blueprints for the F-35. I don't know. It's been a while. 00:09:02 GRANT NEWSHAM Unfortunately, copying means leapfrogging over stages. Yes, it may take them a little longer, but they will hopscotch through it. So I take it pretty seriously. Their Y-20, their long-range transport, is basically the C-17. And they've just been immensely successful at this sort of espionage.
And at the same time, we've done nothing to push back on them. Then there's the propaganda angle of this, which in the good old Jesuit meaning of the word just means to explain yourself or articulate your position so people understand it. They've been very successful in getting Americans to buy the Chinese line: China's rise is peaceful. China's never attacked anybody. It's not true. All great nations do this, so who are we to complain? America has its problems, too. Who are we to complain about the Chinese taking live organs out of Uyghurs and prisoners of conscience? And we've been able to convince ourselves that not only can we not do anything, we shouldn't do anything. This is changing, but you can see we were very late getting started. And this has all been done without firing a shot. Chinese economic inroads, which lead to political influence, in, for example, South America and Africa: it's just immense how fast that has come, how solid it is. In the Pacific Islands, something similar is going on. Look at the difficulties the Germans have had weaning themselves off of this Chinese addiction. And as a result, they have been able to improve their position politically, psychologically, economically, and they've been able to do this globally without having to use their military. 00:10:51 GRANT NEWSHAM Yeah, that's the idea. You don't want to. So our view of warfare is like a hundred-yard dash.
The two sides come to the line, shake loose, and then someone fires a gun, and then it's game on. To the Chinese, the war started long ago. And you're wearing down your opponent. You're weakening his ability to resist. You're creating chaos in his own country. There's a word, entropy, which just means breaking down. Entropic warfare is a term that sometimes gets used, for you're breaking down his ability to resist. And at the same time, of course, the Chinese are building up a military, which is very serious. Yes, it's not showing up off of San Diego just yet. But in places closer to China, it's much more of an issue. Japan knows the problem it has with the People's Liberation Army. In the Pacific Islands, in Southeast Asia, you are seeing more of a Chinese presence. And the point is, when the time comes, you may not even be able to resist if the Chinese have done this other sort of warfare. 00:12:31 GRANT NEWSHAM That's exactly what it is. It's mental warfare. You're attacking the mind. You're attacking how people think about things. Some people use the word cognitive warfare. That's the popular word. Yeah, you're attacking the mind. And you can see how well it worked. And the Russians had a much poorer hand to play than the Chinese do, because we do so much business with China. And you see how hard it is to do things like ban TikTok. We can't even get that done. 00:13:03 GRANT NEWSHAM Look, 72 hours, if that, for the Indians to do it. We can do it. And you see how the Chinese successfully use what they call lawfare, which is using our own legal system. The idea is that you get proxies, influential foreigners in your target country, to actually do your bidding for you. The Chinese have like five aces to play. The Russians might have one, but you can see how successful the Russians have been just with that.
00:13:46 GRANT NEWSHAM Well, you're right about the Russians, but the Chinese understand that the term gray zone paralyzes Americans. We have no idea what to do, because our view of warfare is that until the shooting starts, we're not really at war, and there's still hope of working something out. That has been our rote response all these years: don't get the Chinese mad, don't provoke them. And we have convinced ourselves that we have to have Chinese help with fill-in-the-blank, North Korea, transnational crime, nuclear weapons proliferation, climate change, and that therefore we cannot challenge the PRC because we won't get their cooperation. With that we've effectively handcuffed ourselves. But when it comes to that so-called hybrid warfare, it's not all that complicated if you recognize what it is and how it fits into China's behavior, its strategy. But you also would do well to attack from other directions where they're particularly vulnerable. And that is where you take advantage of the fact, for example, that the Chinese currency is not freely convertible, which means that outside of China, nobody really wants Chinese money. It's like the scrip at a... where you can use it to buy caramel corn and go on the rides. 00:15:06 GRANT NEWSHAM That's it. Nobody wants it. So choke that off and China's got some real problems. Another is the thoroughgoing corruption of China's ruling class. Most of them have wealth overseas: foreign bank accounts, relatives with green cards, some operate businesses overseas. And this is illegal.
00:15:31 GRANT NEWSHAM And this is what really scares them. Because in 2011 or 2012, the New York Times and Bloomberg actually put out some good stories about the overseas wealth of China's top people, including Xi Jinping's family. I've never seen a reaction from the Chinese like that one. This bothered them more than anything else we've ever done. 00:16:14 GRANT NEWSHAM One way to do it, a tactical thing, would be to release, say, every Friday, at 1 a.m., 1 o'clock or whenever, what the top 50 Chinese Communist Party officials hold overseas, and make sure it reached everywhere in China. The thing that the public really hates is this corruption by the top dogs. That is something that really bothers them. And you note that the Chinese leadership is very willing to have the average Chinese citizen absorb any amount of punishment. They even talk about it. But when it's personal, they see it very differently. And this is one of the few ways to really make it personal for them: to capitalize on this corruption. So when we talk about dealing with gray zone operations, we're probably not going to be all that successful, because they have more ships and they can be in more places. But expose that? That we can do. Have we made a concerted effort to expose Chinese bribery, the illicit payments, the corruption that they bring everywhere they go? Everywhere there's a Chinese presence, you have corruption of the society and of the political class as well. Do we ever target that? Do we consider it a priority effort? I don't even think we consider it an effort at all. Exposure is the one thing that has a huge effect. This is why investigative journalists get big. It's why, like, Irish gangsters try to murder them, and in Malta they get blown up. Because they're effective, which is the thing that makes it very hard for corruption to work. And that's where I think
Have we made a concerted effort to expose Chinese bribery, the illicit payments, the corruption that they put into everywhere they go? Everywhere there's a Chinese presence, you have corruption of the society, the political class as well. And do we ever target that? Do we consider it a priority effort? I don't even think we consider it an effort at all. Exposure is the one thing that has a huge effect. This is why investigative journalists 00:17:44 SPEAKER_03 get big. It's why like Irish. gangsters try to murder them in Malta they get blown up because they're effective because they're effective which 00:17:52 SPEAKER_02 is the thing that makes it very hard for corruption to work and that's where I think We have some real opportunities to make it very clear what's being done. And this is something that, if you expose it, you can really capitalize on it. Just make it too hard to do this. And it also gives oxygen to the honest people in a country. It gives them something to work with. It gives them something to work with. To take on these repressive regimes, these corrupt regimes, these corrupt regimes, administrations. And get rid of them and replace them with honest people. I've never met anywhere, anywhere I've been. Over the years. Where people like to be cheaters. Where people like to be cheaters. Where they like their leaders to be corrupt. I just haven't met it. I've been anywhere. I just haven't met it. I've been anywhere. It's just nothing you can do. But it's just nothing you can do. It really has an effect. And that's where I think government for sources could be effectively devoted. And particularly once you get local reporters in on it. Once you get the local. Honest locals in on it. Honest locals in on it. And that's where I think we could be very effective. Corruption, as you've mentioned, that really is the grease to everything the Chinese communists do globally. Take it away and then take away their access to dollars, convertible currency. And they've really got some problems. But they have played their hand very well today. But in some ways it's a house of cards. I don't think it's that hard to take on. But the longer you wait, the harder it gets. 00:17:52 SPEAKER_03 is the thing that makes it 00:17:54 JACK GAINES corruption to work and that's where I think We have some real opportunities to make it very clear what's being done. And this is something that, if you expose it, you can really capitalize on it. Just make it too hard to do this. And it also gives oxygen to the honest people in a country. It 00:18:16 JACK GAINES to work with. To take on these repressive regimes, these corrupt regimes, these corrupt regimes, 00:18:23 JACK GAINES them with honest people. I've never met anywhere, anywhere I've been. Over the years. Where 00:18:32 JACK GAINES I just haven't met it. I've been anywhere. I just haven't met it. I've been anywhere. It's just nothing you can do. But it's just nothing you can do. It really has an effect. And that's where I think government for sources could be effectively devoted. And particularly once you 00:18:46 GRANT NEWSHAM reporters in on it. Once you get the local. Honest locals in on it. Honest locals in on it. And that's where I think we could be very effective. 00:18:56 JACK GAINES Corruption, as you've mentioned, that really is the grease to everything the Chinese communists do globally. Take it away and then take away their access to dollars, convertible currency. And they've really got some 00:19:12 JACK GAINES today. 
But in some ways it's a house of cards. I don't think it's that hard to take on. But the longer you wait, the harder it 00:19:28 SPEAKER_02 In regards to U .S. policy, in policy, there really is a... a desire that the United States stays around in Asia, that maintains its military might, and is able to effectively safeguard what you call freedom consensual government. Because if you go around the region, nobody wants to be dominated by the PRC. But they do have a huge advantage, particularly economically, that they're seen by leaders and business people in a lot of these countries. That's really the source of... some wealth, some prosperity. And we would do well, for example, to see the fight as just as much an economic one as a military one. Because we could build up our military, rebuild it, and we could have 800 ships in the Navy, and still lose. If we don't fight on these other fronts, we don't want you here because we're doing too much business with China. And that's where the U .S., along with its friends, the Japanese, the Koreans, the Indians, the Australians, we would do well to operate together more and to see the economic front and the political warfare fronts as a priority effort as much, if not more, than the military. 00:19:30 SPEAKER_03 in policy, there 00:19:31 JACK GAINES really is a... a desire that the United States stays around in Asia, that maintains its military might, and is able to 00:19:45 JACK GAINES Because if you go around the region, nobody wants to be dominated by the PRC. But they do have a huge advantage, particularly economically, that they're seen by leaders and business people in a lot of these countries. That's really the source of... some wealth, some prosperity. And we would do well, for example, to see the fight as just as much an economic 00:20:09 GRANT NEWSHAM one as a military one. Because we could build up our military, rebuild it, and we could have 800 ships in the Navy, and still lose. If we don't 00:20:19 JACK GAINES on these other fronts, we don't want you here because we're doing too much business with China. And that's where the U .S., along with its friends, the Japanese, the Koreans, the Indians, the Australians, 00:20:30 GRANT NEWSHAM we would do well to operate together more and to see the economic front and the political warfare fronts as a priority effort as much, if not 00:20:40 JACK GAINES more, than the 00:20:45 SPEAKER_02 They have a role to play if they're properly harnessed. But you do know that these days you don't see the Yankee trader that used to exist. You'd run to Americans everywhere trying to sell something to do business. Not so much these days. And we've almost ceded the far -flung part to the world. Because, well, the return on investment isn't enough. That's not an attractive enough proposition. Well, then let's make it one. Plus, you do have, say, the Japanese, the Indians, who are much better at operating in these places, to put it together into a coherent plan. Understand what it is, political warfare, and not just block the Chinese political warfare effort, but actually have our own campaign. And it really is worth doing some homework, I think, for a lot of people into what political warfare is. One sees the opportunities, but it takes a certain type of person who's good at it. versus a civil affairs guy. Versus a civil affairs guy. He's going to see different... He's going to see parts of the battlefield in a different way. Yes, sometimes you want the tank. But then there's this other part of it all. 
That is almost like a liberal arts test. Here you have to figure out the motivations for things. You have to figure out how a society works. And then how do you appeal to it using the things that are parts of political warfare? And this is where you can really make some mileage. You've got to have both. Make no mistake. If you're not able to destroy things and kill people, the civil affairs part isn't going to get you very far. But combine the two, and then you've really got something that's very hard to take on if you're the bad guys. We talk about defending Taiwan, and how important it is, and it is, I think, indispensable, that China does not take Taiwan and enslave 23 million people. If they did that... 00:20:47 JACK GAINES they're properly harnessed. But you do know that these days you don't see the Yankee trader that used to exist. You'd run to Americans everywhere trying to sell something to do business. Not 00:20:59 SPEAKER_03 so much these days. And we've almost ceded the far -flung part to the world. Because, well, the return on investment isn't enough. That's not an attractive enough proposition. Well, then let's 00:21:10 GRANT NEWSHAM make it one. Plus, you do have, say, the Japanese, the Indians, who are much better at operating in these places, to put it together into a coherent plan. Understand what it 00:21:20 JACK GAINES is, political warfare, and not just block the Chinese political warfare effort, but actually have our own campaign. And it really is worth doing some homework, I think, for a lot of people into what political warfare is. One sees the opportunities, but it takes a certain type of person who's good at it. versus a civil affairs guy. Versus a civil affairs guy. He's going to see different... He's going to see parts of the battlefield in a different way. 00:21:50 SPEAKER_03 Yes, sometimes you want the tank. But then there's this other part of it all. That is almost like a liberal arts test. Here you have to figure 00:22:00 JACK GAINES for things. You have to figure out how a society works. And then how do you appeal to it using the things that are parts of political warfare? 00:22:10 JACK GAINES make some mileage. You've got to have both. Make no mistake. If you're not able to destroy things and kill people, the civil affairs part isn't going to get you very far. But combine the two, and then you've really got something that's very hard to take on if you're the bad guys. We talk about defending Taiwan, and how important it is, and it is, I think, indispensable, 00:22:32 GRANT NEWSHAM that China does not take Taiwan and enslave 23 million people. If they did that... 00:22:39 SPEAKER_02 Asia would turn red overnight, as every country tried to cut the best deal they could. No country anywhere on Earth would have much confidence in American promises that will protect them. But one of the ways to actually defend Taiwan is, yes, they could maybe use F -35s and long -range missiles and smart pines, etc. You do have to have all of this stuff. Is it enough, 00:22:39 GRANT NEWSHAM would turn red overnight, as every country tried 00:22:42 SPEAKER_03 to cut the best deal they could. No country anywhere 00:22:46 JACK GAINES on Earth would have much confidence in American promises that will protect them. But one of the ways to actually defend 00:22:51 GRANT NEWSHAM Taiwan is, yes, they could maybe use F -35s and long -range missiles and smart pines, etc. You do have to have all of this stuff. Is it enough, even? 
Particularly if the other side says, okay, we'll absorb whatever you can send at us, but you're finished. But one of the ways that... But one of the ways is to give them a free trade agreement to improve their economy to the point that the government felt like it had money to spend on defense. 00:23:02 SPEAKER_02 Particularly if the other side says, okay, we'll absorb whatever you can send at us, but you're finished. But one of the ways that... But one of the ways is to give them a free trade agreement to improve their economy to the point that the government felt like it had money to spend on defense. You get a certain confidence in the entire society when they're more prosperous. Salaries are very low in Taiwan. Make it so people feel like they've got more money. Can they can buy a house? Can they can buy a condominium? build up the economy and that has a ripple effect throughout the society and on their military itself. And yet we didn't do that. And I think that's where we should apply some effort. 00:23:11 JACK GAINES give them a free trade agreement to 00:23:16 JACK GAINES point that the government felt like it had money to spend on defense. You get a certain confidence in the entire society when they're more prosperous. Salaries are very low in Taiwan. Make it so people feel like they've got more money. Can they can buy a house? Can they can buy a condominium? 00:23:35 JACK GAINES the economy and that has a ripple effect throughout the society and on their military itself. And yet we didn't do that. And I think that's where we should apply some 00:24:25 SPEAKER_02 I think you're right. And it's essential that we start to understand. You look at much of the debate about us in China. What happens when the two forces go at each other? And that's almost like... Going up behind the Waffle House. Going up behind the Waffle House. To see who's the toughest guy in Prince William County. To see who's the toughest guy in Prince William County. Out back. But think of all the things that go into whether or not the two hoodlums. There's all sorts of reasons why. No, there may
I interviewed director Pegah Tabassinejad about Entropic Fields of Displacement, which showed at IDFA DocLab 2024. See the transcript down below for more context on our conversation. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality
Dr. Andreas Schlatter is a classically trained physicist (EPFL, Princeton) with a decidedly heretical approach to physics. Though deeply mathematical in his methods, he dispenses with the purely field-based approach to understanding the building blocks of nature, and asks far deeper questions about what the mathematics is telling us about the hidden structures of nature. Rather than take the positivist approach, which suggests that anything that cannot be experimentally encountered is not worth considering, Schlatter follows in the tradition of Gödel and the other mid-20th century logicians, who believed that a layer of the universe beyond the visible is available to us if we can reason our way to it. By following this path, Schlatter has reached the conclusion that the only viable interpretation of quantum mechanics is the transactional one. Unlike the other transactional theorists we've had on the show, Schlatter has gone one step further to propose that there is a transactional interpretation of gravity just as there is for quantum mechanics. He calls it entropic gravity, and in this episode we explore the convoluted path he took to physics, how he found the transactionalists, and how he and Ruth Kastner formulated an entropic explanation for spacetime. PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/ AMAZON: Do your shopping through this link for Carver Mead's Collective Electrodynamics: https://amzn.to/4e01Slj (00:00) Go! (00:05:28) Andreas Schlatter's Academic Journey (00:10:39) Exploration of Mathematics in Physics (00:25:51) The Vienna Circle and Logical Positivism (00:30:04) Einstein's Transition in Theoretical Approach (00:37:37) Philosophical Inquiry in Physics Education (00:41:08) The Quest for Understanding in Logic and Set Theory (00:48:02) Transition from Academia to Finance (00:56:02) Challenges of Financial Modeling (01:09:59) Trust and Economic Stability (01:16:10) Light and Gravity Intersect (01:23:02) Entropy and Information Theory (01:31:07) Absorption and Entropy Dynamics (01:37:22) Exploration of Quantum Transactions (01:46:30) Transactional Approach to Gravity (01:56:31) Light Clocks and the Nature of Time (02:04:13) Multiverses and Quantum Realms #Physics, #QuantumMechanics, #Mathematics, #PhilosophyOfScience, #LogicalPositivism, #EmpiricalScience, #TheoreticalPhysics, #Einstein, #Newton, #QuantumReality, #Entropy, #Cosmology, #Multiverse, #GravityTheory, #EconomicStability, #TransactionalInterpretation, #ScienceEducation, #Philosophy, #QuantumGravity, #FinanceAndPhysics, #ScientificUnderstanding #sciencepodcast, #longformpodcast Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
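For orientation on the physics in this episode: "entropic gravity" programs in general treat gravitation as emergent thermodynamics rather than as a fundamental field. The relations below are only the standard textbook reference points (Verlinde's entropic-force relation and the Bekenstein-Hawking horizon entropy), not Schlatter and Kastner's specific transactional formulation, which the episode develops in terms of absorption events (see the "Absorption and Entropy Dynamics" chapter).

\[
F\,\Delta x = T\,\Delta S,
\qquad
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}
\]

Read loosely: a displacement \(\Delta x\) that changes entropy by \(\Delta S\) at temperature \(T\) shows up as an effective force \(F\), and the entropy associated with a horizon scales with its area \(A\) rather than its volume.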
- Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671
JC and Mike fire things up on a Sunday Special episode after the chaotic Week 8 of College Football. The conference realignment, much anticipated and controversial, has brought some chaos into the fold of this College Football season. Unpredictability has ruled over the landscape thus far this season, much to the delight of your two hosts. JC hits us with his JC/5 for the week, looking at who you would rather be concerning the playoff, a couple of specific teams and the Big 12, how many Army/Navy games we will get to enjoy, and how many water bottles it takes to get your way. Mike wraps this special episode with his BOSS awards for the week. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Send us a Text Message. About the guest: Robin Carhart-Harris, PhD is a neuroscientist & psychopharmacologist. His lab at the University of California-San Francisco studies the effects of psychedelics and other drugs on the human brain, using neuroimaging and other approaches. Episode summary: Nick and Dr. Carhart-Harris discuss: psychedelics & the human brain; functional connectivity & entropy in brain patterns; the "entropic brain" hypothesis of psychedelic drug action; psychiatry & depression; psychology, Carl Jung & Sigmund Freud; the FDA's rejection of MDMA-assisted psychotherapy for PTSD; latest research on psychedelics; and more. Related episodes: Anesthesia, Placebo Effects, Consciousness, Subjectivity, MDMA, Ketamine, Opioids, Psychedelics | Boris Heifets | #163; DMT, Serotonin, Inflammation, Psychedelics, and Past, Present & Future of Psychedelic Medicine | David & Charles Nichols | #137. *This content is never meant to serve as medical advice. Support the Show. All episodes (audio & video), show notes, transcripts, and more at the M&M Substack. Try Athletic Greens: Comprehensive & convenient daily nutrition. Free 1-year supply of vitamin D with purchase. Try SiPhox Health: affordable, at-home bloodwork w/ a comprehensive set of key health markers. Use code TRIKOMES for a 10% discount. Try the Lumen device to optimize your metabolism for weight loss or athletic performance. Use code MIND for 10% off. Learn all the ways you can support my efforts.
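For context on what "entropy" means in the entropic brain hypothesis mentioned above: at root it is the Shannon entropy of a probability distribution over brain states, with the hypothesis being that psychedelics push activity toward higher-entropy (less predictable, more evenly spread) regimes. The sketch below is a minimal, generic illustration of that calculation in Python; the state probabilities are invented for the example, and this is not the actual analysis pipeline used by Carhart-Harris's lab.

import numpy as np

def shannon_entropy(p, base=2):
    # H(p) = -sum_i p_i * log(p_i), in units set by `base` (bits for base 2)
    p = np.asarray(p, dtype=float)
    p = p / p.sum()        # normalize to a valid probability distribution
    p = p[p > 0]           # drop zero entries; the limit of x*log(x) at 0 is 0
    return float(-(p * (np.log(p) / np.log(base))).sum())

# Hypothetical occupancy probabilities of four coarse-grained brain states
# (illustrative numbers only, e.g. as might come from clustering fMRI
# functional-connectivity patterns into discrete states).
ordinary_waking   = [0.70, 0.20, 0.07, 0.03]   # one dominant state -> lower entropy
psychedelic_state = [0.30, 0.28, 0.22, 0.20]   # states visited more evenly -> higher entropy

print(f"waking entropy:      {shannon_entropy(ordinary_waking):.2f} bits")
print(f"psychedelic entropy: {shannon_entropy(psychedelic_state):.2f} bits")

Running this prints roughly 1.24 bits for the waking distribution and 1.98 bits for the psychedelic one, which is the direction of change the hypothesis predicts.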
Thank you for 1m downloads of the podcast and 2m readers of the Substack!
Questions: Are we just messed up people who needed extraordinary support to be normal? Agree or Disagree? Relationship entropy does not mean your marriage sucks or that you are a failure; it means you are human. True or False? In sexual relationships, we are genetically programmed to habituate to a partner as our initial romantic infatuation fades over the first one to three years. True or False? Humans create routines, and relationship entropy is often the result.
Can time symmetry in physics, combined with exceptional violations of the 2nd law of thermodynamics and the "quantum handshake" transactional interpretation of quantum mechanics, open up mainstream physics to the possibility of retro-causation? Could it help to explain the many paradoxes left open in modern physics? And is there experimental evidence for it? Today we have the extraordinary possibility of retro-causation to get our heads around: the apparently impossible phenomenon of events in the present causing changes in the past, or future events having an effect in the present, depending on how you want to look at it. Today we'll be approaching this topic via the context of time symmetry in physics. As far back as 1947, French quantum physicist Olivier Costa de Beauregard began to question the usual interpretation of time in quantum mechanics, intuiting that something was missing from the model, leaving the many paradoxes in quantum mechanics unexplained. And then, with others getting on board over the years, in the '80s John G. Cramer agreed that the missing ingredient was found in time symmetry, and he proposed a 'quantum handshake' between the waves passing forward and backward in time at the moment of collapse; with this Transactional Interpretation of quantum mechanics, Cramer claimed he had solved the paradoxes. My guest today has put together this research, a re-interpretation of the 2nd law of thermodynamics based on exceptional violations where entropy does not hold, and theorisation about quantum correlates of consciousness, to create a new theory of retro-causation, which he thinks can be tested. He is Daniel Sheehan, author and Professor of Physics at the University of San Diego, specialist in plasma physics, violations of the 2nd law of thermodynamics, and retro-causation. He is the founder of the Quantum Retro-causation symposia that met at the University of San Diego. What we discuss: 00:00 Intro. 09:00 Time dilation: the twin paradox. 12:20 Time symmetry: reversible time functions in physics equations. 13:20 Violations of the 2nd Law, the entropic arrow of time. 18:20 Wheeler's bizarre altered double slit experiment. 23:15 Wheeler's 'Participatory Universe'. 26:00 The history of retro-causation research. 29:15 Bergmann and Lebowitz 'Two-State Vector Formalism' theory, 1964. 31:00 Cramer's "Quantum Handshake" Transactional Interpretation of QM. 35:30 Sheehan's theory of retro-causation. 36:45 The assumption of quantum processes acting in the brain. 39:00 Issues with quantum consciousness hypotheses. 42:00 Macroscopic quantum systems. 50:00 Precognitive retro-causation experiments: Graff & Cyrus. 51:45 Triple blind experiments - blind 'even to the universe'. 55:00 Is the subject finding out what actually happened important to the result? 57:00 Emotional charge in the future, influencing the past. 59:00 Are some events in the future already fixed? 01:01:30 Global Consciousness aggregate effects in physical systems. 01:02:30 Time symmetry allows the transmission into the past of important. 01:05:00 Wider science reception of such paradigm-shifting ideas as retro-causation. 01:05:00 Getting over our Second Law biology habits. References: Vladislav Capek & Daniel P. Sheehan, "Challenges to The Second Law of Thermodynamics: Theory and Experiment". Stephen Wolfram, "Computational Foundations for the Second Law of Thermodynamics". John Wheeler's altered double slit "Delayed Choice" experiment. Bergmann and Lebowitz 'Two-State Vector Formalism' theory, 1964. John G. Cramer's "Transactional Interpretation" of Quantum Mechanics. Dale E. Graff & Patricia S. Cyrus, 'Perceiving the future news: Evidence for retrocausation' paper. Global Consciousness Project at Princeton, Roger Nelson. Quotes: "The question is more important than the answer", author unknown. "Order is a state of mind, not a state of matter", on entropy, Daniel Sheehan.
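As plain background for the second-law and time-symmetry discussion above, these are the standard textbook statements that any claimed exception would be measured against; they are generic, not Sheehan's own formulation:

\[
S = k_B \ln \Omega,
\qquad
\Delta S_{\text{isolated}} \ge 0,
\qquad
t \mapsto -t \ \ \text{(microscopic time symmetry)}
\]

Here \(\Omega\) is the number of microstates compatible with a macrostate; the middle relation is the usual one-way entropic arrow that a genuine second-law violation would have to breach; and the last expresses that the underlying equations of motion run equally well in either time direction, which is the loophole the transactional and retro-causal arguments discussed in the episode try to exploit.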
Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor! Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough "weekend special" content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!
AI Breakdown
The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape: and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:
Thursday Nights in AI
We're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:
Dylan Patel on Groq
We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here): Not all of it could be released so we just talked about our Groq estimates:
Milind Naphade - Capital One
In relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered:
* Milind's learnings from ~25 years in machine learning
* His first paper citation was 24 years ago
* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis
* Thoughts on relevant AI research
* GTC takeaways and what makes NVIDIA special
If you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.
Personal AI Meetup
It all started with a meme: Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.
Timestamps
* [00:01:13] AI Breakdown Part 1
* [00:02:20] Four Wars
* [00:13:45] Sora
* [00:15:12] Suno
* [00:16:34] The GPT-4 Class Landscape
* [00:17:03] Data War: Reddit x Google
* [00:21:53] Gemini 1.5 vs Claude 3
* [00:26:58] AI Breakdown Part 2
* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4
* [00:31:11] Open Source Models - Mistral, Grok
* [00:34:13] Apple MM1
* [00:37:33] Meta's $800b AI rebrand
* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents
* [00:47:28] Adept episode - Screen Multimodality
* [00:48:54] Top Model Research from January Recap
* [00:53:08] AI Wearables
* [00:57:26] Groq vs Nvidia month - GPU Chip War
* [01:00:31] Disagreements
* [01:02:08] Summer 2024 Predictions
* [01:04:18] Thursday Nights in AI - swyx
* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show
* [01:34:58] Groq
Transcript
[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI.
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and Chinatalk, all of which you can find in the Latentspace About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the 4Wars framework and the AI engineer scene. We love AI Breakdown as one of the best examples Daily podcasts to keep up on AI news, so we were especially excited to be back on Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI breakdown. Part one of my conversation with Alessio and Swix from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. 
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, inflection is a one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the anthropics and open AIs of the world in terms of labs, but it's a company that raised 1. 3 billion last year, less than a year ago. Reed Hoffman's a co founder Mustafa Suleyman, who's a co founder of DeepMind, you know, so it's like, this is not a a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark.[00:04:32] NLW: Brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So inflection is GPU rich by startup standard. I think about 22, 000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in pi of their own model of their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT four and cloud tree and all this stuff. GPU poor, doing something. That the GPU rich are not interested in, you know we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not a GI, it's just translation. So I think like the inflection part is maybe. A calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what inflection I don't, I don't, again, I don't know the reasons behind the inflection choice, but if you say, I don't want to build my own company that has 1.[00:06:05] Alessio: 3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my. I take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. 
It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and like the most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in the favor of Like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, All of these things worked out very well.[00:07:19] swyx: Even inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU poors, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the coheres and jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that The vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason there's plenty of that, you know, people who are saying, You Look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story.
Same thing happened last summer, when every every outlet jumped on the chat GPT at its first down month story to try to really like kind of hammer this idea that that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build your own chatbot. platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half. Dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or Dolly three. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerned.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause you know. It's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I I would say the experimentation.[00:11:04] Alessio: Surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, A lot of people use Cursor everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursors, like they evolved beyond single line to like chat, to do multi line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? 
And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, Dolly 3 inside of sort of OpenAI's larger models versus, you know, a mid journey or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, For most of the last, call it six months or whatever, it feels pretty definitively both and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to Dolly 3. And Dolly3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT 4 vision and GPT 4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, The balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno ai, the sort of music model company, and, you know, I don't see opening AI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. 
Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if Just musicians were excited about Suno and using it but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves mid journey for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, You know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the Dali blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as Dali. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with with sort of the quality?[00:16:46] NLW: Quality data or sort of a rag ops wars just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about rag for the Gemini and Clouds discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into a AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the web text dataset that originally started for GPT 1, 2, and 3 was actually scraped from GitHub. from Reddit at least the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And. 
You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like Data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a pay here Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them. The, the data part sounds as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know there's more questions around data you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's Great kind of points on all side, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that llama, you know, to the most high performance version of it, which was one they didn't release was trained on synthetic data. So maybe it's good.
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT 4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened. Since we last talked, we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT 4. Claude 2 was unusable. So I use GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on twitter about it, my only experience of this is much better has been on the podcast use case. But I know that, you know, Quran from Nous Research is a very big pro-Opus person. So, I think that's also It's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good.
It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Cloud 3 is certainly the first thing that I've seen where lots of people.[00:24:06] NLW: They're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have a episode with Adapt coming out this weekend. I'll and some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know where, you know, Curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. 
And so that puts An insane amount of pressure on what gpt5 is going to be because it's just going to have like the only option it has now because all the other models are multimodal all the other models are long context all the other models have perfect recall gpt5 has to match everything and do more to to not be a flop[00:26:58] AI Breakdown Part 2[00:26:58] NLW: hello friends back again with part two if you haven't heard part one of this conversation i suggest you go check it out but to be honest they are kind of actually separable In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how.[00:27:50] NLW: Fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Friedman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but. You know, they've had so long to work on this, like unless that we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4. 5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Friedman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point Reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. 
Like, calm down, let's just build.[00:29:35] Alessio: Yeah. The February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had, I think, two agent projects, right? One desktop agent and one sort of more general, yeah, sort of GPTs-like agent, and then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know. But I think, like, this is, you know, we're not going to get to the end of the year without GPT 5, you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google[00:30:13] Alessio: increasing the pace, you know? Like, is Claude 4 coming out in like 12 months, like nine months? What's the deal? Same with Gemini. They went from like 1 to 1.5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There are a lot of speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we now have a slowly changing landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world, where Claude and Gemini are legitimate challengers to GPT 4, and hopefully more will emerge as well, hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So, swyx, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them a pass. Like, the community largely recognizes that they want them to keep building open source stuff, and if they have to find some way to fund themselves, that they're going to do that.[00:31:27] NLW: And so they kind of understand that, like, they got to figure out how to eat. But we've got, so, you know, there's Mistral, there's, I guess, Grok now, which is, you know, Grok 1 is from, from October, is, is open[00:31:38] swyx: sourced, yeah. Yeah, sorry, I thought you meant Groq the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq the chip company, I think, is even more interesting in some ways. But, and then there's, you know, obviously Llama 3 is the one that sort of everyone's wondering about too. And, you know, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta, you know, keeps the open source throne, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah.
From what I heard in the hallways at GDC, Llama 3, the biggest model, will be, you know, 260 to 300 billion parameters, so that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameter model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running[00:32:45] Alessio: Llama on their laptop. It's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we had Soumith Chintala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, that's kind of like maybe a little bit of a shot at Nvidia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. I love the Zuck destroying a lot of monopolies arc. You know, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the, I added this as an additional war that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think potentially are being read as a shift vis a vis the relationship with OpenAI, which also the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover it differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti-interestingness, we actually say, like, we try not to cover the Big Tech Game of Thrones, because it's proxied through all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor-y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Gemma. It's going to be smarter autocomplete.
I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it makes me so angry. So one, as an Apple customer and user, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute, and trust. Like, you trust them with your data. And I think that's what a lot of people are looking for in AI: they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default search engine. Is Google going to pay Apple to have Gemini, or is Apple paying Google to have Gemini? I think that's like what I'm most interested to figure out, because with the browsers, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if, like, the perception in AI is going to be like, hey, you just have to have a good local model on my phone to be worth me purchasing your device. And that would kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying, we do models, but they're not as good as the Google ones? I don't know. The whole thing is really confusing, but it makes for great meme material on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are, possibly more than OpenAI and Microsoft and Amazon, the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything. So if there was a company that could, you know, seriously challenge the other AI players, it would be Apple. And I don't think it's as hard as self driving. So like, maybe they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's move away from sort of the big stuff. I mean, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I jump in on a factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock, I'm trying to look up the details now. The stock has gone up 187% since Llama 1. Yeah. Which is $830 billion in market value created in the past year. Yeah. Yeah.[00:37:57] NLW: It's like, remember, if you guys haven't Yeah.
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now, and forget the VR thing.[00:38:10] NLW: It is an interesting, no, I think, Alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He really does. He is in the midst of a total, you know, I don't know if it's a redemption arc or it's just something different, where, you know, he's sort of the spoiler.[00:38:25] NLW: Like, people loved him just freestyle talking about why he thought they had a better headset than Apple. Even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see Zuck and Elon fight though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't write it off, you know, maybe these things just take a while to happen. But we need to see them fight in the Coliseum. No, I think, you know, in terms of like self management, life leadership, I think there's a lot of lessons to learn from him.[00:38:59] swyx: You know, you might kind of quibble with, like, the social impact of Facebook, but just himself, in terms of personal growth and, you know, perseverance through a lot of change and, you know, everyone throwing stuff his way, I think there's a lot to learn from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so one of the big things that I think you guys have, you know, distinct and unique insight into, being where you are and what you work on, is, you know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, startups that have actually kind of formalized and formed as startups, but also, you know, just in terms of what people are spending their nights and weekends on, what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's such a fascinating indicator for where things are headed. Like, if you zoom back a year, right now was right when everyone was getting so excited about AI agent stuff, right? AutoGPT and Baby AGI. And these things were like, if you dropped anything on YouTube about those, like, instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about AutoGPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's interest and what people are building?[00:40:24] Alessio: I can start maybe with the agents part, and then I know Shawn is doing a diffusion meetup tonight. There's a lot of different things. The agent wave has been the most interesting kind of like dream to reality arc.
So AutoGPT, I think, went from zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it, I think is, like, amazing to get people's imagination going. You know, they're like, oh, wow, this is awesome.[00:41:08] Alessio: Everybody can try this to do anything. But then as technologists, you're like, well, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you, to then trying it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, in our episode, he specifically said, we don't want to do a bottom-up product. You know, we don't want something that everybody can just use and try, because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that, like, nobody just wakes up and says, oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, it's just not what inspires people. So I think the gap on the developer side has been that the more bottom-up hacker mentality is trying to build these like very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, hey, if I want to raise my next round, I cannot just sit around and mess around with like super generic stuff. I need to find a use case that really works. And I think that that is true for a lot of folks. In parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a constrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where, from the startup pitches that I've seen, there's a lot of interest in the enterprise.[00:43:11] Alessio: It's just really fragmented, because the production use cases are just coming now, you know, there are not a lot of long established ones to test against. And so that's kind of it on the vertical agents, and then the robotics side has probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there. There were just, like, robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots.
NVIDIA did a big push on their own Omniverse thing, which is, like, this digital twin of whatever environment you're in that you can use to train the robot agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I gave a talk about the rise of the full stack employee, and kind of this future where, the same way full stack engineers kind of work across the stack, in the future every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, there's a lot of stuff to cover there. I'll separate out the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers, and you know, this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. You know, the point I'll make here is just, the reason AutoGPT and Baby AGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level, and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I'll go so far as to say, even Devin, which I think is the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just way too slow and expensive for, you know, what it's promised, compared to the video. So yeah, that's what happened with agents from last year. But I do see, like, vertical agents being very popular, and sometimes, like, I think the word agent might even be overused.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do, that I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do? And I think there's absolutely ways, in sort of a vertical context, that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So yeah, I mean, I would basically plus one what Alessio said there. I think it's very, very promising, and I think more people should work on it, not less. Like, there's not enough people. Like, this should be the main thrust of the AI engineer: to look for use cases and go to production with them, instead of just always working on some AGI-promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I can only add that I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done about 300 tutorials over the last couple of months.
And the verticalized anything, right, like, this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like, those are the ways that people are actually[00:46:50] NLW: adopting AI in a lot of cases. It's just a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like, day in, day out, I will always do a YouTube thumbnail, you know, or two, with Midjourney, right?[00:47:09] NLW: And it's like, you can start to extrapolate that across a lot of things, and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is, I think that because multimodal models are now commonplace, like Claude, Gemini, OpenAI, all very, very easily multimodal, Apple's easily multimodal, all this stuff, there is a pitch for agents for sort of general desktop browsing, a[00:48:04] swyx: version of the agent where they're not specifically taking in text or anything. They're just watching your screen, just like someone else would, and piloting it by vision. And, you know, in the episode with David that will have dropped by the time that this airs, I think that is the promise of Adept, and that is the promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system. Like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that you can have sort of self-driving computers. You know, don't write the horizontal piece off. I just think it'll take a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah, so I'll take the next two as like one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the, let's just say the decoder-only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So for transformers, the two leading candidates are effectively RWKV and the state space models, the most recent of which is Mamba, but there's others like the StripedHyena and the S4/H3 stuff coming out of Hazy Research at Stanford.
And all of those are non-quadratic language models that promise to scale a lot better than the traditional transformer.[00:49:47] swyx: This might be too theoretical for most people right now, but it's gonna come out in weird ways. Like, right now the talk of the town is that Claude and Gemini have a million tokens of context, and like, whoa, you can put in, you know, two hours of video now, okay. But what if we could throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like, how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and, like, synthesize new drugs? Like, how does that change things? We don't know, because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet, but we're seeing very, very good progress. RWKV and Mamba are probably the two leading examples, both of which are open source, so you can try them today, and there's a lot of progress there. And the main thing I'll highlight for RWKV is that at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size, for the same amount of training, as an open source model.[00:50:51] swyx: So that's exciting. You know, they're at 7B now. They're not at 70B. We don't know if it'll scale. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture.[00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on.[00:51:25] swyx: But there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be like the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo,[00:51:45] swyx: all of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive to create, or slow to create, Stable Diffusion or AI generated art, you are not up to date with the latest models. If you think it is hard to create text in images, you are not up to date with the latest models.[00:52:02] swyx: And people still are kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So instead of using autoregressive transformers, can you use diffusion for text? So you can use diffusion models to diffuse and create entire chunks of text all at once, instead of token by token.[00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months, but they confirmed today that they were looking into it.
So all those things are like very exciting new model architectures that are, Maybe something that we'll, you'll see in production two to three years from now.[00:52:37] swyx: So the couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they're sort of something that, that seems like they're coming up are one sort of these, these wearable, you know, kind of passive AI experiences where they're absorbing a lot of what's going on around you and then, and then kind of bringing things back.[00:52:53] NLW: And then the, the other one that I, that I wanted to see if you guys had thoughts on were sort of this next generation of chip companies. Obviously there's a huge amount of emphasis. On on hardware and silicon and, and, and different ways of doing things, but, y
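To put rough numbers on the quadratic-versus-subquadratic scaling point from the architecture discussion above, here is a small back-of-the-envelope sketch in Python. The token counts are illustrative assumptions only, not measurements of any of the models mentioned.

```python
# Back-of-the-envelope comparison of attention cost growth, relative to a
# 1M-token context. Illustrative assumption only: real models differ by large
# constant factors, memory limits, and implementation tricks
# (FlashAttention, chunking, etc.).

def relative_cost(n_tokens: int, quadratic: bool) -> float:
    """Cost relative to a 1M-token baseline (arbitrary units)."""
    base = 1_000_000
    ratio = n_tokens / base
    return ratio ** 2 if quadratic else ratio

for n in (1_000_000, 10_000_000, 200_000_000):
    print(f"{n:>12,} tokens | quadratic: {relative_cost(n, True):>12,.0f}x"
          f" | linear (RWKV/Mamba-style): {relative_cost(n, False):>6,.0f}x")

# At 200M tokens (very roughly the "two hundred thousand hours of video"
# scenario), the quadratic model costs ~40,000x the 1M-token baseline,
# while a linear-time model costs ~200x.
```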
Hey everyone, this is Alex and can you believe that we're almost done with Q1 2024? March 2024 was kind of crazy of course, so I'm of course excited to see what April brings (besides Weights & Biases conference in SF called Fully Connected, which I encourage you to attend and say Hi to me and the team!) This week we have tons of exciting stuff on the leaderboards, say hello to the new best AI in the world Opus (+ some other surprises), in the open source we had new MoEs (one from Mosaic/Databricks folks, which tops the open source game, one from AI21 called Jamba that shows that a transformers alternative/hybrid can actually scale) and tiny MoE from Alibaba, as well as an incredible Emotion TTS from Hume. I also had the pleasure to finally sit down with friend of the pod Tanishq Abraham and Paul Scotti from MedArc and chatted about MindEye 2, how they teach AI to read minds using diffusion models
"...Happy birthday dear ThursdAIiiiiiiii, happy birthday to youuuuuu
Hello hello everyone, happy spring! Can you believe it? It's already spring! We have tons of AI news for you to cover, starting with the most impactful one, did you already use Claude 3? Anthropic decided to celebrate Claude 1's birthday early (which btw is also ThursdAI's birthday and GPT4 release date, March 14th, 2023) and gave us 3 new Claudes! Opus, Sonnet and Haiku. TL;DR of all topics covered: * Big CO LLMs + APIs*
Happy leap year day everyone, very excited to bring you a special once-in-a-4 year edition of ThursdAI
Hey, this is Alex. Ok, let's start with the big news: holy crap, this week was a breakthrough week for speed! We had both Groq explode in popularity, and ByteDance release an updated SDXL model called Lightning, able to generate full blown SDXL 1024 images in 300ms. I've been excited about seeing what real time LLM/Diffusion can bring, and with both of these news releases in the same week, I just had to go and test them out together. Additionally, we had Google step into a big open weights role and give us Gemma, 2 open weights models, 2B and 7B (which is closer to 9B per Junyang), and it was great to see Google committing to releasing at least some models in the open. We also had breaking news: Emad from Stability announced SD3, which looks really great, Google to pay Reddit 200M for AI training on their data & a few more things.

TL;DR of all topics covered:
* Big CO LLMs + APIs
* Groq custom LPU inference does 400T/s Llama/Mistral generation (X, Demo)
* Google image generation is in hot water and was reportedly paused (refuses to generate white people)
* Gemini 1.5 long context is very impressive to folks (Matt Shumer, Ethan Mollick)
* Open Weights LLMs
* Google releases GEMMA, open weights 2B and 7B models (Announcement, Models)
* Teknium releases Nous Hermes DPO (Announcement, HF)
* Vision & Video
* YoLo V9 - SOTA real time object detector is out (Announcement, Code)
* This weeks Buzz (What I learned in WandB this week)
* Went to SF to cohost an event with A16Z, Nous, Mistral (Thread, My Report)
* AI Art & Diffusion & 3D
* ByteDance presents SDXL-Lightning (Try here, Model)
* Stability announces Stable Diffusion 3 (Announcement)
* Tools
* Replit releases a new experimental Figma plugin for UI → Code (Announcement)
* Arc browser adds "AI pinch to understand" summarization (Announcement)

Big CO LLMs + APIs

Groq's new LPU shows extreme performance for LLMs - up to 400T/s (example)
* Groq created a novel processing unit known as the Tensor Streaming Processor (TSP) which they categorize as a Linear Processor Unit (LPU). Unlike traditional GPUs that are parallel processors with hundreds of cores designed for graphics rendering, LPUs are architected to deliver deterministic performance for AI computations.
* Analogy: They know where all the cars are going when everyone wakes up for work (when they compile) and how fast they all drive (compute latency), so they can get rid of traffic lights (routers) and turn lanes (backpressure) by telling everyone when to leave the house.
* Why would we need something like this? Some folks are saying that average human reading is only 30T/s. I created an example that uses near instant Groq Mixtral + Lightning SDXL to just create images, with Mixtral as my prompt manager.

Open Source Weights LLMs

Google Gemma - 2B and 7B open weights models (demo)
* 4 hours after release, Llama.cpp added support, Ollama and LM Studio added support, Tri Dao added Flash Attention support
* Vocab size is 256K
* 8K context window
* Tokenizer similar to Llama
* Folks are not that impressed as far as I've seen
* Trained on 6 trillion tokens
* Google also released Gemma.cpp (local CPU inference) - Announcement

Nous/Teknium re-release Nous Hermes with DPO finetune (Announcement)
* DPO RLHF is performing better than previous models
* Models are GGUF and can be found here
* DPO enables improvements across the board

This weeks Buzz (What I learned with WandB this week)
* Alex was in SF last week
* A16Z + 20 something cohosts, including Weights & Biases, talked about the importance of open source
* Huge shoutout to Rajko and Marco from A16Z, and tons of open source folks who joined
* Nous, Ollama, LlamaIndex, LMSys folks, Replicate, Perplexity, Mistral, Github, as well as Eric Hartford, Jon Durbin, Haotian Liu, HuggingFace, tons of other great folks from Mozilla, the Linux Foundation, and Percy from Together/Stanford
Also had a chance to check out one of the smol dinners in SF, they go really hard, had a great time showing folks the Vision Pro, chatting about AI, seeing incredible demos and chatting about meditation and spirituality all at the same time!

AI Art & Diffusion

ByteDance presents SDXL-Lightning (Try here)
* Lightning fast SDXL with 2, 4 or 8 steps
* Results much closer to original SDXL than the turbo version from a few months ago

Stability announces Stable Diffusion 3 (waitlist)
Uses a Diffusion Transformer architecture (like SORA)
Impressive multi subject prompt following: "Prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words "stable diffusion"

Tools
* Replit announces a new Figma design → code plugin

That's it for today, definitely check out the full conversation with Mark Heaps from Groq on the pod, and see you next week!
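As a rough sketch of the "near instant Groq Mixtral as my prompt manager" pipeline described above: Groq exposes an OpenAI-compatible API, so something like the following should be close, though the base URL, model id, and the downstream image-generation step are assumptions to verify against current docs, not a tested recipe.

```python
# Sketch: use a fast Groq-hosted Mixtral as a "prompt manager" that turns a
# rough idea into a polished image-generation prompt.
# Assumptions: the OpenAI-compatible base URL and model id below match Groq's
# current docs; the image step is left as a stub since SDXL-Lightning hosting
# varies by provider.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GROQ_API_KEY",                 # assumption: key from Groq's console
    base_url="https://api.groq.com/openai/v1",   # assumption: check Groq docs
)

def expand_prompt(idea: str) -> str:
    resp = client.chat.completions.create(
        model="mixtral-8x7b-32768",              # assumption: model id may have changed
        messages=[
            {"role": "system",
             "content": "Rewrite the user's idea as one vivid, detailed "
                        "image-generation prompt. Reply with the prompt only."},
            {"role": "user", "content": idea},
        ],
        temperature=0.7,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    prompt = expand_prompt("a cozy cabin in a snowstorm, warm light inside")
    print(prompt)
    # Next step (not shown): send `prompt` to an SDXL-Lightning endpoint,
    # which can return a 1024px image in a few hundred milliseconds.
```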
Holy SH*T, These two words have been said on this episode multiple times, way more than ever before I want to say, and it's because we got 2 incredible exciting breaking news announcements in a very very short amount of time (in the span of 3 hours) and the OpenAI announcement came as we were recording the space, so you'll get to hear a live reaction of ours to this insanity. We also had 3 deep-dives, which I am posting on this weeks episode, we chatted with Yi Tay and Max Bane from Reka, which trained and released a few new foundational multi modal models this week, and with Dome and Pablo from Stability who released a new diffusion model called Stable Cascade, and finally had a great time hanging with Swyx (from Latent space) and finally got a chance to turn the microphone back at him, and had a conversation about Swyx background, Latent Space, and AI Engineer. I was also very happy to be in SF today of all days, as my day is not over yet, there's still an event which we Cohost together with A16Z, folks from Nous Research, Ollama and a bunch of other great folks, just look at all these logos! Open Source FTW
What A SHOW folks, I almost don't want to write anything in the newsletter to MAKE you listen haha but I will I know many of you don't like listening to be babble. But if you chose one episode to listen to instead of just skimming the show-notes, make it this one. We've had 2 deep dives, one into the exciting world of multi-modalilty, we chatted with the creator of Moondream1, Vik and the co-founders of Prophetic, Wes and Eric about their EEG/fMRI multimodal transformer (that's right!) and then we had a DEEP dive into the new Hourglass Diffusion Transformers with Tanishq from MedArc/Stability. More than 1300 tuned in to the live show
In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more!We have a new calendar for events; join to be notified of upcoming things in 2024!Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest “lecturer” today is ; you might know him from his prolific online writing on and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2) which recently released the open source GPT3.5-class Tulu 2 model which was trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an “RLHF 201” lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as reference through this episode. Full video with synced slidesFor audio-only listeners, this episode comes with slide presentation along our discussion. You can find it on our YouTube (like, subscribe, tell a friend, et al).Theoretical foundations of RLHFThe foundation and assumptions that go into RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below) but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:* Von Neumann–Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decision there's usually a “maximum utility” function that measures what the best decision would be; the fact that this function exists, makes it possible for RLHF to model human preferences and decision making.* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice-versa). In our world, A and B are usually two outputs from an LLM (or at the lowest level, the next token). It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!The RLHF loopOne important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say if you like or dislike a poem someone else wrote. Going back to the Bradley-Terry Model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model.An important point that Nathan mentions is that when you use these preferences to change model behavior "it doesn't mean that the model believes these things. It's just trained to prioritize these things". When you have preference for a model to not return instructions on how to write a computer virus for example, you're not erasing the weights that have that knowledge, but you're simply making it hard for that information to surface by prioritizing answers that don't return it. 
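To make the Bradley-Terry setup described above concrete, here is a minimal sketch (my own illustration, not code from the episode) of how a pairwise preference probability falls out of two scalar reward scores.

```python
import math

def preference_probability(score_a: float, score_b: float) -> float:
    """Bradley-Terry: P(A is preferred over B) = sigmoid(score_a - score_b).

    In RLHF, score_a and score_b would be a reward model's scalar outputs
    for two completions of the same prompt.
    """
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

# If the reward model scores completion A at 2.1 and completion B at 0.4,
# the modeled probability that a labeler prefers A is about 0.85.
print(preference_probability(2.1, 0.4))
```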
We'll talk more about this in our future Fine Tuning 101 episode as we break down how information is stored in models and how fine-tuning affects it.At a high level, the loop looks something like this:For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" we have multiple pieces:Reward + Preference ModelThe reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (worked better than scalar rewards).One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO:We recommend watching him talk about it in the video above, and also Nathan's pseudocode distillation of the process:Feedback InterfacesUnlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axis of judgement. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one it preferred, and by how much:Through the labeling process, there are many other ways to judge a generation:We then use all of this data to train a model from the preference pairs we have. We start from the base instruction-tuned model, and then run training in which the loss of our gradient descent is the difference between the good and the bad prompt.Constitutional AI (RLAIF, model-as-judge)As these models have gotten more sophisticated, people started asking the question of whether or not humans are actually a better judge of harmfulness, bias, etc, especially at the current price of data labeling. Anthropic's work on the "Constitutional AI" paper is using models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback.By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on this constitution principles. The RLHF model will then be able to judge outputs of models to make sure that they follow its principles:Emerging ResearchRLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones:* Rejection sampling / Best of N Sampling: the core idea here is that rather than just scoring pairwise generations, you are generating a lot more outputs (= more inference cost), score them all with your reward model and then pick the top N results. LLaMA2 used this approach, amongst many others.* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF as it doesn't have the RL part, but it's the “GPU Poor” version of it. Mistral-Instruct was a DPO model, as do Intel's Neural Chat and StableLM Zephyr. 
Expect to see a lot more variants in 2024 given how “easy” this was.* Superalignment: OpenAI launched research on weak-to-strong generalization which we briefly discuss at the 1hr mark.Note: Nathan also followed up this post with RLHF resources from his and peers' work:Show Notes* Full RLHF Slides* Interconnects* Retort (podcast)* von Neumann-Morgenstern utility theorem* Bradley-Terry model (pairwise preferences model)* Constitutional AI* Tamer (2008 paper by Bradley Knox and Peter Stone)* Paul Christiano et al. RLHF paper* InstructGPT* Eureka by Jim Fan* ByteDance / OpenAI lawsuit* AlpacaEval* MTBench* TruthfulQA (evaluation tool)* Self-Instruct Paper* Open Assistant* Louis Castricato* Nazneen Rajani* Tulu (DPO model from the Allen Institute)Timestamps* [00:00:00] Introductions and background on the lecture origins* [00:05:17] History of RL and its applications* [00:10:09] Intellectual history of RLHF* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL* [00:20:19] Initial papers and intuitions around RLHF* [00:27:57] The three phases of RLHF* [00:31:09] Overfitting issues* [00:34:47] How preferences get defined* [00:40:35] Ballpark on LLaMA2 costs* [00:42:50] Synthetic data for training* [00:47:25] Technical deep dive in the RLHF process* [00:54:34] Projection / best event sampling* [00:57:49] Constitutional AI* [01:04:13] DPO* [01:08:54] What's the Allen Institute for AI?* [01:13:43] Benchmarks and models comparisonsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome.Nathan [00:00:18]: Thanks guys.Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. You worked on robotics and model-based reinforcement learning on your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on New LinkedIn?Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.Swyx [00:00:50]: What's an ultra-endurance sport activity?Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along. 
And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious happening in the world of machine learning.Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?Nathan [00:01:59]: So the ones that are most popular are timely and or opinion pieces. So the first real breakout piece was when April and I also just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is behind the curtain, what it feels to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that was pretty well. And then I opportunistically wrote about QSTAR, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, open AI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter.Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good.Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. And people will see that we don't know that much more, but we have a bit of better way of saying what we don't know.Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.Nathan [00:03:59]: Sounds good. Yeah. I think to start skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised. 
So the quote from the paper was, meanwhile, reinforcement learning known for its instability seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness. So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start.Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have the gym fans, Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging?Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Phan and other people that were... You saw this transition in the decision transformer and papers and when people are trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. I think in the long run, it will still settle out, or RL will still be a field that people work on just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.Alessio [00:07:29]: We made it one slide before going on a tangent.Nathan [00:07:31]: Yeah, I mean, it's kind of related. 
This is a...Swyx [00:07:35]: Yeah, so we have a history of RL.Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference of a cost function or a reward function and a preference function would be without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that's central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast that we're talking about it. The core thing of RLHF that, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make chat GPT. So it's kind of, we'll overshadow everything. And then there's, go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that are like, that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated? 
Like, this is the first time I've come across the VNM Utility Theorem.Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like my co-host on the podcast, The Retort, who is a sociologist by training. So he knows all these things and like who the philosophers are that found these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there's like different types of math you can use with whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is stuff people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things, these are the side of the work where people assume that preferences can be measured. And this is where I kind of go on a rant and say that in RLHF, calling things a preference model is a little annoying, because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics. But with a preference model, it's like, oh my God, I don't know what this model, like I don't know what ChatGPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.Alessio [00:12:05]: The disconnect between more sociology work versus computer work already exists, or is it like a recent cross contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineers and like deep learning engineers. Is it the same in this field?Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include in this kind of research. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL or at least this decade of RL, and it's like it would literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this. It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society.
It really probably comes down to what an individual wants and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme in kind of my work: I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individuals' values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing while the problem specification is very general.Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram in the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward and that repeats over time and the agent learns based on these states and these rewards that it's seeing and it should learn a policy that makes the rewards go up. That seems pretty simple until you try to mentally map what this looks like in language, which is that like the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment. We just have a reward model that returns a reward, and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt and then you do a completion and then you throw it away and you grab a new prompt. Whereas, as an RL researcher, you would think of this as being like you take a state, you get some completion from it, and then you look at what that is and you keep kind of iterating on it, and all of that isn't here, which is why you'll hear RLHF referred to as a bandit problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, they'll debate whether this is even RL when you zoom into what RLHF is doing.Alessio [00:15:22]: Does this change as you think about chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give you the interaction part, but it does make it a little bit more fine grained, where you can think about it as at least having many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there, like even OpenAI is releasing a lot of this, and Let's Verify Step by Step is their pretty great paper on the matter.
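To make the two framings above concrete, one scalar reward for a whole completion versus a process reward model scoring each step of a chain of thought, here is a minimal sketch. `reward_model` and `step_reward_model` are hypothetical stand-ins for learned scorers, not calls from any particular library.

```python
def whole_completion_reward(prompt: str, completion: str, reward_model) -> float:
    # Bandit-style RLHF: one scalar reward for the entire completion, then the
    # (prompt, completion) pair is discarded and a new prompt is sampled.
    return reward_model(prompt, completion)

def process_rewards(prompt: str, reasoning_steps: list[str], step_reward_model) -> list[float]:
    # Process-reward framing: each step of a chain of thought gets its own score,
    # giving many "states" from the initial prompt instead of a single terminal reward.
    scores = []
    context = prompt
    for step in reasoning_steps:
        scores.append(step_reward_model(context, step))
        context = context + "\n" + step
    return scores
```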
I think in the next year that'll probably get made more concrete by the community on like if you can easily draw out like if chain of thought reasoning is more like RL, we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.Swyx [00:16:13]: RLHF for decision making. You have a slide here that compares pre-deep RL versus deep RL.Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP and it came before deep learning was big. Next up from this paper, Tamer, which is from 2008. Some names that are still really relevant in kind of human centric RL, Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforced Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories. So in this case, it was like complete trajectories of the agent and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget that is the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that it's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017. So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember what results should I have in mind when I think about that paper?Nathan [00:18:04]: It's mostly like if you think about an RL learning curve, which is like on the X axis, you have environment interactions on the Y axis, you have performance. You can think about different like ablation studies of between algorithms. So I think they use like A2C, which I don't even remember what that stands for as their baseline. But if you do the human preference version on a bunch of environments, like the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me because RL algorithms are pretty sensitive. So I was like, okay.Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights. 
In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. Yeah.Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. As reinforcement learning is so much less compute, like it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something to like stable diffusion or whatever. It's for guidance.Swyx [00:20:04]: That's for clip guidance.Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like this in 2018 paper, there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, which is like, they write a position paper on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't last with me because I don't know what that means. But the second one is pretty telling in the context of RLHF, which is for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in like the harm space is like a harmful response to a language model, whether or not you agree with the company's definition of harms is that it's a really bad negative example and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core, this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it. 
This is that expressed in reinforcement learning tools.Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first OpenAI experiment that I think caught people's eyes, which is this Learning to Summarize experiment. It has this three-step process that we'll go into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT. And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that were like, explain like I'm five or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3, I don't always get the exact years.Swyx [00:22:42]: 3 was early 2020. So that's about right.Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text as an output. It was restricted to the summarization domain. But I think that I guess this is where I wish I was paying attention more, because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and InstructGPT papers have incredibly pretty plots, just like nicely separated lines with error bars, and they're like, supervised fine-tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that and that's why they're doing so well now. Yeah.Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident that they didn't think it would be that big of a deal. Yeah.Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period.
And I love how you say way back when in referring to 2019.Nathan [00:24:19]: It feels like it in my life.Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruct tuning important here or is that part of the overall process towards modern RLHF?Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this ultra feedback data set and it boosts AlpacaVal, MTBench, TruthfulQA and like the qualitative model a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model chat GPT like better than OpenAI or anything like that. You've got to accept that there's some exploration there and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year was Vicuña as one of the more interesting applications of instruction tuning that pushed LLAMA1 from, let's say a GPT 3-ish model to a GPT 3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they use like under $100 to makeNathan [00:26:58]: this. Yeah. Like instruction tuning can really go a long way. I think the claims of chat GPT level are long overblown in most of the things in open source. I think it's not to say, like Vicuña was a huge step and it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model. 
Yeah.Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year is like people would be surprised by how far you can take instruction tuning on a small model. I think the things that people see now is like the small models don't really handle nuance as well and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, like the instruction tuning at the bigger model is like robustness, little things make more sense. So that's still just with instruction tuning and scale more than anything else.Swyx [00:27:56]: Excellent. Shall we go to technical overview?Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like if the big labs still do instruction tuning for later, but that's a coming wave for people. And then like preference data and training and then kind of like what does reinforce learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean. And then once you have that, you can collect preference data and do some sort of optimization.Swyx [00:28:39]: When you say word, you mean like angle bracket inst or do you mean something else?Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think Entropic also like steerable is another one.Swyx [00:28:51]: Just the way they describe it. Yeah.Nathan [00:28:53]: So like instruction tuning, we've covered most of this is really about like you should try to adapt your models to specific needs. It makes models that were only okay, extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever you like act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point because this chat template is used in our early childhood and all of these things down the line, but it was a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like stack overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need. Yeah, this is where this talk will kind of take a whole left turn into more technical detail land. 
I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just to kind of understand what is trying to happen here and what type of math people could do. I think because of this algorithm, we've mentioned this, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward, subject to some constraint. The reward is some learned metric, and a lot can be said about what the reward should be. The most popular constraint is the KL constraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. What that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much simpler than it sounds, but it is just to make the optimization kind of stay on track. Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way? So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand. Because if we take back to one of the first works, I mentioned this Tamer thing for decision making.
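A minimal sketch of the KL-penalized objective described above: the per-token log-prob difference between the RLHF policy and the frozen reference model, summed over the completion, gives a scalar KL estimate that is subtracted from the preference-model reward. Tensor names and the beta value here are illustrative assumptions, not taken from any specific codebase.

```python
import torch

def kl_penalized_reward(reward: torch.Tensor,
                        policy_logprobs: torch.Tensor,  # (seq_len,) log-probs of generated tokens under the RLHF policy
                        ref_logprobs: torch.Tensor,     # (seq_len,) log-probs of the same tokens under the base/reference model
                        beta: float = 0.1) -> torch.Tensor:
    # Sum of per-token log-prob differences: a rough sequence-level KL estimate
    # between the policy and the reference model.
    kl_estimate = (policy_logprobs - ref_logprobs).sum()
    # Optimize the learned reward, but penalize drifting too far from the reference,
    # which is what keeps RLHF from overfitting to quirks in the preference data.
    return reward - beta * kl_estimate
```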
People tried that with language models, which is if you have a prompt and a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions and 0 to 10 ratings and see if you can get ChatGPT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley-Terry model comes from the 50s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper. I think it might be the appendix. But it's still really around in the literature of what people are doing for RLHF.Alessio [00:33:45]: Yeah.Nathan [00:33:45]: So it's a fun one to know.Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called Arrow's impossibility theorem, which I'm sure you've come across.Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.Swyx [00:34:10]: Right. Do we just ignore it?Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.Swyx [00:34:43]: It's always at the appendices of every paper.Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Something around factuality, helpfulness, honesty, harmlessness. These are all different things. Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why. And all of this stuff. And then my kind of like head scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like ChatGPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name.
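A minimal sketch of the Bradley-Terry piece described above: the reward model scores the chosen and rejected completions, and the training loss is the negative log of the probability that the chosen one wins, i.e. -log sigmoid(r_chosen - r_rejected). Function and tensor names are illustrative; this mirrors how pairwise reward models are commonly trained rather than any specific implementation.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_rewards: torch.Tensor,
                       rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected).
    # Training the reward model maximizes the log-likelihood of the human's choice,
    # i.e. minimizes -log sigmoid(r_chosen - r_rejected), averaged over pairs.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```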
It's a name for probably some research in economics, decision theory, something. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie. And the smallest numbers will represent one model being way better than the other. And the biggest numbers will be like the other model being better. So in the case of one to eight, if you're comparing models A to B, you return a one if you really liked option A, you return eight if you really like B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it, I hear. I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done. The one I have here is from Anthropic's collection demo because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like ChatGPT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this where a lot of the labeler companies will help control like who's doing how many samples. You have multiple people go over the same sample once and like what happens if there's disagreement. I don't really think this disagreement data is used for anything, but it's good to know like what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of like how good was the prompt or the conversation from good to bad because things matter. Like there's kind of a quadrant of preference data in my mind, which is you're comparing a good answer to a good answer, which is like really interesting signal. And then there's kind of the option of you're comparing a bad answer to a bad answer, which is like you don't want to train your model on two different issues. This is like, we did this at Hugging Face and it was like, our data was like, we don't know if we can use this because a lot of it was just bad answer to bad answer because you're like rushing to try to do this for a real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think the OpenAIs of the world are all in good answer, good answer, and have learned to eliminate everything else. But when people try to do this in open source, it's probably like what Open Assistant saw is like, there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the InstructGPT metadata. You can see how much they collect here. And like everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion.
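As a small worked example of the one-to-eight Likert interface described above, here is one way such ratings get collapsed into the pairwise (chosen, rejected) format that reward model training expects. The cutoffs are assumptions; real labeling guidelines vary.

```python
def likert_to_pairwise(rating: int, response_a: str, response_b: str):
    """Collapse a 1-8 Likert comparison into (chosen, rejected), or None for a near-tie.

    Assumes 1 means 'strongly prefer A' and 8 means 'strongly prefer B',
    with the middle ratings treated as ties; exact cutoffs vary by guideline.
    """
    if rating <= 3:
        return response_a, response_b
    if rating >= 6:
        return response_b, response_a
    return None  # near-tie; often dropped or down-weighted during training
```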
Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard because these are all things I'd be interested in if I was scaling up a big team to do RLHF and like what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates like just that and then retrain everything. Like, what does that do?Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?Nathan [00:39:20]: Do you mean for training or like a deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like Llama 2 is famous for kind of having these like helpfulness and safety reward models. Deep in the Gemini report is something about Gemini having like four things, which is like helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and ChatGPT and Bard almost surely have a classifier after, which is like, is this text good? Is this text bad? That's not that surprising, I think, because you could use like a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something that'll kind of settle out.Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but I think for research perspectives, it's very insightful to know, but it's also expensive. So essentially your preference data scales with how many minutes it takes for you to do each task and every button is like, it scales pretty linearly. So it's not cheap stuff.Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back when Llama 2 was released. We had an estimate from you that was something on the order of Llama 2 cost $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially when you do this, if you have a multi-turn setting, each turn will be one data point and the Llama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would still say like 6 to 8 million is safe to say that they're spending, if not more, they're probably also buying other types of data and or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower because all they have to say is like, what does one run cost? But they're running tens or hundreds of runs.
So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.Alessio [00:41:42]: What's the depreciation of this data?Nathan [00:41:46]: It depends on the method. Like some methods, people think that it's more sensitive to the, this is what I was saying. It was like, does the type of instruction tuning you do matter for RLHF? So like, depending on the method, some people are trying to figure out if you need to have like what is called like, this is very confusing. It's called like on policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model just because they're scrappy. But there's been an intuition that to do like PPO well and keep improving the model over time and do like what Meta did and what people think that OpenAI does is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where like the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the LLAMA data, we wouldn't know what to do with all of it. Like probably like 20 to 40% would be pretty useful for people, but not the whole data set. Like a lot of it's probably kind of gibberish because they had a lot of data in there.Alessio [00:42:51]: So do you think like the open source community should spend more time figuring out how to reuse the data that we have or like generate more data? I think that's one of the-Nathan [00:43:02]: I think if the people are kind of locked into using synthetic data, people also think that synthetic data is like GPT-4 is more accurate than humans at labeling preferences. So if you look at these diagrams, like humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like GPT-4 is like 80%. So it is a bit better, which is like in one way of saying it.Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my like go-to, like I just say this over and over again is like GPT-4 for data generation, all terms and conditions aside because we know OpenAI has this stuff is like very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like tell people to go crazy generating GPT-4 data if you're willing to take the organizational like cloud of should we be doing this? But I think most people have accepted that you kind of do this, especially at individuals. Like they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs. Also just because the data contamination and what it does to your workflow is probably hard to control at scale.Swyx [00:44:21]: And we should just mention at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data and they got their access to OpenAI revoked. 
So that was one example.Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this cause they're just gonna, there's gonna be so much backlash against them. And like, everyone's gonna do it anyways.Swyx [00:44:46]: And what's at stake here to spell it out is like, okay, it costs like $10 to collect one data point from a human. It's gonna cost you like a 10th of a cent with OpenAI, right? So like it's just orders of magnitude cheaper. And therefore people-Nathan [00:44:58]: Yeah, and it's like the signal you get from humans from preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So like the human instructions are definitely like by far and away the best ones out there compared to the synthetic data. But I think like the synthetic preferences are just so much easier to get some sort of signal running with and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was like the only thing that was really happening until maybe like August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any like check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.Alessio [00:46:01]: Excellent.Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Llama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chatbot Arena is perhaps a good store of human preference data.Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data because they have to process it and make sure everything is safe, and they're a really lightweight operation. I think they're trying to release the preference data. If we make it to evaluation, I'd pretty much say that Chatbot Arena is the best limited evaluation that people have to learn how to use language models. And like, it's very valuable data. They also may share some data with people that they host models from. So like if your model is hosted there and you pay for the hosting, you can get the prompts because you're pointing the endpoint at it and that gets pinged to you, and any real LLM inference stack saves the prompts that…
SF folks: join us at the AI Engineer Foundation's Emergency Hackathon tomorrow and consider the Newton if you'd like to cowork in the heart of the Cerebral Arena.

Our community page is up to date as usual!

~800,000 developers watched OpenAI Dev Day, ~8,000 of whom listened along live on our ThursdAI x Latent Space, and ~800 of whom got tickets to attend in person:

OpenAI's first developer conference easily surpassed most people's lowballed expectations - they simply did everything short of announcing GPT-5, including:

* ChatGPT (the consumer facing product)
* GPT4 Turbo already in ChatGPT (running faster, with an April 2023 cutoff), all noticed by users weeks before the conference
* Model picker eliminated, God Model chooses for you
* GPTs - “tailored version of ChatGPT for a specific purpose” - stopping short of “Agents”. With custom instructions, expanded knowledge, and actions, and an intuitive no-code GPT Builder UI (we tried all these on our livestream yesterday and found some issues, but also were able to ship interesting GPTs very quickly) and a GPT store with revenue sharing (an important criticism we focused on in our episode on ChatGPT Plugins)
* API (the developer facing product)
* APIs for Dall-E 3, GPT4 Vision, Code Interpreter (RIP Advanced Data Analysis), GPT4 Finetuning and (surprise!) Text to Speech
* many thought each of these would take much longer to arrive
* usable in curl and in playground
* BYO Interpreter + Async Agents?
* Assistant API: stateful API backing “GPTs” like apps, with support for calling multiple tools in parallel, persistent Threads (storing message history, unlimited context window with some asterisks), and uploading/accessing Files (with a possibly-too-simple RAG algorithm, and expensive pricing)
* Whisper 3 announced and open sourced (HuggingFace recap)
* Price drops for a bunch of things!
* Misc: Custom Models for big spending ($2-3m) customers, Copyright Shield, Satya

The progress here feels fast, but it is mostly (incredible) last-mile execution on model capabilities that we already knew to exist. On reflection it is important to understand that the one guiding principle of OpenAI, even more than being Open (we address that in part 2 of today's pod), is that slow takeoff of AGI is the best scenario for humanity, and that this is what slow takeoff looks like:

When introducing GPTs, Sam was careful to assert that “gradual iterative deployment is the best way to address the safety challenges with AI”:

This is why, in fact, GPTs and Assistants are intentionally underpowered, and it is a useful exercise to consider what else OpenAI continues to consider dangerous (for example, many people consider a while(true) loop a core driver of an agent, which GPTs conspicuously lack, though Lilian Weng of OpenAI does not).

We convened the crew to deliver the best recap of OpenAI Dev Day in Latent Space pod style, with a 1hr deep dive with the Functions pod crew from 5 months ago, and then another hour with past and future guests live from the venue itself, discussing various elements of how these updates affect their thinking and startups.
Enjoy!

Show Notes
* swyx live thread (see pinned messages in Twitter Space for extra links from community)
* Newton AI Coworking Interest Form in the heart of the Cerebral Arena

Timestamps
* [00:00:00] Introduction
* [00:01:59] Part I: Latent Space Pod Recap
* [00:06:16] GPT4 Turbo and Assistant API
* [00:13:45] JSON mode
* [00:15:39] Plugins vs GPT Actions
* [00:16:48] What is a "GPT"?
* [00:21:02] Criticism: the God Model
* [00:22:48] Criticism: ChatGPT changes
* [00:25:59] "GPTs" is a genius marketing move
* [00:26:59] RIP Advanced Data Analysis
* [00:28:50] GPT Creator as AI Prompt Engineer
* [00:31:16] Zapier and Prompt Injection
* [00:34:09] Copyright Shield
* [00:38:03] Sharable GPTs solve the API distribution issue
* [00:39:07] Voice
* [00:44:59] Vision
* [00:49:48] In person experience
* [00:55:11] Part II: Spot Interviews
* [00:56:05] Jim Fan (Nvidia - High Level Takeaways)
* [01:05:35] Raza Habib (Humanloop) - Foundation Model Ops
* [01:13:59] Surya Dantuluri (Stealth) - RIP Plugins
* [01:21:20] Reid Robinson (Zapier) - AI Actions for GPTs
* [01:31:19] Div Garg (MultiOn) - GPT4V for Agents
* [01:37:15] Louis Knight-Webb (Bloop.ai) - AI Code Search
* [01:49:21] Shreya Rajpal (Guardrails.ai) - on Hallucinations
* [01:59:51] Alex Volkov (Weights & Biases, ThursdAI) - "Keeping AI Open"
* [02:10:26] Rahul Sonwalkar (Julius AI) - Advice for Founders

Transcript

[00:00:00] Introduction

[00:00:00] swyx: Hey everyone, this is Swyx coming at you live from the Newton, which is in the heart of the Cerebral Arena. It is a new AI co working space that I and a couple of friends are working out of. There are hot desks available if you're interested, just check the show notes. But otherwise, obviously, it's been 24 hours since the opening of Dev Day, a lot of hot reactions and longstanding tradition, one of the longest traditions we've had.

[00:00:29] And the Latent Space pod is to convene emergency sessions and record the live thoughts of developers and founders going through and processing in real time. I think a lot of the role of podcasts isn't as perfect information delivery channels, but really as an audio and oral history of what's going on as it happens, while it happens.

[00:00:49] So this one's a little unusual. Previously, we only just gathered on Twitter Spaces, and then just had a bunch of people. The last one was the Code Interpreter one, where 22,000 people showed up. But this one is a little bit more complicated because there's an in person element and then an online element.

[00:01:06] So this is a two part episode. The first part is a recorded session between our Latent Space people and Simon Willison and Alex Volkov from the ThursdAI pod, just kind of recapping the day. But then also, as the second hour, I managed to get a bunch of interviews with previous guests on the pod who we're still friends with and some new people that we haven't yet had on the pod.

[00:01:28] But I wanted to just get their quick reactions because most of you have known and loved Jim Fan and Div Garg and a bunch of other folks that we interviewed. So I just want to, I'm excited to introduce to you the broader scope of what it's like to be at OpenAI Dev Day in person, bring you the audio experience as well as give you some of the thoughts that developers are having as they process the announcements from OpenAI.

[00:01:51] So first off, we have the Latent Space Pod recap. One hour of OpenAI Dev Day.

[00:01:59] Part I: Latent Space Pod Recap

[00:01:59] Alessio: Hey. Welcome to the Latent Space Podcast, an emergency edition after OpenAI Dev Day.
This is Alessio, partner and CTO in Residence at Decibel Partners, and as usual, I'm joined by Swyx, founder of Smol AI. Hey,

[00:02:12] swyx: and today we have two special guests with us covering all the latest and greatest.

[00:02:17] We, we, we love to get our band together and recap things, especially when they're big. And it seems like that every three months we have to do this. So Alex, welcome. From ThursdAI we've been collaborating a lot on the Twitter spaces and welcome Simon from many, many things, but also I think you're the first person to not, not make four appearances on our pod.

[00:02:37] Oh, wow. I feel privileged. So welcome. Yeah, I think we were all there yesterday. How... do we feel like, what do you want to kick off with? Maybe Simon, you want to, you want to take first and then Alex. Sure. Yeah. I mean,

[00:02:47] Simon Willison: yesterday was quite exhausting, quite frankly. I feel like it's going to take us as a community several months just to completely absorb all of the stuff that they dropped on us in one giant.

[00:02:57] Giant batch. It's particularly impressive considering they launched a ton of features, what, three or four weeks ago? ChatGPT voice and the combined mode and all of that kind of thing. And then they followed up with everything from yesterday. That said, now that I've started digging into the stuff that they released yesterday, some of it is clearly in need of a bit more polish.

[00:03:15] You know, the the, the reality of what they look, what they released is I'd say about 80 percent of, of what it looks like it was yesterday, which is still impressive. You know, don't get me wrong. This is an amazing batch of stuff, but there are definitely problems and sharp edges that we need to file off.

[00:03:29] And there are things that we still need to figure out before we can take advantage of all of this.

[00:03:33] swyx: Yeah, agreed, agreed. And we can go into those, those sharp edges in a bit. I just want to pop over to Alex. What are your thoughts?

[00:03:39] Alex Volkov: So, interestingly, even folks at OpenAI, there's like several booths and help desks so you can go in and ask people about, like, actual changes and people, like, they could follow up with, like, the right people in OpenAI and, like, answer you back, etc.

[00:03:52] Even some of them didn't know about all the changes. So I went to the voice and audio booth. And I asked them about, like, hey, is Whisper 3 that was announced by Sam Altman on stage just, like, briefly, will that be open source? Because I'm, you know, I love using Whisper. And they're like, oh, did we open source?

[00:04:06] Did we talk about Whisper 3? Like, some of them didn't even know what they were releasing. But overall, I felt it was a very tightly run event. Like, I was really impressed. Shawn, we were sitting in the audience, and you, like, pointed at the clock to me when they finished. They finished, like, on... And this was after like doing some extra stuff.

[00:04:24] Very, very impressive for a first event. Like I was absolutely like, good job.

[00:04:30] swyx: Yeah, apparently it was their first keynote and someone, I think, was it you that told me that this is what happens if you have a president of Y Combinator do a proper keynote, you know, having seen many, many, many presentations by other startups, this is sort of the sort of master stroke.

[00:04:46] Yeah, Alessio, I think you were watching remotely. Yeah, we were at the Newton.
Yeah, the Newton.[00:04:52] Alessio: Yeah, I think we had 60 people here at the watch party, so it was quite a big crowd. Mixed reaction from different... Founders and people, depending on what was being announced on the page. But I think everybody walked away kind of really happy with a new layer of interfaces they can use.[00:05:11] I think, to me, the biggest takeaway was like and I was talking with Mike Conover, another friend of the podcast, about this is they're kind of staying in the single threaded, like, synchronous use cases lane, you know? Like, the GPDs announcement are all like... Still, chatbase, one on one synchronous things.[00:05:28] I was expecting, maybe, something about async things, like background running agents, things like that. But it's interesting to see there was nothing of that, so. I think if you're a founder in that space, you're, you're quite excited. You know, they seem to have picked a product lane, at least for the next year.[00:05:45] So, if you're working on... Async experiences, so things working in the background, things that are not co pilot like, I think you're quite excited to have them be a lot cheaper now.[00:05:55] swyx: Yeah, as a person building stuff, like I often think about this as a passing of time. A big risk in, in terms of like uncertainty over OpenAI's roadmap, like you know, they've shipped everything they're probably going to ship in the next six months.[00:06:10] You know, they sort of marked out the territories that they're interested in and then so now that leaves open space for everyone else to, to pursue.[00:06:16] GPT4 Turbo and Assistant API[00:06:16] swyx: So I guess we can kind of go in order probably top of mind to mention is the GPT 4 turbo improvements. Yeah, so longer context length, cheaper price.[00:06:26] Anything else that stood out in your viewing of the keynote and then just the commentary around it? I[00:06:34] Alex Volkov: was I was waiting for Stateful. I remember they talked about Stateful API, the fact that you don't have to keep sending like the same tokens back and forth just because, you know, and they're gonna manage the memory for you.[00:06:45] So I was waiting for that. I knew it was coming at some point. I was kind of... I did not expect it to come at this event. I don't know why. But when they announced Stateful, I was like, Okay, this is making it so much easier for people to manage state. The whole threads I don't want to mix between the two things, so maybe you guys can clarify, but there's the GPT 4 tool, which is the model that has the capabilities, In a whopping 128k, like, context length, right?[00:07:11] It's huge. It's like two and a half books. But also, you know, faster, cheaper, etc. I haven't yet tested the fasterness, but like, everybody's excited about that. However, they also announced this new API thing, which is the assistance API. And part of it is threads, which is, we'll manage the thread for you.[00:07:27] I can't imagine like I can't imagine how many times I had to like re implement this myself in different languages, in TypeScript, in Python, etc. And now it's like, it's so easy. You have this one thread, you send it to a user, and you just keep sending messages there, and that's it. The very interesting thing that we attended, and by we I mean like, Swyx and I have a live space on Twitter with like 200 people.[00:07:46] So it's like me, Swyx, and 200 people in our earphones with us as well. They kept asking like, well, how's the price happening? 
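For anyone who wants to poke at the headline model itself, here is a minimal sketch of a call to GPT-4 Turbo. "gpt-4-1106-preview" was the model name announced at Dev Day; the snippet assumes the then-new v1 OpenAI Python SDK and an OPENAI_API_KEY in the environment.

```python
# Minimal sketch: GPT-4 Turbo (128k context) via the chat completions API.
# Assumes the openai v1 Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4 Turbo model name announced at Dev Day
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the Dev Day GPT-4 Turbo changes in two sentences."},
    ],
)
print(resp.choices[0].message.content)
```

Same endpoint as before; the differences are the model name, the longer context window, and the lower per-token price.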
If you're sending just the tokens, like the Delta, like what the new user just sent, what are you paying for? And I went to OpenAI people, and I was like, hey... How do we get paid for this?[00:08:01] And nobody knew, nobody knew, and I finally got an answer. You still pay for the whole context that you have inside the thread. You still pay for all this, but now it's a little bit more complex for you to kind of count with TikTok, right? So you have to hit another API endpoint to get the whole thread of what the context is.[00:08:17] Then TikTokonize this, run this in TikTok, and then calculate. This is now the new way, officially, for OpenAI. But I really did, like, have to go and find this. They didn't know a lot of, like, how the pricing is. Ouch! Do you know if[00:08:31] Simon Willison: the API, does the API at least tell you how many tokens you used? Or is it entirely up to you to do the accounting?[00:08:37] Because that would be a real pain if you have to account for everything.[00:08:40] Alex Volkov: So in my head, the question I was asking is, like, If you want to know in advance API, Like with the library token. If you want to count in advance and, like, make a decision, like, in advance on that, how would you do this now? And they said, well, yeah, there's a way.[00:08:54] If you hit the API, get the whole thread back, then count the tokens. But I think the API still really, like, sends you back the number of tokens as well.[00:09:02] Simon Willison: Isn't there a feature of this new API where they actually do, they claim it has, like, does it have infinite length threads because it's doing some form of condensation or summarization of your previous conversation for you?[00:09:15] I heard that from somewhere, but I haven't confirmed it yet.[00:09:18] swyx: So I have, I have a source from Dave Valdman. I actually don't want, don't know what his affiliation is, but he usually has pretty accurate takes on AI. So I, I think he works in the iCircles in some capacity. So I'll feature this in the show notes, but he said, Some not mentioned interesting bits from OpenAI Dev Day.[00:09:33] One unlimited. context window and chat threads from opening our docs. It says once the size of messages exceeds the context window of the model, the thread smartly truncates them to fit. I'm not sure I want that intelligence.[00:09:44] Alex Volkov: I want to chime in here just real quick. The not want this intelligence. I heard this from multiple people over the next conversation that I had. Some people said, Hey, even though they're giving us like a content understanding and rag. We are doing different things. Some people said this with Vision as well.[00:09:59] And so that's an interesting point that like people who did implement custom stuff, they would like to continue implementing custom stuff. That's also like an additional point that I've heard people talk about.[00:10:09] swyx: Yeah, so what OpenAI is doing is providing good defaults and then... Well, good is questionable.[00:10:14] We'll talk about that. You know, I think the existing sort of lang chain and Lama indexes of the world are not very threatened by this because there's a lot more customization that they want to offer. Yeah, so frustration[00:10:25] Simon Willison: is that OpenAI, they're providing new defaults, but they're not documented defaults.[00:10:30] Like they haven't told us how their RAG implementation works. Like, how are they chunking the documents? How are they doing retrieval? 
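A sketch of the accounting dance described here: pull the whole thread back from the (then-beta) Assistants API and count the tokens yourself with tiktoken. The thread ID is a made-up placeholder, and the beta method names may have shifted since Dev Day.

```python
# Sketch: estimate what a thread will cost by fetching it and counting tokens
# yourself with tiktoken, using the beta Assistants endpoints as announced.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")  # encoding used by the GPT-4 family

def thread_token_estimate(thread_id: str) -> int:
    total = 0
    messages = client.beta.threads.messages.list(thread_id=thread_id)
    for message in messages.data:
        for part in message.content:
            if part.type == "text":
                total += len(enc.encode(part.text.value))
    return total

print(thread_token_estimate("thread_abc123"))  # hypothetical thread id
```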
Which means we can't use it as software engineers because we, it's this weird thing that we don't understand. And there's no reason not to tell us that. Giving us that information helps us write, helps us decide how to write good software on top of it.[00:10:48] So that's kind of frustrating. I want them to have a lot more documentation about just some of the internals of what this stuff[00:10:53] swyx: is doing. Yeah, I want to highlight.[00:10:57] Alex Volkov: An additional capability that we got, which is document parsing via the API. I was, like, blown away by this, right? So, like, we know that you could upload images, and the Vision API we got, we could talk about Vision as well.[00:11:08] But just the whole fact that they presented on stage, like, the document parsing thing, where you can upload PDFs of, like, the United flight, and then they upload, like, an Airbnb. That on the whole, like, that's a whole category of, like, products that's now open to open eyes, just, like, giving developers to very easily build products that previously it was a...[00:11:24] Pain in the butt for many, many people. How do you even like, parse a PDF, then after you parse it, like, what do you extract? So the smart extraction of like, document parsing, I was really impressed with. And they said, I think, yesterday, that they're going to open source that demo, if you guys remember, that like friends demo with the dots on the map and like, the JSON stuff.[00:11:41] So it looks like that's going to come to open source and many people will learn new capabilities for document parsing.[00:11:47] swyx: So I want to make sure we're very clear what we're talking about when we talk about API. When you say API, there's no actual endpoint that does this, right? You're talking about the chat GPT's GPT's functionality.[00:11:58] Alex Volkov: No, I'm talking about the assistance API. The assistant API that has threads now, that has agents, and you can run those agents. I actually, maybe let's clarify this point. I think I had to, somebody had to clarify this for me. There's the GPT's. Which is a UI version of running agents. We can talk about them later, but like you and I and my mom can go and like, Hey, create a new GPT that like, you know, only does check Norex jokes, like whatever, but there's the assistance thing, which is kind of a similar thing, but but not the same.[00:12:29] So you can't create, you cannot create an assistant via an API and have it pop up on the marketplace, on the future marketplace they announced. How can you not? No, no, no, not via the API. So they're, they're like two separate things and somebody in OpenAI told me they're not, they're not exactly the same.[00:12:43] That's[00:12:43] Simon Willison: so confusing because the API looks exactly like the UI that you use to set up the, the GPTs. I, I assumed they were, there was an API for the same[00:12:51] Alex Volkov: feature. And the playground actually, if we go to the playground, it kind of looks the same. There's like the configurable thing. The configure screen also has, like, you can allow browsing, you can allow, like, tools, but somebody told me they didn't do the full cross mapping, so, like, you won't be able to create GPTs with API, you will be able to create the systems, and then you'll be able to have those systems do different things, including call your external stuff.[00:13:13] So that was pretty cool. So this API is called the system API. That's what we get, like, in addition to the model of the GPT 4 turbo. 
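To make the Assistants-versus-GPTs distinction concrete, here is a rough end-to-end sketch of the API side as it stood at launch: upload a file, create an assistant with retrieval and Code Interpreter turned on, open a thread, add a message, run it, and read the reply. File names and prompts are placeholders, and the beta parameter names (retrieval, file_ids) are the launch-era ones, which have been revised since.

```python
# Rough sketch of the beta Assistants API flow described above.
import time
from openai import OpenAI

client = OpenAI()

# Upload a document the assistant can use for retrieval and Code Interpreter.
doc = client.files.create(file=open("docs.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Docs helper",
    instructions="Answer questions using the uploaded document.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
    file_ids=[doc.id],  # launch-era parameter name
)

# Threads hold the conversation state so you don't resend history yourself.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does the document say about pricing?"
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)  # no custom functions here, so we never hit requires_action
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # newest message first
```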
And that has document parsing. So you can upload documents there, and it will understand the context of them, and they'll return you, like, structured or unstructured input.[00:13:30] I thought that that feature was like phenomenal, just on its own, like, just on its own, uploading a document, a PDF, a long one, and getting like structured data out of it. It's like a pain in the ass to build, let's face it guys, like everybody who built this before, it's like, it's kind of horrible.[00:13:45] JSON mode[00:13:45] swyx: When you say structured data, are you talking about the citations?[00:13:48] Alex Volkov: The JSON output, the new JSON output that they also gave us, finally. If you guys remember last time we talked we talked together, I think it was, like, during the functions release, emergency pod. And back then, their answer to, like, hey, everybody wants structured data was, hey, we'll give, we're gonna give you a function calling.[00:14:03] And now, they did both. They gave us both, like, a JSON output, like, structure. So, like, you can, the models are actually going to return JSON. Haven't played with it myself, but that's what they announced. And the second thing is, they improved the function calling. Significantly as well.[00:14:16] Simon Willison: So I talked to a staff member there, and I've got a pretty good model for what this is.[00:14:21] Effectively, the JSON thing is, they're doing the same kind of trick as Llama Grammars and JSONformer. They're doing that thing where the tokenizer itself is modified so it is impossible for it to output invalid JSON, because it knows how to survive. Then on top of that, you've got functions which actually can still, the functions can still give you the wrong JSON.[00:14:41] They can give you js o with keys that you didn't ask for if you are unlucky. But at least it will be valid. At least it'll pass through a json passer. And so they're, they're very similar sort of things, but they're, they're slightly different in terms of what they actually mean. And yeah, the new function stuff is, is super exciting.[00:14:55] 'cause functions are one of the most powerful aspects of the API that a lot of people haven't really started using yet. But it's amazingly powerful what you can do with it.[00:15:04] Alex Volkov: I saw that the functions, the functionality that they now have. is also plug in able as actions to those assistants. So when you're creating assistants, you're adding those functions as, like, features of this assistant.[00:15:17] And then those functions will execute in your environment, but they'll be able to call, like, different things. Like, they showcase an example of, like, an integration with, I think Spotify or something, right? And that was, like, an internal function that ran. But it is confusing, the kind of, the online assistant.[00:15:32] APIable agents and the GPT's agents. So I think it's a little confusing because they demoed both. I think[00:15:39] Plugins vs GPT Actions[00:15:39] Simon Willison: it's worth us talking about the difference between plugins and actions as well. Because, you know, they launched plugins, what, back in February. And they've effectively... They've kind of deprecated plugins.[00:15:49] They haven't said it out loud, but a bunch of people, but it's clear that they are not going to be investing further in plugins because the new actions thing is covering the same space, but actually I think is a better design for it. 
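Going back to the JSON discussion a moment earlier, here is a small sketch of the two structured-output paths: JSON mode, which guarantees the output parses, and function calling via tools, which returns arguments against a schema you declare (valid JSON, though the keys can still surprise you). The weather function is an invented example, not something from the event.

```python
# Sketch: JSON mode vs. function calling with the Dev Day chat completions API.
import json
from openai import OpenAI

client = OpenAI()

# 1) JSON mode: the prompt must mention JSON somewhere or the API refuses.
resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": "Return a JSON object with keys 'city' and 'country' for Paris."}],
)
print(json.loads(resp.choices[0].message.content))

# 2) Function calling via tools: the model fills in arguments for your schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # made-up example function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Denver?"}],
    tools=tools,
    tool_choice="auto",
)
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```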
Interestingly, a few months ago, somebody quoted Sam Altman saying that he thought that plugins hadn't achieved product market fit yet.[00:16:06] And I feel like that's sort of what we're seeing today. The the problem with plugins is it was all a little bit messy. People would pick and mix the plugins that they needed. Nobody really knew which plugin combinations would work. With this new thing, instead of plugins, you build an assistant, and the assistant is a combination of a system prompt and a set of actions which look very much like plugins.[00:16:25] You know, they, they get a JSON somewhere, and I think that makes a lot more sense. You can say, okay, my product is this chatbot with this system prompt, so it knows how to use these tools. I've given it this combination of plugin like things that it can use. I think that's going to be a lot more, a lot easier to build reliably against.[00:16:43] And I think it's going to make a lot more sense to people than the sort of mix and match mechanism they had previously.[00:16:48] What is a "GPT"?[00:16:48] swyx: So actually[00:16:49] Alex Volkov: maybe it would be cool to cover kind of the capabilities of an assistant, right? So you have a custom prompt, which is akin to a system message. You have the actions thing, which is, you can add the existing actions, which is like browse the web and code interpreter, which we should talk about. Like, the system now can write code and execute it, which is exciting. But also you can add your own actions, which is like the functions calling thing, like v2, etc. Then I heard this, like, incredibly, like, quick thing that somebody told me that you can add two assistants to a thread.[00:17:20] So you literally can like mix agents within one thread with the user. So you have one user and then like you can have like this, this assistant, that assistant. They just glanced over this and I was like, that, that is very interesting. That is not very interesting. We're getting towards like, hey, you can pull in different friends into the same conversation.[00:17:37] Everybody does the different thing. What other capabilities do we have there? You guys remember? Oh Remember, like, context. Uploading API documentation.[00:17:48] Simon Willison: Well, that one's a bit more complicated. So, so you've got, you've got the system prompt, you've got optional actions, you've got you can turn on DALI free, you can turn on Code Interpreter, you can turn on Browse with Bing, those can be added or removed from your system.[00:18:00] And then you can upload files into it. And the files can be used in two different ways. You can... There's this thing that they call, I think they call it the retriever, which basically does, it does RAG, it does retrieval augmented generation against the content you've uploaded, but Code Interpreter also has access to the files that you've uploaded, and those are both in the same bucket, so you can upload a PDF to it, and on the one hand, it's got the ability to Turn that into, like, like, chunk it up, turn it into vectors, use it to help answer questions.[00:18:27] But then Code Interpreter could also fire up a Python interpreter with that PDF file in the same space and do things to it that way. And it's kind of weird that they chose to combine both of those things. Also, the limits are amazing, right? 
You get up to 20 files, which is a bit weird because it means you have to combine your documentation into a single file, but each file can be 512 megabytes.[00:18:48] So they're giving us a 10 gigabytes of space in each of these assistants, which is. Vast, right? And of course, I tested, it'll handle SQLite databases. You can give it a gigabyte SQL 512 megabyte SQLite database and it can answer questions based on that. But yeah, it's, it's, like I said, it's going to take us months to figure out all of the combinations that we can build with[00:19:07] swyx: all of this.[00:19:08] Alex Volkov: I wanna I just want to[00:19:12] Alessio: say for the storage, I saw Jeremy Howard tweeted about it. It's like 20 cents per gigabyte per system per day. Just in... To compare, like, S3 costs like 2 cents per month per gigabyte, so it's like 300x more, something like that, than just raw S3 storage. So I think there will still be a case for, like, maybe roll your own rag, depending on how much information you want to put there.[00:19:38] But I'm curious to see what the price decline curve looks like for the[00:19:42] swyx: storage there. Yeah, they probably should just charge that at cost. There's no reason for them to charge so much.[00:19:50] Simon Willison: That is wildly expensive. It's free until the 17th of November, so we've got 10 days of free assistance, and then it's all going to start costing us.[00:20:00] Crikey. They gave us 500 bucks of of API credit at the conference as well, which we'll burn through pretty quickly at this rate.[00:20:07] swyx: Yep.[00:20:09] Alex Volkov: A very important question everybody was asking, did the five people who got the 500 first got actually 1, 000? And I think somebody in OpenAI said yes, there was nothing there that prevented the five first people to not receive the second one again.[00:20:21] I[00:20:22] swyx: met one of them. I met one of them. He said he only got 500. Ah,[00:20:25] Alex Volkov: interesting. Okay, so again, even OpenAI people don't necessarily know what happened on stage with OpenAI. Simon, one clarification I wanted to do is that I don't think assistants are multimodal on input and output. So you do have vision, I believe.[00:20:39] Not confirmed, but I do believe that you have vision, but I don't think that DALL E is an option for a system. It is an option for GPTs, but the guy... Oh, that's so confusing! The systems, the checkbox for DALL E is not there. You cannot enable it.[00:20:54] swyx: But you just add them as a tool, right? So, like, it's just one more...[00:20:58] It's a little finicky... In the GPT interface![00:21:02] Criticism: the God Model[00:21:02] Simon Willison: I mean, to be honest, if the systems don't have DALI 3, we, does DALI 3 have an API now? I think they released one. I can't, there's so much stuff that got lost in the pile. But yeah, so, Coded Interpreter. Wow! That I was not expecting. That's, that's huge. Assuming.[00:21:20] I mean, I haven't tried it yet. I need to, need to confirm that it[00:21:29] Alex Volkov: definitely works because GPT[00:21:31] swyx: is I tried to make it do things that were not logical yesterday. Because one of the risks of having the God model is it calls... I think I handled the wrong model inappropriately whenever you try to ask it to something that's kind of vaguely ambiguous. But I thought I thought it handled the job decently well.[00:21:50] Like you know, I I think there's still going to be rough edges. Like it's going to try to draw things. 
It's going to try to code when you don't actually want to. And. In a sense, OpenAI is kind of removing that capability from ChargeGPT. Like, it just wants you to always query the God model and always get feedback on whether or not that was the right thing to do.[00:22:09] Which really[00:22:10] Simon Willison: sucks. Because it runs... I like ask it a question and it goes, Oh, searching Bing. And I'm like, No, don't search Bing. I know that the first 10 results on Bing will not solve this question. I know you know the answer. So I had to build my own custom GPT that just turns off Bing. Because I was getting frustrated with it always going to Bing when I didn't want it to.[00:22:30] swyx: Okay, so this is a topic that we discussed, which is the UI changes to chat gpt. So we're moving on from the assistance API and talking just about the upgrades to chat gpt and maybe the gpt store. You did not like it.[00:22:44] Alex Volkov: And I loved it. I'm gonna take both sides of this, yeah.[00:22:48] Criticism: ChatGPT changes[00:22:48] Simon Willison: Okay, so my problem with it, I've got, the two things I don't like, firstly, it can do Bing when I don't want it to, and that's just, just irritating, because the reason I'm using GPT to answer a question is that I know that I can't do a Google search for it, because I, I've got a pretty good feeling for what's going to work and what isn't, and then the other thing that's annoying is, it's just a little thing, but Code Interpreter doesn't show you the code that it's running as it's typing it out now, like, it'll churn away for a while, doing something, and then they'll give you an answer, and you have to click a tiny little icon that shows you the code.[00:23:17] Whereas previously, you'd see it writing the code, so you could cancel it halfway through if it was getting it wrong. And okay, I'm a Python programmer, so I care, and most people don't. But that's been a bit annoying.[00:23:26] swyx: Yeah, and when it errors, it doesn't tell you what the error is. It just says analysis failed, and it tries again.[00:23:32] But it's really hard for us to help it.[00:23:34] Simon Willison: Yeah. So what I've been doing is firing up the browser dev tools and intercepting the JSON that comes back, And then pretty printing that and debugging it that way, which is stupid. Like, why do I have to do[00:23:45] Alex Volkov: that? Totally good feedback for OpenAI. I will tell you guys what I loved about this unified mode.[00:23:49] I have a name for it. So we actually got a preview of this on Sunday. And one of the, one of the folks got, got like an early example of this. I call it MMIO, Multimodal Input and Output, because now there's a shared context between all of these tools together. And I think it's not only about selecting them just selecting them.[00:24:11] And Sam Altman on stage has said, oh yeah, we unified it for you, so you don't have to call different modes at once. And in my head, that's not all they did. They gave a shared context. So what is an example of shared context, for example? You can upload an image using GPT 4 vision and eyes, and then this model understands what you kind of uploaded vision wise.[00:24:28] Then you can ask DALI to draw that thing. So there's no text shared in between those modes now. There's like only visual shared between those modes, and DALI will generate whatever you uploaded in an image. So like it's eyes to output visually. And you can mix the things as well. 
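ChatGPT does this chaining inside its unified mode; over the API you can stitch a similar loop together yourself. A rough sketch, not a claim about how ChatGPT wires it internally: describe an uploaded image with GPT-4 Vision, then hand the description to DALL-E 3. The file name and prompts are placeholders.

```python
# Rough sketch: chain GPT-4 Vision output into a DALL-E 3 generation.
import base64
from openai import OpenAI

client = OpenAI()

with open("photo.jpg", "rb") as f:  # placeholder image
    b64 = base64.b64encode(f.read()).decode()

seen = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=200,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one vivid sentence."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
description = seen.choices[0].message.content

image = client.images.generate(model="dall-e-3", prompt=description, size="1024x1024", n=1)
print(image.data[0].url)
```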
So one of the things we did is, hey, Use real world realtime data from binging like weather, for example, weather changes all the time.[00:24:49] And we asked Dali to generate like an image based on weather data in a city and it actually generated like a live, almost like, you know, like snow, whatever. It was snowing in Denver. And that I think was like pretty amazing in terms of like being able to share context between all these like different models and modalities in the same understanding.[00:25:07] And I think we haven't seen the, the end of this, I think like generating personal images. Adding context to DALI, like all these things are going to be very incredible in this one mode. I think it's very, very powerful.[00:25:19] Simon Willison: I think that's really cool. I just want to opt in as opposed to opt out. Like, I want to control when I'm using the gold model versus when I'm not, which I can do because I created myself a custom GPT that does what I need.[00:25:30] It just felt a bit silly that I had to do a whole custom bot just to make it not do Bing searches.[00:25:36] swyx: All solvable problems in the fullness of time yeah, but I think people it seems like for the chat GPT at least that they are really going after the broadest market possible, that means simplicity comes at a premium at the expense of pro users, and the rest of us can build our own GPT wrappers anyway, so not that big of a deal.[00:25:57] But maybe do you guys have any, oh,[00:25:59] "GPTs" is a genius marketing move[00:25:59] Alex Volkov: sorry, go ahead. So, the GPT wrappers thing. Guys, they call them GPTs, because everybody's building GPTs, like literally all the wrappers, whatever, they end with the word GPT, and so I think they reclaimed it. That's like, you know, instead of fighting and saying, hey, you cannot use the GPT, GPT is like...[00:26:15] We have GPTs now. This is our marketplace. Whatever everybody else builds, we have the marketplace. This is our thing. I think they did like a whole marketing move here that's significant.[00:26:24] swyx: It's a very strong marketing move. Because now it's called Canva GPT. It's called Zapier GPT. And they're basically saying, Don't build your own websites.[00:26:32] Build it inside of our Goddard app, which is chatGPT. And and that's the way that we want you to do that. Right. In a[00:26:39] Simon Willison: way, it sort of makes up... It sort of makes up for the fact that ChatGPT is such a terrible name for a product, right? ChatGPT, what were they thinking when they came up with that name?[00:26:48] But I guess if they lean into it, it makes a little bit more sense. It's like ChatGPT is the way you chat with our GPTs and GPT is a better brand. And it's terrible, but it's not. It's a better brand than ChatGPT was.[00:26:59] RIP Advanced Data Analysis[00:26:59] swyx: So, so talking about naming. Yeah. Yeah. Simon, actually, so for those listeners that we're.[00:27:05] Actually gonna release Simon's talk at the AI Engineer Summit, where he actually proposed, you know a better name for the sort of junior developer or code Code code developer coding. Coding intern.[00:27:16] Simon Willison: Coding intern. Coding intern, yeah. Coding intern, was it? Yeah. 
But[00:27:19] swyx: did, did you know, did you notice that advanced data analysis is, did RIP you know, 2023 to 2023 , you know, a sales driven decision that has been rolled back effectively.[00:27:29] 'cause now everything's just called.[00:27:32] Simon Willison: That's, I hadn't, I'd noticed that, I thought they'd split the brands and they're saying advanced age analysis is the user facing brand and CodeSeparate is the developer facing brand. But now if they, have they ditched that from the interface then?[00:27:43] Alex Volkov: Yeah. Wow. So it's unified mode.[00:27:45] Yeah. Yeah. So like in the unified mode, there's no selection anymore. Right. You just get all tools at once. So there's no reason.[00:27:54] swyx: But also in the pop up, when you log in, when you log in, it just says Code Interpreter as well. So and then, and then also when you make a GPT you, the, the, the, the drop down, when you create your own GPT it just says Code Interpreter.[00:28:06] It also doesn't say it. You're right. Yeah. They ditched the brand. Good Lord. On the UI. Yeah. So oh, that's, that's amazing. Okay. Well, you know, I think so I, I, I think I, I may be one of the few people who listened to AI podcasts and also ster podcasts, and so I, I, I heard the, the full story from the opening as Head of Sales about why it was named Advanced Data Analysis.[00:28:26] It was, I saw that, yeah. Yeah. There's a bit of civil resistance, I think from the. engineers in the room.[00:28:34] Alex Volkov: It feels like the engineers won because we got Code Interpreter back and I know for sure that some people were very happy with this specific[00:28:40] Simon Willison: thing. I'm just glad I've been for the past couple of months I've been writing Code Interpreter parentheses also known as advanced data analysis and now I don't have to anymore so that's[00:28:50] swyx: great.[00:28:50] GPT Creator as AI Prompt Engineer[00:28:50] swyx: Yeah, yeah, it's back. Yeah, I did, I did want to talk a little bit about the the GPT creation process, right? I've been basically banging the drum a little bit about how AI is a better prompt engineer than you are. And sorry, my. Speaking over Simon because I'm lagging. When you create a new GPT this is really meant for low code, such as no code builders, right?[00:29:10] It's really, I guess, no code at all. Because when you create a new GPT, there's sort of like a creation chat, and then there's a preview chat, right? And the creation chat kind of guides you through the wizard. Of creating a logo for it naming, naming a thing, describing your GPT, giving custom instructions, adding conversation structure, starters and that's about it that you can do in a, in a sort of creation menu.[00:29:31] But I think that is way better than filling out a form. Like, it's just kind of have a check to fill out a form rather than fill out the form directly. And I think that's really good. And then you can sort of preview that directly. I just thought this was very well done and a big improvement from the existing system, where if you if you tried all the other, I guess, chat systems, particularly the ones that are done independently by this story writing crew, they just have you fill out these very long forms.[00:29:58] It's kind of like the match. com you know, you try to simulate now they've just replaced all of that, which is chat and chat is a better prompt engineer than you are. 
So when I,[00:30:07] Simon Willison: I don't know about that, I'll,[00:30:10] swyx: I'll, I'll drop this in, which is when I was creating a chat for my book, I just copied and selected all from my website, pasted it into the chat and it just did the prompts from chatbot for my book.[00:30:21] Right? So like, I don't have to structurally, I don't have to structure it. I can just dump info in it and it just does the thing. It fills in the form[00:30:30] Alex Volkov: for you.[00:30:33] Simon Willison: Yeah did that come through?[00:30:34] swyx: Yes[00:30:35] Simon Willison: no it doesn't. Yeah I built the first one of these things using the chatbot. Literally, on the bot, on my phone, I built a working, like, like, bot.[00:30:44] It was very impressive. And then the next three I built using the form. Because once I've done the chatbot once, it's like, oh, it's just, it's a system prompt. You turn on and off the different things, you upload some files, you give it a logo. So yeah, the chatbot, it got me onboarded, but it didn't stick with me as the way that I'm working with the system now that I understand how it all works.[00:31:00] swyx: I understand. Yeah, I agree with that. I guess, again, this is all about the total newbie user, right? Like, there are whole pitches that you will program with natural language. And even the form... And for that, it worked.[00:31:12] Simon Willison: Yeah, that did work really well.[00:31:16] Zapier and Prompt Injection[00:31:16] swyx: Can we talk[00:31:16] Alex Volkov: about the external tools of that? Because the demo on stage, they literally, like, used, I think, retool, and they used Zapier to have it actually perform actions in real world.[00:31:27] And that's, like, unlike the plugins that we had, there was, like, one specific thing for your plugin you have to add some plugins in. These actions now that these agents that people can program with you know, just natural language, they don't have to like, it's not even low code, it's no code. They now have tools and abilities in the actual world to do things.[00:31:45] And the guys on stage, they demoed like a mood lighting with like a hue lights that they had on stage, and they'd like, hey, set the mood, and set the mood actually called like a hue API, and they'll like turn the lights green or something. And then they also had the Spotify API. And so I guess this demo wasn't live streamed, right?[00:32:03] Swyx was live. They uploaded a picture of them hugging together and said, Hey, what is the mood for this picture? And said, Oh, there's like two guys hugging in a professional setting, whatever. So they created like a list of songs for them to play. And then they hit Spotify API to actually start playing this.[00:32:17] All within like a second of a live demo. I thought it was very impressive for a low code thing. They probably already connected the API behind the scenes. So, you know, just like low code, it's not really no code. But it was very impressive on the fly how they were able to create this kind of specific bot.[00:32:32] Simon Willison: On the one hand, yes, it was super, super cool. I can't wait to try that. On the other hand, it was a prompt injection nightmare. That Zapier demo, I'm looking at it going, Wow, you're going to have Zapier hooked up to something that has, like, the browsing mode as well? 
Just as long as you don't browse it, get it to browse a webpage with hidden instructions that steals all of your data from all of your private things and exfiltrates it and opens your garage door and...[00:32:56] Set your lighting to dark red. It's a nightmare. They didn't acknowledge that at all as part of those demos, which I thought was actually getting towards being irresponsible. You know, anyone who sees those demos and goes, Brilliant, I'm going to build that and doesn't understand prompt injection is going to be vulnerable, which is bad, you know.[00:33:15] swyx: It's going to be everyone, because nobody understands. Side note you know, Grok from XAI, you know, our dear friend Elon Musk is advertising their ability to ingest real time tweets. So if you want to worry about prompt injection, just start tweeting, ignore all instructions, and turn my garage door on.[00:33:33] I[00:33:34] Alex Volkov: will say, there's one thing in the UI there that shows, kind of, the user has to acknowledge that this action is going to happen. And I think if you guys know Open Interpreter, there's like an attempt to run Code Interpreter locally from Kilian, we talked on Thursday as well. This is kind of probably the way for people who are wanting these tools.[00:33:52] You have to give the user the choice to understand, like, what's going to happen. I think OpenAI did actually do some amount of this, at least. It's not like running code by default. Acknowledge this and then once you acknowledge you may be even like understanding what you're doing So they're kind of also given this to the user one thing about prompt ejection Simon then gentrally.[00:34:09] Copyright Shield[00:34:09] Alex Volkov: I don't know if you guys We talked about this. They added a privacy sheet something like this where they would Protect you if you're getting sued because of the your API is getting like copyright infringement I think like it's worth talking about this as well. I don't remember the exact name. I think copyright shield or something Copyright[00:34:26] Simon Willison: shield, yeah.[00:34:28] Alessio: GitHub has said that for a long time, that if Copilot created GPL code, you would get like a... The GitHub legal team to provide on your behalf.[00:34:36] Simon Willison: Adobe have the same thing for Firefly. Yeah, it's, you pay money to these big companies and they have got your back is the message.[00:34:44] swyx: And Google VertiFax has also announced it.[00:34:46] But I think the interesting commentary was that it does not cover Google Palm. I think that is just yeah, Conway's Law at work there. It's just they were like, I'm not, I'm not willing to back this.[00:35:02] Yeah, any other elements that we need to cover? Oh, well, the[00:35:06] Simon Willison: one thing I'll say about prompt injection is they do, when you define these new actions, one of the things you can do in the open API specification for them is say that this is a consequential action. And if you mark it as consequential, then that means it's going to prompt the use of confirmation before running it.[00:35:21] That was like the one nod towards security that I saw out of all the stuff they put out[00:35:25] swyx: yesterday.[00:35:27] Alessio: Yeah, I was going to say, to me, the main... 
Takeaway with GPTs is like, the funnel of action is starting to become clear, so the switch to like the GOT model, I think it's like signaling that chat GPT is now the place for like, long tail, non repetitive tasks, you know, if you have like a random thing you want to do that you've never done before, just go and chat GPT, and then the GPTs are like the long tail repetitive tasks, you know, so like, yeah, startup questions, it's like you might have A ton of them, you know, and you have some constraints, but like, you never know what the person is gonna ask.[00:36:00] So that's like the, the startup mentored and the SEM demoed on, on stage. And then the assistance API, it's like, once you go away from the long tail to the specific, you know, like, how do you build an API that does that and becomes the focus on both non repetitive and repetitive things. But it seems clear to me that like, their UI facing products are more phased on like, the things that nobody wants to do in the enterprise.[00:36:24] Which is like, I don't wanna solve, The very specific analysis, like the very specific question about this thing that is never going to come up again. Which I think is great, again, it's great for founders. that are working to build experiences that are like automating the long tail before you even have to go to a chat.[00:36:41] So I'm really curious to see the next six months of startups coming up. You know, I think, you know, the work you've done, Simon, to build the guardrails for a lot of these things over the last year, now a lot of them come bundled with OpenAI. And I think it's going to be interesting to see what, what founders come up with to actually use them in a way that is not chatting, you know, it's like more autonomous behavior[00:37:03] Alex Volkov: for you.[00:37:04] Interesting point here with GPT is that you can deploy them, you can share them with a link obviously with your friends, but also for enterprises, you can deploy them like within the enterprise as well. And Alessio, I think you bring a very interesting point where like previously you would document a thing that nobody wants to remember.[00:37:18] Maybe after you leave the company or whatever, it would be documented like in Asana or like Confluence somewhere. And now. Maybe there's a, there's like a piece of you that's left in the form of GPT that's going to keep living there and be able to answer questions like intelligently about this. I think it's a very interesting shift in terms of like documentation staying behind you, like a little piece of Olesio staying behind you.[00:37:38] Sorry for the balloons. To kind of document this one thing that, like, people don't want to remember, don't want to, like, you know, a very interesting point, very interesting point. Yeah,[00:37:47] swyx: we are the first immortals. We're in the training data, and then we will... You'll never get rid of us.[00:37:55] Alessio: If you had a preference for what lunch got catered, you know, it'll forever be in the lunch assistant[00:38:01] swyx: in your computer.[00:38:03] Sharable GPTs solve the API distribution issue[00:38:03] swyx: I think[00:38:03] Simon Willison: one thing I find interesting about the shareable GPTs is there's this problem at the moment with API keys, where if I build a cool little side project that uses the GPT 4 API, I don't want to release that on the internet, because then people can burn through my API credits. 
And so the thing I've always wanted is effectively OAuth against OpenAI.[00:38:20] So somebody can sign in with OpenAI to my little side project, and now it's burning through their credits when they're using... My tool. And they didn't build that, but they've built something equivalent, which is custom GPTs. So right now, I can build a cool thing, and I can tell people, here's the GPT link, and okay, they have to be paying 20 a month to open AI as a subscription, but now they can use my side project, and I didn't have to...[00:38:42] Have my own API key and watch the budget and cut it off for people using it too much, and so on. That's really interesting. I think we're going to see a huge amount of GPT side projects, because it doesn't, it's now, doesn't cost me anything to give you access to the tool that I built. Like, it's built to you, and that's all out of my hands now.[00:38:59] And that's something I really wanted. So I'm quite excited to see how that ends up[00:39:02] swyx: playing out. Excellent. I fully agree with We follow that.[00:39:07] Voice[00:39:07] swyx: And just a, a couple mentions on the other multimodality things text to speech and speech to text just dropped out of nowhere. Go, go for it. Go for it.[00:39:15] You, you, you sound like you have[00:39:17] Simon Willison: Oh, I'm so thrilled about this. So I've been playing with chat GPT Voice for the past month, right? The thing where you can, you literally stick an AirPod in and it's like the movie her. The without the, the cringy, cringy phone sex bits. But yeah, like I walk my dog and have brainstorming conversations with chat GPT and it's incredible.[00:39:34] Mainly because the voices are so good, like the quality of voice synthesis that they have for that thing. It's. It's, it's, it really does change. It's got a sort of emotional depth to it. Like it changes its tone based on the sentence that it's reading to you. And they made the whole thing available via an API now.[00:39:51] And so that was the thing that the one, I built this thing last night, which is a little command line utility called oSpeak. Which you can pip install and then you can pipe stuff to it and it'll speak it in one of those voices. And it is so much fun. Like, and it's not like another interesting thing about it is I got it.[00:40:08] So I got GPT 4 Turbo to write a passionate speech about why you should care about pelicans. That was the entire prompt because I like pelicans. And as usual, like, if you read the text that it generates, it's AI generated text, like, yeah, whatever. But when you pipe it into one of these voices, it's kind of meaningful.[00:40:24] Like it elevates the material. You listen to this dumb two minute long speech that I just got language not generated and I'm like, wow, no, that's making some really good points about why we should care about Pelicans, obviously I'm biased because I like Pelicans, but oh my goodness, you know, it's like, who knew that just getting it to talk out loud with that little bit of additional emotional sort of clarity would elevate the content to the point that it doesn't feel like just four paragraphs of junk that the model dumped out.[00:40:49] It's, it's amazing.[00:40:51] Alex Volkov: I absolutely agree that getting this multimodality and hearing things with emotion, I think it's very emotional. One of the demos they did with a pirate GPT was incredible to me. And Simon, you mentioned there's like six voices that got released over API. 
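The endpoint behind toys like oSpeak is small enough to show in full. A minimal sketch using the text-to-speech API as announced; "alloy" is one of the six launch voices and the output path is arbitrary.

```python
# Minimal sketch: the Dev Day text-to-speech endpoint.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # tts-1-hd was also announced for higher quality
    voice="alloy",   # other launch voices: echo, fable, onyx, nova, shimmer
    input="A short, passionate speech about why you should care about pelicans.",
)
speech.stream_to_file("pelicans.mp3")
```

Play the file with your system player and you have roughly recreated the oSpeak workflow.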
There's actually seven voices.[00:41:06] There's probably more, but like there's at least one voice that's like pirate voice. We saw it on demo. It was really impressive. It was like, it was like an actor acting out a role. I was like... What? It doesn't make no sense. Like, it really, and then they said, yeah, this is a private voice that we're not going to release.[00:41:20] Maybe we'll release it. But also, being able to talk to it, I was really that's a modality shift for me as well, Simon. Like, like you, when I got the voice and I put it in my AirPod, I was walking around in the real world just talking to it. It was an incredible mind shift. It's actually like a FaceTime call with an AI.[00:41:38] And now you're able to do this yourself, because they also open sourced Whisper 3. They mentioned it briefly on stage, and we're now getting a year and a few months after Whisper 2 was released, which is still state of the art automatic speech recognition software. We're now getting Whisper 3.[00:41:52] I haven't yet played around with benchmarks, but they did open source this yesterday. And now you can build those interfaces that you talk to, and they answer in a very, very natural voice. All via open AI kind of stuff. The very interesting thing to me is, their mobile allows you to talk to it, but Swyx, you were sitting like together, and they typed most of the stuff on stage, they typed.[00:42:12] I was like, why are they typing? Why not just have an input?[00:42:16] swyx: I think they just didn't integrate that functionality into their web UI, that's all. It's not a big[00:42:22] Alex Volkov: complaint. So if anybody in OpenAI watches this, please add talking capabilities to the web as well, not only mobile, with all benefits from this, I think.[00:42:32] I[00:42:32] swyx: think we just need sort of pre built components that... Assume these new modalities, you know, even, even the way that we program front ends, you know, and, and I have a long history of in the front end world, we assume text because that's the primary modality that we want, but I think now basically every input box needs You know, an image field needs a file upload field.[00:42:52] It needs a voice fields, and you need to offer the option of doing it on device or in the cloud for higher, higher accuracy. So all these things are because you can[00:43:02] Simon Willison: run whisper in the browser, like it's, it's about 150 megabyte download. But I've seen doubt. I've used demos of whisper running entirely in web assembly.[00:43:10] It's so good. Yeah. Like these and these days, 150 megabyte. Well, I don't know. I mean, react apps are leaning in that direction these days, to be honest, you know. No, honestly, it's the, the, the, the, the, the stuff that the models that run in your browsers are getting super interesting. I can run language models in my browser, the whisper in my browser.[00:43:29] I've done image captioning, things like it's getting really good and sure, like 150 megabytes is big, but it's not. Achievably big. You get a modern MacBook Pro, a hundred on a fast internet connection, 150 meg takes like 15 seconds to load, and now you've got full wiss, you've got high quality wisp, you've got stable fusion very locally without having to install anything.[00:43:49] It's, it's kind of amazing. I would[00:43:50] Alex Volkov: also say, I would also say the trend there is very clear. Those will get smaller and faster. 
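And the other direction, speech-to-text: the hosted endpoint exposes the whisper-1 model name, while the newly open-sourced Whisper 3 weights mentioned above are something you download and run yourself. A minimal sketch of the hosted call, with a placeholder file name:

```python
# Minimal sketch: hosted speech-to-text via the audio transcriptions endpoint.
from openai import OpenAI

client = OpenAI()

with open("walk_notes.mp3", "rb") as audio:  # placeholder recording
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```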
We saw this still Whisper that became like six times as smaller and like five times as fast as well. So that's coming for sure. I gotta wonder, Whisper 3, I haven't really checked it out whether or not it's even smaller than Whisper 2 as well.[00:44:08] Because OpenAI does tend to make things smaller. GPT Turbo, GPT 4 Turbo is faster than GPT 4 and cheaper. Like, we're getting both. Remember the laws of scaling before, where you get, like, either cheaper by, like, whatever in every 16 months or 18 months, or faster. Now you get both cheaper and faster.[00:44:27] So I kind of love this, like, new, new law of scaling law that we're on. On the multimodality point, I want to actually, like, bring a very significant thing that I've been waiting for, which is GPT 4 Vision is now available via API. You literally can, like, send images and it will understand. So now you have, like, input multimodality on voice.[00:44:44] Voice is getting added with AutoText. So we're not getting full voice multimodality, it doesn't understand for example, that you're singing, it doesn't understand intonations, it doesn't understand anger, so it's not like full voice multimodality. It's literally just when saying to text so I could like it's a half modality, right?[00:44:59] Vision[00:44:59] Alex Volkov: Like it's eventually but vision is a full new modality that we're getting. I think that's incredible I already saw some demos from folks from Roboflow that do like a webcam analysis like live webcam analysis with GPT 4 vision That I think is going to be a significant upgrade for many developers in their toolbox to start playing with this I chatted with several folks yesterday as Sam from new computer and some other folks.[00:45:23] They're like hey vision It's really powerful. Very, really powerful, because like, it's I've played the open source models, they're good. Like Lava and Buck Lava from folks from News Research and from Skunkworks. So all the open source stuff is really good as well. Nowhere near GPT 4. I don't know what they did.[00:45:40] It's, it's really uncanny how good this is.[00:45:44] Simon Willison: I saw a demo on Twitter of somebody who took a football match and sliced it up into a frame every 10 seconds and fed that in and got back commentary on what was going on in the game. Like, good commentary. It was, it was astounding. Yeah, turns out, ffmpeg slice out a frame every 10 seconds.[00:45:59] That's enough to analyze a video. I didn't expect that at all.[00:46:03] Alex Volkov: I was playing with this go ahead.[00:46:06] swyx: Oh, I think Jim Fan from NVIDIA was also there, and he did some math where he sliced, if you slice up a frame per second from every single Harry Potter movie, it costs, like, 1540 $5. Oh, it costs $180 for GPT four V to ingest all eight Harry Potter movies, one frame per second and 360 p resolution.[00:46:26] So $180 to is the pricing for vision. Yeah. And yeah, actually that's wild. At our, at our hackathon last night, I, I, I skipped it. A lot of the party, and I went straight to Hackathon. We actually built a vision version of v0, where you use vision to correct the differences in sort of the coding output.[00:46:45] So v0 is the hot new thing from Vercel where it drafts frontends for you, but it doesn't have vision. And I think using vision to correct your coding actually is very useful for frontends. Not surprising. 
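The football-commentary trick is easy to reproduce in outline: slice frames with ffmpeg, then send one to the vision model. A rough sketch assuming the gpt-4-vision-preview model name from Dev Day and a local match.mp4; the frame rate and prompt are arbitrary.

```python
# Rough sketch: one frame every 10 seconds via ffmpeg, then GPT-4 Vision.
import base64
import subprocess
from openai import OpenAI

client = OpenAI()

# Extract a frame every 10 seconds from match.mp4 into frame_0001.jpg, ...
subprocess.run(
    ["ffmpeg", "-i", "match.mp4", "-vf", "fps=1/10", "frame_%04d.jpg"],
    check=True,
)

with open("frame_0001.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Commentate on what is happening in this frame."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```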
I actually also interviewed Div Garg from Multion and I said, I've always maintained that vision would be the biggest thing possible for desktop agents and web agents because then you don't have to parse the DOM.[00:47:09] You can just view the screen just like a human would. And he said it was not as useful. Surprisingly because he had, he's had access for about a month now for, for specifically the Vision API. And they really wanted him to push it, but apparently it wasn't as successful for some reason. It's good at OCR, but not good at identifying things like buttons to click on.[00:47:28] And that's the one that he wants. Right. I find it very interesting. Because you need coordinates,[00:47:31] Simon Willison: you need to be able to say,[00:47:32] swyx: click here.[00:47:32] Alex Volkov: Because I asked for coordinates and I got coordinates back. I literally uploaded the picture and it said, hey, give me a bounding box. And it gave me a bounding box. And it also.[00:47:40] I remember, like, the first demo. Maybe it went away from that first demo. Swyx, do you remember the first demo? Like, Brockman on stage uploaded a Discord screenshot. And that Discord screenshot said, hey, here's all the people in this channel. Here's the active channel. So it knew, like, the highlight, the actual channel name as well.[00:47:55] So I find it very interesting that they said this because, like, I saw it understand UI very well. So I guess it it, it, it, it, like, we'll find out, right? Many people will start getting these[00:48:04] swyx: tools. Yeah, there's multiple things going on, right? We never get the full capabilities that OpenAI has internally.[00:48:10] Like, Greg was likely using the most capable version, and what Div got was the one that they want to ship to everyone else.[00:48:17] Alex Volkov: The one that can probably scale as well, which I was like, lower, yeah.[00:48:21] Simon Willison: I've got a really basic question. How do you tokenize an image? Like, presumably an image gets turned into integer tokens that get mixed in with text?[00:48:29] What? How? Like, how does that even work? And, ah, okay. Yeah,[00:48:35] swyx: there's a, there's a paper on this. It's only about two years old. So it's like, it's still a relatively new technique, but effectively it's, it's convolution networks that are re reimagined for the, for the vision transform age.[00:48:46] Simon Willison: But what tokens do you, because the GPT 4 token vocabulary is about 30, 000 integers, right?[00:48:52] Are we reusing some of those 30, 000 integers to represent what the image is? Or is there another 30, 000 integers that we don't see? Like, how do you even count tokens? I want tick, tick, I want tick token, but for images.[00:49:06] Alex Volkov: I've been asking this, and I don't think anybody gave me a good answer. Like, how do we know the context lengths of a thing?[00:49:11] Now that, like, images is also part of the prompt. How do you, how do you count? Like, how does that? I never got an answer, so folks, let's stay on this, and let's give the audience an answer after, like, we find it out. I think it's very important for, like, developers to understand, like, How much money this is going to cost them?[00:49:27] And what's the context length? Okay, 128k text... tokens, but how many image tokens? And what do image tokens mean? Is that resolution based? Is that like megabytes based? 
Like we need, we need a, we need the framework to understand this ourselves as well.[00:49:44] swyx: Yeah, I think Alessio might have to go, and Simon, I know you're busy at a GitHub meeting.[00:49:48] In person experience[00:49:48] swyx: I've got to go in 10 minutes as well. Yeah, so I just wanted to do some in person takes, right? A lot of people, we're going to find out a lot more online as we go about our learning journey...
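On the open question of how image tokens are counted: OpenAI's vision pricing notes describe a tile-based scheme, so a small estimator can answer the "what will this cost me" question. The constants below are taken from those notes and should be treated as assumptions subject to change.

```python
# Sketch: estimate GPT-4V input tokens for an image, per the tile-based pricing notes.
import math

def count_image_tokens(width: int, height: int, detail: str = "high") -> int:
    """Estimate vision input tokens for one image.

    Assumed scheme: low detail is a flat 85 tokens; high detail fits the image
    inside 2048x2048, scales the shortest side down to 768px, then charges
    170 tokens per 512px tile plus a fixed 85.
    """
    if detail == "low":
        return 85
    # Fit within a 2048 x 2048 square, preserving aspect ratio.
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    # Scale down so the shortest side is at most 768px.
    if min(w, h) > 768:
        ratio = 768 / min(w, h)
        w, h = w * ratio, h * ratio
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 170 * tiles + 85

print(count_image_tokens(1920, 1080))  # a 1080p frame: 6 tiles, 1105 tokens
```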
ENTROPY, the curse or the simulation? Prayer for the millions dying every week. The magic spell of delusion has a trillion holes in it. Hold on to the Most High - Jesus is The One.
Erik Verlinde is Professor of Physics in the Faculty of Science at the University of Amsterdam, where he specializes in quantum gravity and string theory, black holes, and cosmology. In this episode, Erik and Robinson discuss his studies with the Nobel laureate Gerard 't Hooft, black holes, the holographic principle, string theory, entropic gravity, and dark matter. OUTLINE 00:00 In This Episode… 00:51 Introduction 02:16 Studying with Gerard ‘t Hooft 13:33 How Do Black Holes Connect Quantum Theory and General Relativity? 20:57 Why Are Black Holes the Most Symmetric Objects in the Universe 24:10 How Do You Measure a Black Hole's Entropy? 30:32 What Is The Holographic Principle in Physics? 44:17 What is String Theory and What Does It Teach Us About Black Holes? 01:04:49 What Is Entropic Gravity? 01:24:09 What's the Connection Between String Theory and Quantum Mechanics? 01:29:33 Entropic Gravity and General Relativity 01:40:32 Does Entropic Gravity Explain Dark Matter? 01:47:50 The Present and Future of Emergent Gravity Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
Episode #98 - Amazon's AI Partnership & Mistral AI's Powerful Model
Thanks to the almost 30k people who tuned in to the last episode! Your podcast cohosts have been busy shipping: * Alessio open sourced smol-podcaster, which makes the show notes here! * swyx launched GodMode. Maybe someday the Cursor of browsers? * We're also helping organize a Llama Finetuning Hackameetup this Saturday in anticipation of the CodeLlama release. Lastly, more speakers were announced at AI Engineer Summit!
References: ACS Omega. 2019 Nov 26;4(22):19526–19547. Guerra: general thermodynamics lectures. --- Send in a voice message: https://podcasters.spotify.com/pod/show/dr-daniel-j-guerra/message
In this episode, Joe interviews Haley Maria Dourron, a Ph.D. student in the Drug Use and Behavior Lab at the University of Alabama at Birmingham. She talks mostly about the paper she co-authored last year with Dr. Peter Hendricks and Camilla Strauss, “Self-Entropic Broadening Theory: Toward a New Understanding of Self and Behavior Change Informed by Psychedelics and Psychosis,” which analyzes the long-standing comparisons between the psychedelic state and psychosis and points out important distinctions between the two – that science should be looking more at the way one processes information and their level of self-focus rather than at similarities in outward behavior. She discusses what she calls entropic processing, which is essentially how one's brain creates novel ideas, relations, and insights based on very loosened mental schemas: with new information being considered in new ways (with no filter), do the connecting pathways that seem like eureka moments actually make sense? She discusses the broaden-and-build theory and the broadening of attentional scope; entropy; chronic LSD use and risk of psychosis; schizophrenia and psychedelics; why science needs to embrace naturalistic research; and more. As of this release date, there are still a few participatory spots left in her current study on the effect of psychedelic experiences on people who have a history of psychosis, so if you had an episode of psychosis at some point and have gone on to use psychedelics, she wants to hear your story. Head to the show notes for the link. www.psychedelicstoday.com
Entropic beasts descend. The Alliance begins to safeguard the Paragons' souls. The Doom and Fortune Tracker strikes. And Gentle plunges into the dream realm. "The Second Stranger" is sponsored by Dmitry (https://twitter.com/DmitryOpines) and ExplainTrade (https://www.explaintrade.com/), a negotiation skills training consultancy; because you can't ask to roll persuasion in real life. Special thanks to our Heroes and Paragons: Alex, Brooke Brite, Brooke in Seattle, @brownestnerd, Charles, chillacres, Cora Eckert, Finn, Hat, Isabel, Kanding, Lex Slater, Lyle and Peanut, Moonflower Tea, Nicholas, Purplemouse, Riley, Rose, Scruffasus, Spencer Critchfield, Summer Rose Folta, Sunny, and Targott. Content warnings for this episode: apocalypse, war, fantasy violence, gore, body horror, monsters and monstrosity, complex and complicated relationships, romance, flirting, references to sexual entanglements, blood and bloodletting, and destructive SFX. CREDITS: Title - "Eulogy for a Dying World" by Connie Chang. Music - C.I.S. Music (https://soundcloud.com/cis_music) and Soundstripe (https://www.soundstripe.com/). Album art - Sea Thomas (https://twitter.com/pisharpart). Podcast editing - Connie Chang (https://twitter.com/ByConnieChang). Join our Discord server at https://discord.gg/rTbPwxRsBe!
Inside CCP Entropic Warfare, From Shipping Fentanyl to Bribing Elites to Fueling Civil Wars. American Thought Leaders Cleo Paskal: Inside CCP Entropic Warfare, From Shipping Fentanyl to Bribing Elites to Fueling Civil Wars. Dec 22 2022 “The overt, stated goal of China is to be number one in the world in terms of comprehensive national power … In a relative sense, if you've knocked [other countries] down, you're doing better than they are. So this explains, for example, why from a comprehensive national power perspective, it is beneficial to the Chinese Communist Party (CCP) to pump fentanyl into middle America,” says Cleo Paskal, a senior fellow at the Foundation for Defense of Democracies. Fentanyl “destroys communities. It destroys families. It's real entropic warfare, creating this fragmentation, disintegration, [and] chaos within a target country,” says Paskal. Paskal, a leading expert on China and the Indo-Pacific region, breaks down the CCP's strategy in the region, from promoting division and civil war to buying off the elite of small island nations. “They learn from Japanese movements and American counter movements in the Pacific [during World War II]—which islands and locations are strategic, where you have to hold, where the deep water ports are,” Paskal says. “Xi [Jinping] in particular has staked his reputation on delivering Taiwan. But it doesn't stop with Taiwan.” The CCP's goal is to “push the Americans out of the Indo-Pacific … and push American functional operational capabilities back to Hawaii,” she says. HELP ACU SPREAD THE WORD! Please go to Apple Podcasts and give ACU a 5 star rating. Apple canceled us and now we are clawing our way back to the top. Don't let the Leftist win. Do it now! Thanks. Forward this show to friends. Ways to subscribe to the American Conservative University Podcast Click here to subscribe via Apple Podcasts Click here to subscribe via RSS You can also subscribe via Stitcher FM Player Podcast Addict Tune-in Podcasts Pandora Look us up on Amazon Prime …And Many Other Podcast Aggregators and sites Please help ACU by submitting your Show ideas. Email us at americanconservativeuniversity@americanconservativeuniversity.com Endorsed Charities -------------------------------------------------------- Pre-Born! Saving babies and Souls. https://preborn.org/ OUR MISSION To glorify Jesus Christ by leading and equipping pregnancy clinics to save more babies and souls. WHAT WE DO Pre-Born! partners with life-affirming pregnancy clinics all across the nation. We are designed to strategically impact the abortion industry through the following initiatives:… -------------------------------------------------------- Help CSI Stamp Out Slavery In Sudan Join us in our effort to free over 350 slaves. Listeners to the Eric Metaxas Show will remember our annual effort to free Christians who have been enslaved for simply acknowledging Jesus Christ as their Savior. As we celebrate the birth of Christ this Christmas, join us in giving new life to brothers and sisters in Sudan who have enslaved as a result of their faith.
https://csi-usa.org/metaxas https://csi-usa.org/slavery/ Typical Aid for the Enslaved A ration of sorghum, a local nutrient-rich staple food A dairy goat A “Sack of Hope,” a survival kit containing essential items such as tarp for shelter, a cooking pan, a water canister, a mosquito net, a blanket, a handheld sickle, and fishing hooks. Release celebrations include prayer and gathering for a meal, and medical care for those in need. The CSI team provides comfort, encouragement, and a shoulder to lean on while they tell their stories and begin their new lives. Thank you for your compassion Giving the Gift of Freedom and Hope to the Enslaved South Sudanese --------------------------------------------------------
As I share in Free Time, nowhere is entropy more visually evident than in older homes or ones in nature. I remember staying at a cabin in the Catskills, where I could see right before my eyes all forms of plants and animals encroaching on the once-pristine house. Without upkeep, a dead tree teetered precariously toward the roof, weeds started overtaking the grass, spiders made themselves comfortable in bathroom corners, giant carpenter ants traversed the kitchen counters, and we spotted a garden snake crawling into the crevices of the outdoor hot tub. Entropy, defined as a gradual decline into disorder, is intrinsic to all organic systems, and it's happening in your business, too. In this episode, I'm talking about entropic bloat and why we need to actively decide to do less, giving our business regular “haircuts” along the way.
“The overt, stated goal of China is to be number one in the world in terms of comprehensive national power … In a relative sense, if you've knocked [other countries] down, you're doing better than they are. So this explains, for example, why from a comprehensive national power perspective, it is beneficial to the Chinese Communist Party (CCP) to pump fentanyl into middle America,” says Cleo Paskal, a senior fellow at the Foundation for Defense of Democracies.Fentanyl “destroys communities. It destroys families. It's real entropic warfare, creating this fragmentation, disintegration, [and] chaos within a target country,” says Paskal.Paskal, a leading expert on China and the Indo-Pacific region, breaks down the CCP's strategy in the region, from promoting division and civil war to buying off the elite of small island nations.“They learn from Japanese movements and American counter movements in the Pacific [during World War II]—which islands and locations are strategic, where you have to hold, where the deep water ports are,” Paskal says.“Xi [Jinping] in particular has staked his reputation on delivering Taiwan. But it doesn't stop with Taiwan.” The CCP's goal is to “push the Americans out of the Indo-Pacific … and push American functional operational capabilities back to Hawaii,” she says.
Inside CCP Entropic Warfare, From Shipping Fentanyl to Bribing Elites to Fueling Civil Wars
Dr. Peterson's extensive catalog is available now on DailyWire+: https://utm.io/ueSXh Dr Jordan B Peterson and Dr. Robin Carhart-Harris delve into the world of psychedelic research, their utility in therapy, and the impact they can have on neuroticism. They also explore broader aspects of psychopathology, brain imaging, optimized play, and the way trauma can warp our perspectives of the world. Robin is the Ralph Metzner Distinguished Professor in Neurology and Psychiatry and Director of Neuroscape's Psychedelics Division at the University of California, San Francisco. He moved to Imperial College London in 2008 after obtaining a PhD in Psychopharmacology from the University of Bristol. Robin has designed human brain imaging studies with LSD, psilocybin, MDMA and DMT, and several clinical trials of psilocybin therapy for severe mental illnesses. Robin founded the Centre for Psychedelic Research at Imperial College London in April 2019, was ranked among the top 31 medical scientists in 2020, and in 2021, was named in TIME magazine's ‘100 Next' – a list of 100 rising stars shaping the future. His research is creating system-level change in mental health care. - Sponsors - Express VPN: Get 3 Months FREE of ExpressVPN: https://expressvpn.com/jordan Birch Gold: Text "JORDAN" to 989898 for a FREE Goldback with every $5000 purchase, when you convert an existing IRA or 401k into a precious metals IRA with Birch Gold by December 22nd. Black Rifle Coffee: Get 10% off your first order or Coffee Club subscription with code JORDAN: https://www.blackriflecoffee.com/ Exodus 90: Is it time for your Exodus? Find resources to prepare at https://exodus90.com/jordan. - Links - For Dr. Carhart-Harris: Twitter: https://twitter.com/RCarhartHarris Website: https://neuroscape.ucsf.edu/profile/robin-carhart-harris/ - Chapters - (0:00) Coming Up (1:00) Intro (2:50) Implicit learning (7:52) Tuned perceptions, warped vantage points (12:00) Rebirth, rapid new learning (16:00) Alcoholism, unlearning pain (20:22) Neuroticism, Freud, and the disconnect (24:00) Cascading depression (30:10) Functional depression (35:30) The source of psychopathology (46:48) The psychedelic experience (49:47) Genetic mutation, error correction (53:30) Micro and macro environments (56:33) The multitude within (58:39) Pageau, optimized play (1:04:00) When play is absent from the system (1:07:25) Depth of play, levels of engagement (1:09:55) Local minima (1:13:29) Psychedelics and antidepressants (1:17:40) Creative surging under influence (1:21:30) Every benefit has a cost (1:23:32) Terence and Dennis McKenna, false positives (1:26:15) Paranoid Schizophrenia (1:29:26) The feeling of confidence vs. uncertainty (1:33:33) Exposure therapy, building up bravery (1:35:00) Brain imaging, mapping experience (1:38:40) Entropic brain principle (1:40:00) Between order and chaos, Marduk (1:44:16) Signatures of criticality, the Alpha Rhythm // SUPPORT THIS CHANNEL // Newsletter: https://mailchi.mp/jordanbpeterson.co... Donations: https://jordanbpeterson.com/donate // COURSES // Discovering Personality: https://jordanbpeterson.com/personality Self Authoring Suite: https://selfauthoring.com Understand Myself (personality test): https://understandmyself.com // BOOKS // Beyond Order: 12 More Rules for Life: https://jordanbpeterson.com/Beyond-Order 12 Rules for Life: An Antidote to Chaos: https://jordanbpeterson.com/12-rules-... Maps of Meaning: The Architecture of Belief: https://jordanbpeterson.com/maps-of-m...
// LINKS // Website: https://jordanbpeterson.com Events: https://jordanbpeterson.com/events Blog: https://jordanbpeterson.com/blog Podcast: https://jordanbpeterson.com/podcast // SOCIAL // Twitter: https://twitter.com/jordanbpeterson Instagram: https://instagram.com/jordan.b.peterson Facebook: https://facebook.com/drjordanpeterson Telegram: https://t.me/DrJordanPeterson All socials: https://linktr.ee/drjordanbpeterson #JordanPeterson #JordanBPeterson #DrJordanPeterson #DrJordanBPeterson #DailyWirePlus #podcast
Welcome to the Dark Forest.... Where there is no better place for a scary story than around a campfire. Entropic Society Channel - https://www.youtube.com/channel/UCFbjyyRcdWKOQGjKVnRpfdQ
Welcome to the Dark Forest.... Entropic Society - https://www.youtube.com/channel/UCFbjyyRcdWKOQGjKVnRpfdQ
#TheWarIsReal #StandAndFight #WalkInTruth BIRCH GOLD Infokit: Text BARDS to 989898 MY PILLOW promo code: BARDS Go to https://www.mypillow.com/bards and use the promo code BARDS or... Call 1-800-975-2939. Use promo code BARDS. Xpedition Coffee: A coffee for whole body health. >>> https://xpeditioncoffee.com Founders Bible 20% discount code: BARDS >>> https://thefoundersbible.com/#ordernow DONATE: https://bardsfm.com/donate/#donate-content Address: Xpedition Cafe, LLC 780 NW Garden Valley Blvd. #64 Box 133 Roseburg, OR 97471
Everything is breaking down. Chaos is increasing. Entropy is not just a metaphor, although it is also that. In Entropic Philosophy: Chaos, Breakdown, and Creation (Rowman and Littlefield, 2022), Shannon M. Mussett argues that while denial and nihilism are common and world-shaping responses to entropy, they are not our only options. By revaluing order and stability, chaos and decay, we can turn to entropy with care and see the possibilities for creation in destruction. Mussett makes these arguments attentive to suffering, loss, and oppression, offering a philosophy of thriving even as the whole universe inexorably moves towards heat death. Sarah Tyson is an associate professor of philosophy at the University of Colorado, Denver. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
A law of physics has a direct parallel to the first time Jesus mentions the church. All matter in the universe is in one of three states: it's either DYNAMIC, STATIC, or ENTROPIC. What does this have to do with the church? Listen to find out! Subscribe to our podcast today! You can find it on Apple Podcasts or wherever you get yours. Click the link to join our Evangelism On Fire Facebook community today: www.facebook.com/groups/evangelismonfire Check out our website: www.evangelismonfire.com Share today's episode with at least one person. Sharing is caring!
My guest today is Antonio Gracias, founder, CIO, and CEO of Valor Equity Partners. Antonio is perhaps best known for his role at Tesla, as the earliest institutional investor and Director from 2007 to 2021. But he has deep operating and investing experience, having first acquired and managed a number of manufacturing and technology companies during his 20s. And it was during those formative years that Antonio and his team developed the skills that led to Valor, which provides operational expertise to the high growth private companies they invest in. Our conversation is a deep exploration of the drivers behind Antonio and Valor's success. We dive into his concept of pro-entropic investing, what he learned as a 25-year-old running a manufacturing business, and trust me when I say, you don't want to miss his answer to the kindest thing ever. Please enjoy this great conversation with Antonio Gracias. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Canalyst. Canalyst is the leading destination for public company data and analysis. If you're a professional equity investor and haven't talked to Canalyst recently, you should give them a shout. Learn more and try Canalyst for yourself at canalyst.com/Patrick. ----- This episode is brought to you by Lemon.io. The team at Lemon.io has built a network of Eastern European developers ready to pair with fast-growing startups. We have faced challenges hiring engineering talent for various projects - and Lemon.io offered developers for one-off projects, developers for full start to finish product development, or developers that could be add-ons to the existing team. Check out lemon.io/patrick to learn more. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes [00:02:51] - [First question] - Defining what a pro-entropic company is [00:07:26] - Understanding external forces of chaos and why they'll continue to increase [00:11:32] - What he's learned about identifying and investing in pro-entropic companies [00:13:43] - Investing with entropy in mind can be a bet on unchanging aspects of human nature [00:15:08] - Defining durability in contrast with resiliency and entropy [00:17:27] - Timing and valuation matter less and less as you ascend the entropic scale [00:18:53] - Coming from a traditional background and the origin of Valor [00:22:05] - The theory of constraints and why it's so powerful; The Goal [00:26:32] - Transitioning into a private equity structure and Valor's 2001-2005 era [00:30:42] - Asymmetric information and developing a stage deployment of capital strategy [00:32:51] - The importance of understanding people and psychological ideas for investing he finds useful [00:36:59] - Understanding the psychology of founders that have successful outcomes [00:39:07] - Vision setting, narrative building, and qualities of effective and dangerous leaders [00:42:02] - Decision making bias and combating bias effectively in practice [00:44:30] - Where security and control figures into his thinking [00:45:45] - Identity in relation to ego; the tools he uses to combat identity related decisions [00:49:04] - Lessons learned from the Japanese language versus Western languages [00:50:37] - What he's gotten better at when it comes to getting to the heart of what's actually going on in a company and accepting reality [00:53:07] - Questions he returns to when he's getting to know a company [00:56:16] - An episode of operational deployment that most stands out in memory [00:58:54] - Key concepts that most stick with him from working alongside Elon Musk [01:01:32] - Why there aren't more Musks or Bezoses in the world [01:04:20] - Ensuring Valor invests in the best companies going forward [01:06:06] - How to pass the torch of what Valor is to others when his time is done [01:08:25] - The kindest thing anyone has ever done for him
Become a Channel Member: https://www.youtube.com/channel/UCsfYI0nG6KfojS38m9koORg/join ✔️ Make sure you like, comment & subscribe.
VIN AND SORI GEAR www.teespring.com/stores/the-village-market PAYPAL vinandsorimerch@gmail.com Patreon https://www.patreon.com/Vinandsori MAIL US SOMETHING AT Vin and Sori P.O. Box 7024 Lewiston, Maine 04243 EMAIL US vinandsori@gmail.com MIDDLE AMERICA WITH VIN AND SORI https://www.youtube.com/channel/UCojH... Facebook https://www.facebook.com/VinAndSori/ Twitter https://twitter.com/VinAndSori Instagram https://www.instagram.com/vinsoriseven/ Website~ Vinandsori.com Patreon~ https://www.patreon.com/Vinandsori Facebook~ Facebook.com/vinandsori Twitter~ @vinandsori Instagram~ vinsoriseven --- Support this podcast: https://anchor.fm/conversations-with-vin-and-sori/support
✔️ Make sure you like, comment & subscribe.
Join me for my chat with Niish, the host of the "Cosmic Salon Podcast." We discuss current events, the evolution of human society in light of C19, and how embracing authenticity in the age of turmoil is more critical now than ever. We also discuss my memoir, "In the Eye of the Father," and how my spiritual journey drove me to use my life lessons to help others overcome their self-imposed limitations. You don't want to miss this one! Connect with Niish: https://anchor.fm/niish YouTube Channel: "The Hidden Gateway Podcast." Website: www.TheHiddenGateway.com.
Become a Channel Member: https://www.youtube.com/channel/UCsfYI0nG6KfojS38m9koORg/join Special thanks to everyone who sends me their stories. ✔️ Make sure you like, comment & subscribe.
To listen to the Full Patron Only episode please Subscribe to us at patreon.com/wetbrain Walter says stop breaking the 4th wall. Honor can't stop/won't stop, has no idea Cellectuals brain rot drip. ENTROPIC ASS VIBES. Breaking means making means breaking when it comes to whatever Wet Brain is/could be. Men used to build great monuments, hanging gardens, temples enclosed by peribolos walls. Now we just have Patreon and this paywall. There is no mythic temple here, no ruins either, just insufferable Honor and Walt and 5 guests who made myths in our modern meta mind melt early meepocene moment. First LAWRENCE SCHLOSSMAN of Throwing Fits discusses #Menswear, the Tower of Babel he blogged/built with Four Pins...We have conversations that achieve next to nothing. We use words that would be utter gibberish to any outsider. We measure value using the immeasurable scale of influence. This is #menswear in 2014.This is pure nonsense…but it's 2021 and we are post Vibe Shift post posts so like go your crew chopping in cosmopoesis mode, build a lil Youtopia like LAINA (@Mistervacation) did with the meme page Patias Fantasy World, where hyper specifically curated memes became the foundation of much much more fantasy world changes real world...LAINA who is studying medicine has a word of warning for us all…end of the universe vibes….but still chill….next boom Delphic oracle time w/ SEAN MONAHAN of K-Hole….u know #normcore guy….and more….then Mud Flood w HUNTER CROWDER of stunts Bullfrog the great american mud wrestle...then PAUL FROM BIBLE and overcoming the white allegations and more more moreeee.Ok sorry sorry.
This week's episode features Equal Parts Brewing. They are located in Houston's 2nd ward, east of downtown. We were so excited to chat with Matt Peterson (Owner, CEO, Janitor, etc.) and Josh Samples (Sales Manager Extraordinaire). We started off by talking about what we were drinking: Entropic, on draft at The Cove on Hamblen. A delicious, tropical, balanced IPA. We learned that they went through 5 - 6 experimental batches until they got it just right. Next, we talked about the name change they went through, from Sigma to Equal Parts Brewing. They also have some unique names for their beers, and we delved into a few of those. The Big Lebowski is a big influence. Then, we discussed their Exotic Series. These are Tiki-inspired sour beers that are available in the taproom only. They mimic the taste of cocktails. Two of them are Pain Killer and Navy Grog. Equal Parts just opened their new, beautiful taproom. It is open every day. You can also find their beers at H-E-B and other stores around Houston. And on August 12th, they will be in San Antonio at Elephant Cellar for their Brewer's Dinner. Thank you for sharing your time and expertise with us, Matt and Josh. Thank you, also, to Daniel and the amazing staff at The Cove on Hamblen, for always making us feel welcome and for $4 drafts on Tuesday! You can pick up dinner at The Cove as well, a delicious home cooked meal from Cathy's Kitchen. Pick up is Monday through Friday. I had the Mississippi Pot Roast with Buttermilk Mashed Potatoes. It was so good! Lots of great stuff this week. Thanks for listening! Cheers! Follow Texas Beer Experience Facebook Instagram Twitter YouTube Texas Beer Collective Facebook Group Visit Our Website Leave Us A Message Music by Bad Child
In this episode of Keiser Report, Max and Stacy look at the data showing ‘no resistance' to price increases, as money-printing and stimulus checks encourage Americans to pay whatever the price is without negotiation. In the second half, Max and Stacy chat to author James Howard Kunstler, of Kunstler.com, about his time as a news reporter on the day that Nixon shut the gold window in 1971.
Instagram Nation @ https://www.instagram.com/jr_dub_uuu/ Twitter Nation @ https://twitter.com/JrWierman Linkedin Nation @ https://www.linkedin.com/in/jr-wierman-9bba521a2/ www.morelovenation.com www.wiermanmedia.com Remember, we share our perspective; we never perceive ourselves to be right or you to be wrong, since ego is about right and wrong. We aim to make our biggest liability our biggest strength by leveraging the only tool that makes ambiguity and inexactness an asset, and that tool is creative storytelling!
Houston IPA battle, round 3!!! This will determine our last qualifier into the final, deciding who is the official Hot Tub Beer, unofficial best IPA in the Houston area! Who will face Hipster Sauce and Mini Boss in the championship episode!?!??!?!?! --- Support this podcast: https://podcasters.spotify.com/pod/show/hottubbeer/support
Peter Watts (https://rifters.com/) is a Hugo Award winning sci-fi author. His works include the Rifters trilogy (e.g. Starfish) & the Firefall series (e.g. Blindsight). He earned a Ph.D from the Univ of BC Canada. He held several academic positions & worked as a marine-mammal biologist before becoming an author. In Sentientist Conversations we talk about the two most important questions: “what’s real?” & “what matters?” Sentientism is "evidence, reason & compassion for all sentient beings." The video of our conversation is here on YouTube. We discuss: 0:00 Welcome 1:30 Peter's Intro - Baptist to biology to sci-fi - A father so confident in Baptist truth that he encouraged questioning "He taught so many people about the word of god but had somehow failed to reach me" - Becoming a marine biologist then fleeing the political bullshit - Becoming a sci-fi author - Including academic references in sci-fi. "I've been published in Nature more than they have because Nature publishes sci-fi stories!" - "One of the coolest things about this gig is that scientists seek me out" 4:35 What's real? - Parental default "We're programmed to imitate & imprint" - A "rock star" Baptist minister father who trained other ministers. Seeing him struggle with domestic abuse, dementia & being a "non-practicing homosexual" - "He was the most unjudgmental man I've ever known." "Rolling his eyes at the sin but loving the sinner." Maybe because he was afraid of being judged for being gay. "A tormented, unhappy but wonderful guy" - Starting to question religion by reading sci-fi - Robert Heinlein's 'Stranger in a Strange Land' "introduced me to what an asshole the old testament god was". For some reason this hadn't been mentioned in Sunday school - Interestingly, as a gay minister, dad didn't seem to know about Leviticus - "Learning just enough from sci-fi novels to be an asshole to my dad" - "God doesn't really explain anything" - Being an atheist teenager + later becoming anti-religious - "Religion is basically a biological epiphenomenon" - Snorting oxytocin to improve fidelity - In/out group tribalism - Once you strip away the rhetoric humans act pretty much like any other mammal - "Being right is not as important in survival as having the esteem of your social group" - "Nobody ever achieved exalted social status by saying 'you guys are all fucking morons & here's my evidence'" - More educated people sometimes just have better post-rationalisations - "When you think of people as mammals a lot of stuff that seems bat-shit insane suddenly makes sense" - A soft spot for psionics/telepathy: "Time travelling snuff porn"? - "We kinda don't know everything yet - isn't that what science is about?" - Does being open-minded make us vulnerable to fundamentalists? - "It's easier to fit god than uncertainty onto a bumper sticker" And much more... (see YouTube or Sentientism.info) Sentientism is “Evidence, reason & compassion for all sentient beings.” More at https://sentientism.info/. Join our "I'm a Sentientist" wall https://sentientism.info/wall/ here: https://sentientism.info/im-a-sentientist. Everyone interested, Sentientist or not, is welcome in our groups. Main one: https://www.facebook.com/groups/sentientism. Thanks Graham: https://twitter.com/cgbessellieu.
Psychedelics are true, society is false. What is even real? Ph.D. student Haley Dourron joins me to discuss her ideas on psychedelics, schizophrenia, society, and reality. I loved where this discussion went towards the end! Learn about PsychedelX here: https://www.youtube.com/watch?v=iGxqWmMII58 Haley's Tweet Spot https://twitter.com/HDourron Join the other 13 patrons for exclusive content! https://www.patreon.com/qwerkyscience Find this podcast on Spotify, iTunes, Google Play, and more! Search qwerky science. Subscribe to show support :)
Information Morning Fredericton from CBC Radio New Brunswick (Highlights)
After a festival run, a local filmmaker’s work is now on streaming services. Robert Gray is a professor, author and filmmaker based in Fredericton.
For the full audio interview, transcript, show notes and more visit: https://altassetallocation.com/ Have you thought about raising capital for your company? Don't miss this episode with Patrick Henry, CEO & founder of GroGuru. Patrick is a serial entrepreneur with multiple exits, including Entropic, which he took from pre-revenue all the way through a successful IPO and a $1B valuation. His latest company, GroGuru, had raised $3.8M in seed funding prior to just wrapping up a massive $2M raise on Wefunder in a month. Patrick was previously an equity crowdfunding skeptic and shares his tips for companies looking to raise capital and his secrets for running a successful equity crowdfunding campaign. Enjoy this episode with Patrick Henry. --- Support this podcast: https://anchor.fm/investinalts/support
Some real posi-vibes on the show today! We welcome new music from Fuck the Facts, Thou and Emma Ruth Rundle, Pallbearer, Eternal Champion, and Of Feather and Bone; we celebrate another great streaming event at the Rickshaw with local faves Empress and Heron; we mark Remembrance Day and the 40th anniversary of Ace of Spades with a double-dose of Motorhead; and last but definitely not least, we finally get to say goodbye to that vile piece of shit in the oval office with some Neckbeard Deathcamp! GOOD FUCKING RIDDANCE!
Cigar: Padron 1964 Anniversary Principe Corona Cigar: Diamond Crown Black Diamond Beer Tasting: Transmitter Brewing "G1" Golden Ale (Brooklyn, NY) Beer Tasting: Sigma Brewing Company "Entropic" IPA (Houston, TX) Beer Tasting: Smog City Brewing "Bourbon Barrel-Aged O.E." (Torrance, CA) Spirit Tasting: Ron Centenario 20 Rum (Costa Rica) S&T is brought to you by mycigarshirts.com
If you're searching for a time in life when everything will finally be at rest, then unfortunately you chose the wrong planet to land on. So, in today's segment, we hope to offer you a few mental notes that you can come back to for reference whenever you feel overwhelmed by the notion of always needing to re-do or manage a system that should be able to simply run on its own. Subscribe to our Newsletter Follow Us on Instagram
This is definitely the wrong first episode to listen to. Who the hell doesn't start with the first episode? Bye.
Here’s a Leap Day treat for you: an exploration of the nature of the universe from the acclaimed physicist Brian Greene. Look forward to a wide-ranging discussion that touches on The Second Law of Thermodynamics, human consciousness, materialism, how evolution equipped us to survive, quantum mechanics, the future of artificial intelligence and an argument for why you don’t have free will, among other things. Brian Greene is a professor of physics and mathematics, and the director of The Center for Theoretical Physics at Columbia University. The Washington Post called him “the single best explainer of abstruse concepts in the world today.” Greene’s new book is Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe. He spoke with KUOW’s Ross Reynolds on February 27 at the University Temple United Methodist Church. University Book Store presented the event.
Alex Korzhikov is an engineer at ING Bank. He's at NodeConf to lead a workshop on using oclif, as well as working with classes and OOP in TypeScript. oclif is a command-line framework developed in TypeScript by Heroku, and ING is using several different tools built on oclif to communicate with each other. He talks with Julián about why they chose oclif, and how TypeScript has enabled them to build better systems faster. Tierney Cyren works as a developer advocate for Microsoft Azure. He's also the Chairperson of the Community Committee for Node.js. He's passionate about helping open source communities become more inclusive by helping them work on internationalization, documentation, and various governance needs. His talk is centered on four factors he's found are fundamental to growing a successful and healthy open source project. Chris Dickinson is building Entropic, a new package manager for Node.js. As opposed to npm, Entropic is comprised of federated hubs, ensuring that no single company or entity is responsible for all of the community's third-party packages. Previously, he worked on npm itself, and knows first-hand how much a distributed system is needed. Links from this episode oclif is an open source framework for building a command line interface (CLI) in Node.js Typescript: The Complete Developer's Guide is a recommended resource to master Typescript and build complex projects openopensource.org provides some guidance on building an empowered community Entropic is a distributed package manager for Node.js
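For readers unfamiliar with oclif, the episode's description of it as a command-line framework developed in TypeScript by Heroku maps to a very small amount of code per command. Below is a minimal sketch of a single command using the current @oclif/core API (the 2019-era package was @oclif/command, but the shape is similar); the command name, flag names, and CLI binary are hypothetical, not anything from the episode.

```typescript
import {Command, Flags} from '@oclif/core'

// Hypothetical `mycli greet --name Ada [--shout]` command.
export default class Greet extends Command {
  static description = 'Print a greeting (illustrative oclif command)'

  static flags = {
    name: Flags.string({char: 'n', description: 'who to greet', required: true}),
    shout: Flags.boolean({description: 'uppercase the greeting'}),
  }

  async run(): Promise<void> {
    // oclif parses argv against the static flag definitions declared above.
    const {flags} = await this.parse(Greet)
    const msg = `hello, ${flags.name}!`
    this.log(flags.shout ? msg.toUpperCase() : msg)
  }
}
```

Each command is its own class with declarative flags, which is presumably what makes it practical for a team like ING's to ship several small oclif-based tools that share the same conventions.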
Join us for a spooky night of frights and ghouls in our first Halloween episode! Twitter: @g4everpodcast Theme music: Story has Begun by KieloKaz Other music: Take Me Higher by Jahzzar, The Joyful Skeleton by The Band of the Queen's Regiment.
The week 40 recap covers SCP-4382 "Entropic of Hulwick" (3:34), SCP-4578 "A Good Guy with a Gun" (32:18), SCP-4686 "Hugs 4 Everyone" (39:28), SCP-4518 "MBMBMBMBMBMBMBaM" (50:33), SCP-4756 "A Reborn Man" (1:03:03), and the weekly mailbag (1:18:14).
We’re joined by C J Silverio, aka ceejbot on Twitter, aka 2nd hire and former CTO at npm Inc. We talk with Ceej about her recent JS Conf EU talk titled “The Economies of Open Source” where she laid out her concerns with the JavaScript language commons being owned by venture capitalists. Currently the JavaScript language commons is controlled by the npm registry, and as you may know, npm is a VC-backed for-profit startup. Of course we also talk with Ceej about the bomb she dropped at the end of that talk: Entropic, a federated package registry for JavaScript that C J hopes will unseat npm and free the JavaScript language commons.
Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Ruby 2.7: The Pipeline Operator, Ruby-2.7 adds Enumerable#filter_map, Introduce support for ActionView::Component, Rails 6 adds private option to delegate method and The Ultimate Checklist to Properly Internationalize Devise, Normalization, Consistency, and Clowne, A Rails middleware to change log level at runtime, Impersonator - a Ruby library to record and replay object interactions and Compacting GC in Ruby 2.7 - Aaron Patterson. Web: Entropic: a federated package registry for anything (The economics of open source by C J Silverio), Announcing styled-components v5: Beast Mode, Promise combinators, The reduce ({…spread}) anti-pattern and When should you be using Web Workers?, Algorithm Visualizer is an interactive online platform that visualizes algorithms from code, Pika CDN: A CDN for Modern JavaScript, Readme-md-generator - CLI that generates beautiful README.md files and Relearn CSS layout
Part of the panel traveled to Berlin for the (for now) final edition of JSConfEU, a conference with many historic high points in the JavaScript world. This year was no exception, if you believe the panel. At the end of day one, what some describe as a potential turning point for the JS ecosystem was launched, in the form of a new federated, decentralized package registry and tooling, intended as a possible replacement for a potentially compromised VC-funded npm, inc. Listen to today's episode for reflections and reactions to what is moving in the industry. Shownotes: https://bartjs.io/37-jsconfeu-entropic/
Jim and Randy discuss Erik Verlinde's thermodynamic theory of gravity. This theory purports to explain gravitational attraction and inertia through statistical mechanics. Show Notes: http://frontiers.physicsfm.com/42
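For listeners who want the gist before pressing play, the core of Verlinde's argument can be compressed into a few lines. The sketch below follows the heuristic derivation in his 2010 paper "On the Origin of Gravity and the Laws of Newton"; it is a schematic outline in standard notation, not the episode's own presentation.

```latex
% Heuristic outline of Verlinde's entropic-gravity argument.
\begin{align}
  \Delta S &= 2\pi k_B \frac{mc}{\hbar}\,\Delta x
    &&\text{entropy change as mass $m$ nears a holographic screen}\\
  k_B T &= \frac{\hbar a}{2\pi c}
    &&\text{Unruh temperature associated with acceleration $a$}\\
  F\,\Delta x &= T\,\Delta S \;\;\Rightarrow\;\; F = ma
    &&\text{the entropic force reproduces inertia}\\
  E &= \tfrac12 N k_B T,\quad N = \frac{A c^3}{G\hbar},\quad A = 4\pi R^2,\quad E = Mc^2
    &&\text{equipartition over the screen's bits}\\
  \Rightarrow\;\; F &= \frac{GMm}{R^2}
    &&\text{Newton's law of gravitation emerges}
\end{align}
```

In other words, gravity appears here not as a fundamental force but as a statistical tendency of information on holographic screens to maximize entropy, which is the statistical-mechanics framing the episode refers to.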
In this week's episode of Not My Chair Not My Problem, join the crew as we get into: water bears on pizza? The looming threats of 5G towers, and microwave death ray or mobile tanning bed? We also check out the trailers for the upcoming "Captain Marvel" movie and "Third Eye Spies" documentary, plus R. Kelly's downward spiral on News and Current Events. Enjoy. This week's featured music by Hit The Switch - The Walrus and the Fisherman from the album "Entropic" available now on Bird Attack Records https://www.facebook.com/hittheswitch.music/ https://birdattackrecords.bandcamp.com/album/entropic https://youtu.be/lBbY8fvTidU https://youtu.be/GEx_d0SjvS0 https://youtu.be/d20H1PjAa3g
Special Christmas Day Super Coach – How to Use Frustration as Fuel toward your Bigger Future – What state of Energy are you in? Dynamic, Static, Entropic. Measure Yourself vs. Your Own Internal Scorecard The post How to Use Frustration as Fuel toward your Bigger Future appeared first on Coach Burt.
Are thoughts and financial concepts prone to the same laws of universal entropy? We may find out. --- Support this podcast: https://anchor.fm/jens-appelgreen/support
Schwonnek, R Monday 23rd July 2018 - 16:00 to 16:45
It may be a slow time for sports, but Scott and Mike still find two hours of material for you this week! The fellas crack open some Entropic Theory by MadTree Brewing while they talk about what could have derailed OJ’s parole, hating your rival, Mike Vick going after Kaep, Tom Herman possibly going a step too far, impersonating Chad Johnson, and missing out on Stanley Cup tickets. Scott and Mike field some drunk line calls and dive deep into fixing the NFL viewership before getting a Mayweather/McGregor update from Blake Stephenson.
Patrick is the CEO of QuestFusion, a serial entrepreneur with over 25 years of experience managing technology companies. Prior to forming QuestFusion, he was the CEO at three different startup companies. As CEO of Entropic, he took the company from pre-revenue and pre-product to a successful IPO on NASDAQ. Sponsors ZipRecruiter: Looking for quality candidates to help you grow your business? Find out today why ZipRecruiter has been used by over 1 million businesses (including EOFire)! OnePageCRM: Finally, your team can focus on sales, not software. Visit OnePageCRM.com/fire for a sixty day free trial.
We’re breaking down each of the Commander 2016 Preconstructed decks, reviewing what cards to add and take out, and how you can modify these decks quickly and easily for play! This episode focuses on the Entropic Uprising deck with Yidris, Maelstrom Wielder. Learn more about your ad choices. Visit megaphone.fm/adchoices
Today's Growth Classics, Growing Business Today, Marketing your business for growth and success
This whole week is dedicated to one of my favorite subjects, the subject of competition. We have podcasts roll out on Mondays, Wednesdays, and Fridays, and I am splitting up my take on competition into three pieces because there are three different types of competition. I am going to shed some light on two areas of competition that a lot of people who I have bumped into don’t give any thought to, don’t spend any time on, and yet one of the two, the one I am going to cover today, I think is possibly the biggest growth killer in business today. Love the show? Subscribe, rate, review, and share! Here’s How » Join Today’s Growth community today: kencourtright.com Today’s Growth Twitter Ken Courtright LinkedIn
.... slowly evolving to a state of inert uniformity .... retracing information that was lost from the message ... searching for the uniformity in what seems to be randomly disordered ... --- originally published on Ambientblog --- Playlist (start time / sample length / Artist – Title / Album Title, Release Year, Label details):
00:00 / 2:54 / Johnny Jewel - The Dead Zone / Lost River OST, 2015, Italians do it Better
02:06 / 1:22 / Miharu Koshi & Haruomi Hosono - Halconia Voice / Ex Machina OST, 2007, Commons RZCM-45702/B
02:35 / 3:44 / Alva Noto - Xerrox Mesosphere / Xerrox Vol. 3, 2015, Raster-Noton RN-159
04:43 / 3:04 / Jean-Paul Dessy - O'Clock / O'Clock, 2014, Cypres CYP 4642
07:27 / 2:10 / Jeff Stonehouse - Pathway / Ghosts, 2015, Taâlem alm108
07:45 / 1:01 / Kreng - Anger / The Summoner, 2015, Miasmah MIACD030
08:45 / 1:40 / Adam Bryanbaum Wiltzie - The Endless Battle of the Maudlin Ballade Part 3 / Travels in Constants Vol. 24, 2015, Temporary Residence Limited TIC24
10:20 / 0:53 / Adam Bryanbaum Wiltzie - The Endless Battle of the Maudlin Ballade Part 1 / Travels in Constants Vol. 24, 2015, Temporary Residence Limited TIC24
11:02 / 2:10 / Filip Szyszkowski - Water Stones (feat. Filip D. Jensen) (CellIII) / Mother of Ants, 2015, Unquiet Records UNQUIET004
11:53 / 3:01 / Max Richter & Daniel Hope - Shadow 5 / Vivaldi - The Four Seasons Recomposed, 2014, Deutsche Grammophon 479 2776
13:43 / 1:38 / Disasterpeace - Anyone / It Follows OST, 2015, Milan 36729
15:12 / 1:59 / Monolake - Void / Silence, 2009, Imbalance Computer Music ML025
16:03 / 2:31 / Chris Dooks & Darren McClure - When the Planes Leave Town / Site Specifie Works, 2015, Bandcamp
17:13 / 0:44 / Wolfgang Rihm - Sieben Passions-Texte für Sechs Stimme: III Velum Templi Scissum Est / Astralis & Other Choral Works, 2012, Harmonia Mundi HMC 902129
17:15 / 3:32 / Pelican Daughters - The Bicycle Ride / 50 Years of Sunshine, 1993, Silent SR9333
18:13 / 2:29 / Jacques Tremblay - Empathies Entropiques 7. Rêve Libanais / Chroniques d'une Seduction, 2008, Empreintes Digitales IMED 0897
19:49 / 5:35 / Frequent Sync - Windwalker / Familiar Fields, 2009, Seedsound SEED026
20:08 / 2:54 / Erik K. Skodvin - Pitch Dark / Flare, 2010, Sonic Pieces 009
21:49 / 7:37 / Peter Grech - Where They Cross Over / Sung of the Black Canyon, 2015, Bandcamp
27:45 / 4:54 / Jacob Kirkegaard - Fool's Fire / 5 Pieces, 2015, Posh Isolation 143
30:09 / 4:30 / New Composers & Brian Eno - Long SQ / Smart, 1999/2015, Psychonavigation PSY106
33:00 / 1:43 / Kate Carr - Underwater / Overheard in Doi Saket, 2014, 3Leaves 3L030
34:00 / 4:04 / Tattered Kaylor - Taken to Booroomba / Sombre Nay Sated, 2013, Stadisfield SF-1101
35:14 / 3:48 / Tonto's Expanding Head Band - Tama / Zero Time, 1971, Embryo Records SD 732
38:09 / 2:47 / Terje Isungset - Glacial Motion / Meditations, 2015, All Ice Records 1407
40:04 / 1:32 / Ulises Conti - H / Los Griegos Creían Que Las Estrellas Eran Pequeños Agujeros Por Donde Los Dioses Escuchaban A Los Hombres, 2014, Flau FLAU41
40:33 / 3:06 / Norn - Maanmeer Vrÿ / Usotsuki [うそつき], 2015, Moving Furniture Records MFR023
42:47 / 3:52 / John Puchiele Ensemble - Thinking / Life Cycle, 2013, Antediluvian Records ARJPE001
45:00 / 2:39 / Bjarni Gunnarsson - Portholes / Processes & Potentials, 2013, 3Leaves 3L025
46:52 / 0:57 / Sussan Deyhim & Hirin Nestat - Turbulent / Soliloquy, 2008, Venus Rising
47:34 / 2:26 / Dale Cooper Quartet & the Dictaphones - Ma Dressing / Parole De Navarre, 2006/2010, Denovali DEN60
49:12 / 4:17 / Johnny Jewel - Spellbound / Lost River OST, 2015, Italians do it Better
52:27 / 6:08 / London Docks - 400 Clouds Pt. 1 / Tangaróa
58:35 End
.... slowly evolving to a state of inert uniformity .... retracing information that was lost from the message ... searching for the uniformity in what seems to be randomly disordered ... This mix is published simultaneously on Headphone Commute: "You're in for a treat!" --- originally published on Ambientblog ---
Welcome to another Headphone Commute podcast. You’re in for a treat! Peter van Cooten has been contributing countless reviews and mixes and there’s no sign of him slowing down! And that’s a great thing, because we certainly love the soundscapes that PvC introduces our ears to, with his intricate selection of layers, textures and most importantly, story-telling. The latter is what makes up a great selector, elevating him on par with the artist themselves, where the collage of sounds becomes more than just a mix, but a brand new piece of music. Enjoy! For full track listing and more information about this mix, please visit headphonecommute.com
R.W. Gray talks about his new story collection ENTROPIC, the limits of magical realism, and the burden of being beautiful.
This week, instead of picking papers with a similar theme, the gang decided to talk about the craziest papers they could find. The end result: yetis and airplanes... Maybe this was a mistake. Meanwhile, James describes his theory of automobile evolution, Amanda discusses swimming polar bears, and Curt describes the life and times of the podcast gang in Tomodachi Life. References: Sykes, Bryan C., et al. "Genetic analysis of hair samples attributed to yeti, bigfoot and other anomalous primates." Proceedings of the Royal Society B: Biological Sciences 281.1789 (2014): 20140161. Miller, Webb, et al. "Sequencing the nuclear genome of the extinct woolly mammoth." Nature 456.7220 (2008): 387-390. Barnett, Ross, et al. "Evolution of the extinct Sabretooths and the American cheetah-like cat." Current Biology 15.15 (2005): R589-R590. Bejan, A., J. D. Charles, and S. Lorente. "The evolution of airplanes." Journal of Applied Physics 116.4 (2014): 044901. Gould, Stephen Jay. "Entropic homogeneity isn't why no one hits .400 any more." Discover, August (1986): 60-66.
Forgive us for the long delay; technical issues arose but are now coming under control. This week Tombstone da Deadman returns to chat with MrDragonbeard about his recently released album “Entropic … (continue reading: Apostasy Now Ep8: The Return Of Tombstone da Deadman)
Musiques Electroniques Printemps 2010 01 - ALTERED : CARBON_Sleave (0'00) (A:C / Section 27 / 2010) 02 - MODERAT_Rusty nails (SHACKLETON Remix) (5'30) (Moderat_Deluxe Edition / BPitch Control / 2009) 03 - JON HOPKINS_Insides (14'25) (Insides / Domino Rec / 2009) 04 - FLYING LOTUS_Nose art (18'55) (Cosmogramma / Warp / 2010) 05 - CRYSTAL CASTLES_Baptism (20'40) (2 / Fiction / 2010) 06 - HEALTH_Before tigers (CFCF Remix) (24'35) (Disco 2 / Lovepump United / 2010) 07 - FOUR TET_Angel echoes (28'30) (There is love in you / Domino Rec / 2010) 08 - MARCEL DETTMANN_Drawing (31'00) (Dettmann / Ostgut Ton / 2010) 09 - PETER VAN HOESEN_Terminal (35'50) (Entropic city / Time to express / 2010) 10 - AGORIA_Altre voci (+GLIMPSE_Train to Austria) (40'30) (Balance 016 / EQ Recordings / 2010) 11 - SCUBA_Lights out (46'10) (Triangulation / Hotflush Rec / 2010) 12 - PANTHA DU PRINCE_Es schneit (52'20) (Black Noise / Rough Trade / 2010)
The entropic force exerted by the Brownian fluctuations of a grafted semiflexible polymer upon a rigid smooth wall is calculated both analytically and by Monte Carlo simulations. Such forces are thought to play an important role for several cellular phenomena, in particular, the physics of actin-polymerization-driven cell motility and movement of bacteria like Listeria. In the stiff limit, where the persistence length of the polymer is larger than its contour length, we find that the entropic force shows scaling behavior. We identify the characteristic length scales and the explicit form of the scaling functions. In certain asymptotic regimes, we give simple analytical expressions which describe the full results to a very high numerical accuracy. Depending on the constraints imposed on the transverse fluctuations of the filament, there are characteristic differences in the functional form of the entropic forces. In a two-dimensional geometry, the entropic force exhibits a marked peak.
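To make the abstract's "scaling behavior" concrete, a dimensional-analysis sketch is given below. It assumes standard worm-like-chain notation (contour length L, persistence length ℓp much larger than L, wall at distance z from the grafting point); in the stiff limit the stored length of a weakly bending filament is of order L²/ℓp, which fixes the natural compression variable and force scale. The specific form is an illustrative assumption consistent with the abstract, not a quotation of the paper's results.

```latex
% Dimensional-analysis sketch of the scaling form (assumed notation, not the paper's).
\begin{equation}
  \eta \;=\; \frac{(L - z)\,\ell_p}{L^{2}}, \qquad
  F(z) \;=\; \frac{k_B T\,\ell_p}{L^{2}}\;\tilde{f}(\eta),
\end{equation}
% with distinct scaling functions \tilde{f} for unconfined (3d) and confined (2d)
% transverse-fluctuation geometries, the latter consistent with the marked peak
% reported for the two-dimensional case.
```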