Much has been made of the hallucinatory qualities of OpenAI's ChatGPT product. But as Keach Hagey, the Wall Street Journal's resident authority on OpenAI, notes, perhaps the most hallucinatory feature of the $300 billion start-up co-founded by the deadly duo of Sam Altman and Elon Musk is its attempt to be simultaneously a for-profit and a non-profit company. As Hagey notes, the double life of this double company reached a surreal climax this week when Altman announced that OpenAI was abandoning its promised for-profit conversion. So what, I asked Hagey, are the implications of this corporate volte-face for investors who have poured billions of real dollars into the non-profit in order to make a profit? Will they be Waiting for Godot to get their returns?

As Hagey, whose excellent biography of Altman, The Optimist, is out in a couple of weeks, explains, this might be the story of the hubristic 2020s. She speaks of Altman's astonishing (even for Silicon Valley) hubris in believing that he can get away with the alchemic conceit of inventing a multi-trillion-dollar for-profit non-profit company. Yes, you can be half-pregnant, Sam is promising us. But, as she warns, at some point this will be exposed as fantasy. The consequences might not exactly be another Enron or FTX, but it will have ramifications well beyond Silicon Valley. What will happen, for example, if future investors aren't convinced by Altman's fantasy and OpenAI runs out of cash? Hagey suggests that the OpenAI story may ultimately become a political drama in which a MAGA president will be forced to bail out America's leading AI company. It's TikTok in reverse (imagine if Chinese investors tried to acquire OpenAI). Rather than the conveniently devilish Elon Musk, my sense is that Sam Altman is auditioning to become the real Jay Gatsby of our roaring twenties. Last month, Keach Hagey told me that Altman's superpower is as a salesman. He can sell anything to anyone, she says.
But selling a non-profit to for-profit venture capitalists might be a bridge too far even for Silicon Valley's most hallucinatory optimist.

Five Key Takeaways

* OpenAI has abandoned plans to convert from a nonprofit to a for-profit structure, under pressure from multiple sources, including the attorneys general of California and Delaware, and possibly influenced by Elon Musk's opposition.
* This decision will likely make it more difficult for OpenAI to raise money, as investors typically want control over their investments. Despite this, Sam Altman claims SoftBank will still provide the second $30 billion chunk of funding that was previously contingent on the for-profit conversion.
* The nonprofit structure creates inherent tensions within OpenAI's business model. As Hagey notes, "those contradictions are still there" after nearly destroying the company once before, during Altman's brief firing.
* OpenAI's leadership is trying to position this as a positive change, with plans to capitalize the nonprofit and launch new programs and initiatives. However, Hagey notes this is similar to what Altman did at Y Combinator, which eventually led to tensions there.
* The decision is beneficial for competitors like xAI, Anthropic, and others with normal for-profit structures. Hagey suggests the most optimistic outcome would be OpenAI finding a way to IPO before "completely imploding," though how a nonprofit-controlled entity would do this remains unclear.

Keach Hagey is a reporter at The Wall Street Journal's Media and Marketing Bureau in New York, where she focuses on the intersection of media and technology. Her stories often explore the relationships between tech platforms like Facebook and Google and the media. She was part of the team that broke the Facebook Files, a series that won a George Polk Award for Business Reporting, a Gerald Loeb Award for Beat Reporting and a Deadline Award for public service.
Her investigation into the inner workings of Google's advertising-technology business won recognition from the Society for Advancing Business Editing and Writing (SABEW). Previously, she covered the television industry for the Journal, reporting on large media companies such as 21st Century Fox, Time Warner and Viacom. She led a team that won a SABEW award for coverage of the power struggle inside Viacom. She is the author of "The King of Content: Sumner Redstone's Battle for Viacom, CBS and Everlasting Control of His Media Empire," published by HarperCollins. Before joining the Journal, Keach covered media for Politico, The National in Abu Dhabi, CBS News and the Village Voice. She has a bachelor's and a master's in English literature from Stanford University. She lives in Irvington, N.Y., with her husband, three daughters and dog.

Named one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best-known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Full Transcript

Andrew Keen: Hello, everybody. It is May the 6th, a Tuesday, 2025. And the tech media is dominated today by OpenAI's decision to abandon the conversion of its non-profit into a for-profit. That's how the Financial Times is reporting it. The New York Times says that OpenAI, and I'm quoting them, backtracks on plans to drop nonprofit control, and the Wall Street Journal, always very authoritative on the tech front, leads with "OpenAI Abandons Planned For-Profit Conversion." The Wall Street Journal piece is written by Keach Hagey, who is perhaps America's leading authority on OpenAI.
She was on the show a couple of months ago talking about Sam Altman's superpower, which is as a salesman. Keach is also the author of an upcoming book. It's out in a couple of weeks: "The Optimist: Sam Altman, OpenAI and the Race to Invent the Future." And I'm thrilled that Keach, who has been remarkably busy today, as you can imagine, found a few minutes to come onto the show. So, Keach, what is Sam selling here? You say he's a salesman. He's always selling something or other. What's the sell here?

Keach Hagey: Well, the sell here is that this is not a big deal, right? The sell is that this thing they've been trying to do for about a year, which is to make their company less weird, is not gonna work. And as he was talking to the press yesterday, he was trying to suggest that they're still gonna be able to fundraise, that these folks that they promised, if you give us money, we're gonna convert to a for-profit and it's gonna be a much more normal investment for you, that they're still gonna get that money, which is, you know, a pretty tough thing. So that's what he's selling: that this is not disruptive to the future of OpenAI.

Andrew Keen: For people who are just listening, I'm looking at Keach's face, and I'm sensing that she's doing everything she can not to burst out laughing. Is that fair, Keach?

Keach Hagey: Well, it remains to be seen, but I do think it will make it a lot harder for them to raise money. I mean, even Sam himself said as much during the talk yesterday that, you know, investors would like to be able to have some say over what happens to their money. And if you're controlled by a nonprofit organization, that's really tough. And what they were trying to do was convert to a new world where investors would have a seat at the table, because, as we all remember, when Sam got briefly fired almost two years ago, the investors just helplessly sat on the sidelines and didn't have any say in the matter.
Microsoft had absolutely no role to play other than kind of cajoling and offering him a job on the sidelines. So if you're gonna try to raise money, you really need to be able to promise some kind of control, and that's become a lot harder.

Andrew Keen: And the ramifications of this announcement will extend more broadly to Microsoft and Microsoft stock. I think their stock is down today. We'll come to that in a few minutes. Keach, there was an interesting piece this week on AI hallucinations getting worse. Of course, OpenAI is the dominant AI company with their ChatGPT. But is this also a kind of hallucination? What exactly is going on here? I have to admit, I always thought, you know, I certainly know more about tech than I do about other subjects, which isn't always saying very much. But I mean, either you're a nonprofit or you're a for-profit. Is there some sort of hallucinogenic process going on where Sam is trying to sell us on the idea that OpenAI is simultaneously a for-profit and a nonprofit company?

Keach Hagey: Well, that's kind of what it is right now. That's what it has sort of been since 2019, when it spun up this strange structure where it had a for-profit underneath a nonprofit. And what we saw in the firing is that that doesn't hold. There's gonna come a moment when those two worlds are going to collide, and it nearly destroyed the company. What's going to be challenging going forward is that that basic destabilization, that unstable structure, remains, even though now everything is so much bigger, there's so much more money coursing through, and it's so important for the economy. It's a dangerous position.

Andrew Keen: For something so dangerous, you still seem faintly amused. I have to admit, I'm more than faintly amused. It's not too bothersome for us because we don't have any money in OpenAI.
But for SoftBank and the other participants in the recent $40 billion round of investment in OpenAI, this must be, to say the least, rather disconcerting.

Keach Hagey: That was one of the biggest surprises from the press conference yesterday. Sam Altman was asked point blank, is SoftBank still going to give you this second chunk, this $30 billion second chunk that was contingent upon being able to convert to a for-profit? And he said, quite simply, yes. Who knows what goes on behind the scenes? I think we're gonna find out probably a lot more about that. There are many unanswered questions, but it's not great, right? It's definitely not great for investors.

Andrew Keen: Well, you have to guess, at the very minimum, SoftBank would be demanding better terms. They're not just going to do the same thing. I mean, it suddenly gives them an additional ace in their hand in terms of negotiation. I mean, this is not some sort of little startup. This is 30 or 40 billion dollars. It's an astonishing number. And presumably the non-public conversations are very interesting. I'm sure, Keach, you would like to know what's being said.

Keach Hagey: Don't know yet, but I think your analysis is pretty smart on this matter.

Andrew Keen: So if you had to guess, Sam is the consummate salesman. What did he tell SoftBank before April to close the round? And what is he telling them now? I mean, how has the message changed?

Keach Hagey: We can see a little bit of this from the messaging that he gave to the world yesterday, which is that this is going to be a simpler structure. It is going to be a slightly more normal structure. They are changing the structure a little bit. So although the non-profit is going to remain in charge, the thing underneath it, the for-profit, is going to change its structure a little bit and become kind of a little more normal.
It's not going to have this capped-profit thing where, you know, the investors are capped at 100 times what they put in. So parts of it are gonna become more normal. For employees, it's probably gonna be easier for them to get equity and things like that. So I'm sure that that's part of what he's selling: that this new structure is gonna be a little bit better, but it's not gonna be as good as what they were trying to do.

Andrew Keen: Can Sam pull it off? I mean, clearly he has sold it. As we joked earlier when we talked, Sam could sell ice to the Laplanders or sand to the Saudis. But these people know Sam. It's no secret that he's a remarkable salesman. That means that sometimes you have to think carefully about what he's saying. What's the impact on him? To what extent is this decision one more dent in the Altman brand?

Keach Hagey: It's a setback for sure, and it's kind of a win for Elon Musk, his rival.

Andrew Keen: Right.

Keach Hagey: Elon has been suing him. Elon has been trying to block this very conversion. And in the end, it seems like it was actually the attorneys general of California and Delaware that really put the nail in the coffin here. So there's still a lot to find out about exactly how it all shook out. There were actually huge campaigns as well, like in the streets, billboards, posters, polls, trying to put pressure on the attorneys general to block this thing. So it was a broad coalition, I think, that opposed the conversion, and you can even see that a little bit in their speech. But you've got to admit that Elon probably looked at this and was happy.

Andrew Keen: And I'm sure Elon used his own X platform to promote his own agenda.
Is this an example, Keach, in a weird kind of way, of the plebiscitary politics of Silicon Valley now, that titans like Altman and Musk are fighting out complex corporate economic battles in the naked public of social media?

Keach Hagey: Yes, in the naked public of social media, but what we're also seeing here is that it's being fought through the apparatus of government. So, you know, Elon is in the DOGE office, and this conversion fight is really happening in the state AGs' offices. So that's what's sort of interesting to me: these private fights have now expanded to fill both state and federal government.

Andrew Keen: Last time we talked, I couldn't find the photo, but there was a wonderful photo of, I think it was, Larry Ellison and Sam Altman in the Oval Office with Trump. And Ellison looked very excited. He looked extremely old as well. And Altman looked very awkward. And it's surprising to see Altman look awkward, because generally he doesn't. Has Trump played a role in this, or is he keeping out of it?

Keach Hagey: As far as my current reporting right now, we have no reporting that Trump himself was directly involved. I can't go further than that right now.

Andrew Keen: Meaning that you know something that you're not willing to divulge.

Keach Hagey: I would just say, keep your subscription to the Wall Street Journal to find out what role the White House played. But as far as that awkwardness, I don't know if you noticed that there was a box that day for Masayoshi Son to stand on.

Andrew Keen: Oh yeah, and Son was in the office too, right? That was the third person.

Keach Hagey: So there was a box at the podium, which I think contributed to the awkwardness of the day, because he's not a tall man.

Andrew Keen: Right. To put it politely. The way that OpenAI spun it, in classic Sam Altman terms, is new funding to build towards AGI.
So it's their Altman-esque use of the public good to vindicate this new investment. Is this just more, quote unquote, and this is my word, you don't have to agree with it, sales pitch? Or might there even be dishonesty here? I mean, the reality is, "new funding to build towards AGI," which is artificial general intelligence. It's not new funding to build toward AGI. It's new funding to build towards OpenAI. There's no public benefit to any of this, is there?

Keach Hagey: Well, what they're saying is that the nonprofit will be capitalized and will sort of be hiring up and doing a bunch more things that it wasn't really doing. It'll have programs and initiatives and all of that. Which really, to someone who has studied Sam's life, sounds a lot like what he did at Y Combinator. When he was head of Y Combinator, he also spun up a nonprofit arm, which is actually what OpenAI grew out of. So I think in Sam's mind, a nonprofit is a place to go sort of hash out your ideas, a place to let pet projects grow. That's where he did things like his UBI study. So I can sort of see that once the AGs are like, this is not gonna happen, he's like, great, we'll just make a big nonprofit and I'll get to do all these projects I've always wanted to do.

Andrew Keen: Didn't he get thrown out of Y Combinator by Paul Graham for that?

Keach Hagey: Yes, a little bit. You know, I would say there was a general mutiny over too much of that kind of stuff. Yeah, it's true. People didn't love it, and they thought that he took his eye off the ball. A little bit because one of those projects became OpenAI, and he became kind of obsessed with it and stopped paying attention. So look, maybe OpenAI will spawn the next thing, right? And he'll get distracted by that and move on.

Andrew Keen: No coincidence, of course, that Sam went on to become the CEO of OpenAI. What does it mean for the broader AI ecosystem? I noted earlier you brought up Microsoft.
I mean, I think you've already written on this, and lots of other people have written about the fact that the relationship between OpenAI and Microsoft has cooled dramatically, as well as between Nadella and Altman. What does this mean for Microsoft? Is it a big deal?

Keach Hagey: They have been hashing this out for months. So it is a big deal in that it will change the structure of their most important partner. But even before this, Microsoft and OpenAI were sort of locked in negotiations over how large Microsoft's stake in this new OpenAI will be and how it will be valued. And that still has to be determined, regardless of whether it's a non-profit or a for-profit in charge. And their interests are diverging. So those negotiations are not as warm as they maybe would have been a few years ago.

Andrew Keen: It's a form of polyamory, isn't it? Like we have in Silicon Valley, everyone has sex with everybody else, to put it politely.

Keach Hagey: Well, OpenAI does have a new partner in Oracle. And I would expect them to have many more in terms of cloud computing partners going forward. It's just too much risk for any one company to build these huge and expensive data centers, not knowing whether OpenAI is going to exist in a certain number of years. So they have to diversify.

Andrew Keen: Keach, you know, this is amusing and entertaining, and Altman is a remarkable individual, able to sell anything to anyone. But at what point are we really on the Titanic here? And there is such a thing as an iceberg, a real thing, whatever Donald Trump or other manufacturers of ontologies might suggest. At some point, this thing is going to end in a massive disaster.

Keach Hagey: Are you talking about the existential risk?

Andrew Keen: I'm not talking about the Titanic, I'm talking about OpenAI.
I mean, Parmy Olson, who's the other great authority on OpenAI, who won the FT Business Book of the Year award last year, she's been on the show a couple of times. She wrote in Bloomberg that OpenAI can't have its money both ways, and that's what Sam is trying to do. My point is that we can all point out, excuse me, the contradictions and the hypocrisy and all the rest of it. But there are laws of gravity when it comes to economics. And at a certain point, this thing is going to crash, isn't it? I mean, what's the metaphor? Is it Enron? Is it Sam Bankman-Fried? What kind of examples in history do we need to look at to try and figure out what really is going on here?

Keach Hagey: That's certainly one possibility, and there are a good number of people who believe that.

Andrew Keen: Believe what, Enron or Sam Bankman-Fried?

Keach Hagey: Oh, well, that the internal tensions cannot hold, right? I don't know if fraud is even necessary so much as just, we've seen it, we've already seen it happen once, right? The company almost completely collapsed one time, and those contradictions are still there.

Andrew Keen: And when you say it happened, is that when Sam got pushed out, or was that something else?

Keach Hagey: No, no, that's it, because Sam almost got pushed out, and then all of the funders would go away. So Sam needs to be there for them to continue raising money in the way that they have been raising money. And that's really going to be the question: how long can that go on? He's a young man; it could go on a very long time. But yeah, I think that really will determine whether it's a disaster or not.

Andrew Keen: But how long can it go on? I mean, how long can Sam have it both ways? Well, there's a dream. I mean, maybe he can close this last round. I mean, he's going to need to raise more than $40 billion. This is such a competitive space. Tens of billions of dollars are being invested almost on a monthly basis.
So this is not the end of the road, this $40 billion investment.

Keach Hagey: Oh, no. And you know, there's talk of an IPO at some point, maybe not even that far away. I can't even wrap my mind around what it would be for a nonprofit to have a controlling share of a public company.

Andrew Keen: More hallucinations, economically, Keach.

Keach Hagey: But I mean, an IPO is the exit for investors, right? That's the model, that is the Silicon Valley model. So it's going to have to come to that one way or another.

Andrew Keen: But how does it work internally? I mean, for the guys, the sales guys, the people who are actually doing the business at OpenAI, they've been pretty successful this year. The numbers are astonishing. But how is this gonna impact things if it's a nonprofit? How does this impact the process of selling, of building product, of all the other internal mechanics of this high-priced startup?

Keach Hagey: I don't think it will affect it enormously in the short term. It's really just a question of whether they can continue to raise money for the enormous amount of compute that they need. So far, he's been able to do that, right? And if that slows up in any way, they're going to be in trouble. Because, as Sam has said many times, AI has to be cheap to be actually useful. So in order for it to be widespread, for it to flow like water, all of those things, it's got to be cheap, and that's going to require massive investment in data centers.

Andrew Keen: But how? I mean, ultimately people are putting money in so that they get the money back. This is not a nonprofit endeavor for SoftBank to put $40 billion into. SoftBank is not in the nonprofit business. So they're gonna need their money back, and the only way they generally, in my understanding, get money back is by going public, especially with these numbers. How can a nonprofit go public?

Keach Hagey: It's a great question. That's exactly what I'm chasing.
I mean, this is, you know, you talk to folks, this is what's off in the misty distance for them. It's a fascinating question, and one that we're gonna try to answer this week.

Andrew Keen: But you look amused. I'm no financial genius. Everyone must be asking the same question.

Keach Hagey: Well, the way that they've said it is that the non-profit will control the for-profit and be the largest shareholder in it, but the rest of the shares could theoretically be held by public markets. It's a great question, though.

Andrew Keen: And lawyers all over the world must be rubbing their hands. I mean, in the very best case, there are gonna be lawsuits on this, people suing them up the wazoo.

Keach Hagey: It's absolutely true. You should see my inbox right now. It's just lawyers, lawyers, lawyers.

Andrew Keen: Yeah, my wife, I don't know if I should be saying this publicly, anyway, I am, she's the head of litigation at Google. And she lost some of her senior people, and they all went over to OpenAI. I'm betting that they regret going over there. It can't be much fun being a lawyer at OpenAI.

Keach Hagey: I don't know, I think it'd be great fun. I think you'd have enormous challenges and lots of billable hours.

Andrew Keen: Unless, of course, they're personally being sued.

Keach Hagey: Hopefully not. I mean, look, it is a strange and unprecedented situation.

Andrew Keen: To what extent is this, if not Shakespearean, something that could have been written by some Greek dramatist? To what extent is this symbolic of all the hype and salesmanship and dishonesty of Silicon Valley? And in a sense, maybe this is a final scene, or a penultimate scene, in the Silicon Valley story of doing good for the world, and yet, of course, reaping obscene profit.

Keach Hagey: I think it's a little bit about trying to have your cake and eat it too, right?
Trying to have the aura of altruism, but also make something and make a lot of money. And what it seems like today is that if you started as a nonprofit, it's like a black hole. You can never get out. There's no way to get out, and that idea was maybe one step too clever when they set it up in the beginning, right? It seemed too good to be true because it was. And it might end up really limiting the growth of the company.

Andrew Keen: Is Sam completely in charge here? I mean, a number of the founders have left. Musk, of course; when you and I talked a couple of months ago, you noted OpenAI came out of conversations between Musk and Sam. Is he doing this on his own? Does he have lieutenants, people who he can rely on?

Keach Hagey: Yeah, I mean, he does. He has a number of folks that have been there, you know, a long time.

Andrew Keen: Who are they? I mean, do we know their names?

Keach Hagey: Oh, sure. Yeah. I mean, Brad Lightcap and Jason Kwon, and, of course, Greg Brockman is still there. So there is a core group of executives that have been there pretty much from the beginning, or close to it, that he does trust. But if you're asking, is Sam really in control of this whole thing? I believe the answer is yes. Right. He is on the board of this nonprofit, and that nonprofit will choose the board of the for-profit. So as long as that's the case, he's in charge.

Andrew Keen: How divided is OpenAI? I mean, one of the things that came out of the big crisis, what was it, 18 months ago, when they tried to push him out, was that it was clearly a profoundly divided company, between those who believed in the nonprofit mission versus the for-profit mission. Are those divisions still as acute within the company itself? It must be growing. I don't know how many thousands of people work there.

Keach Hagey: It has grown very fast. It is not as acute, in my experience. There was a time when it was really sort of a warring of tribes.
And after the blip, as they call it, a lot of those more safety-focused people, people that subscribe to effective altruism, left or were kind of pushed out. So Sam took over and kind of cleaned house.

Andrew Keen: But then aren't those people also very concerned that it appears as if Sam's having his cake and eating it, having it both ways, talking about the company being a non-profit but behaving as if it is a for-profit?

Keach Hagey: Oh, yeah, they're very concerned. In fact, a number of them have signed on to this open letter to the attorneys general that dropped, I don't know, a week and a half ago, something like that. You can see a number of former OpenAI employees, whistleblowers and others, saying this very thing, you know, that the AGs should block this because it was supposed to be a charitable mission from the beginning, and no amount of fancy footwork is gonna make it okay to toss that overboard.

Andrew Keen: And I mean, in the best possible case, can Sam, the one thing I think you and I talked about last time is that Sam clearly is not driven by money. There's something else. There's some other demonic force here. Could he theoretically reinvent the company so that it becomes a kind of AI overlord, a nonprofit AI overlord for our 21st-century AI age?

Keach Hagey: Wow, well, I think he sometimes thinks of it as like an AI layer. And, you know, is this my overlord? Might be, you know.

Andrew Keen: As long as it's not made in China, I hope it's made in India or maybe in Detroit or something.

Keach Hagey: It's a very old one, so it's OK. But it's really my attention overlord, right? Yeah, so I don't know about the AI overlord part. Although it's interesting, Sam from the very beginning has wanted there to be a democratic process to control what kind of AI gets built and what the guardrails for AGI are.
As long as he's there.

Andrew Keen: As long as he's the one determining it, right?

Keach Hagey: We talked about it a lot in the very beginning of the company, when things were smaller and not so crazy. And what really strikes me is he doesn't really talk about that much anymore. But what we did just see is some advocacy organizations that kind of function in that exact way. They have voters all over the world, and they all voted on, hey, we want you guys to go and try to block this. So something that had this democratic structure for deciding the future of AI was used to kind of block what he was trying to do.

Andrew Keen: What are the implications for OpenAI's competitors? There's obviously Anthropic. Microsoft, we talked about a little bit, although it's a partner and a competitor simultaneously. And then of course there's Google. I assume this is all good news for the competition. And of course xAI.

Keach Hagey: It is good news, especially for a company like xAI. I was just speaking to an xAI investor today who was crowing. Yeah, because those companies don't have this weird structure. Only OpenAI has this strange nonprofit structure. So if you are an investor who wants to have some exposure to AI, it might just not be worth the headache to deal with the uncertainty around the nonprofit, even though OpenAI is the clear leader. It might be a better bet to invest in Anthropic or xAI or something else that has just a normal for-profit structure.

Andrew Keen: Yeah. And it's hard to actually, quote unquote, out-Trump Elon Musk on economic subterfuge. But Altman seems to have done that. I mean, Musk, when he folded X into xAI, there was a little bit of controversy, but he seems to have gotten away with it. So there is a deep hostility between these two men, which I'm assuming is being compounded by this process.

Keach Hagey: Absolutely. Again, this is a win for Elon. All these legal cases, and Elon trying to buy OpenAI.
I remember that bid a few months ago where he actually put a number on it. All that was about trying to block the for-profit conversion, because he's trying to stop OpenAI in its tracks. He also claims they've abandoned their mission, but it's always important to note that it's coming from a competitor.

Andrew Keen: Could that be a way out of this seeming box, Keach? A company like xAI or Microsoft or Google, though that probably wouldn't happen on the antitrust front, would buy OpenAI as maybe a nonprofit and then transform it into a for-profit company?

Keach Hagey: Maybe you and Sam should get together and hash that out. That's the kind of...

Andrew Keen: Well, Sam, I'm available to be hired if you're watching. I'll probably charge less than your current consigliere. What's his name? Who's the consigliere who's working with him on this?

Keach Hagey: You mean Chris Lehane?

Andrew Keen: Yes, Chris Lehane, the ego.

Keach Hagey: Um...

Andrew Keen: How's Lehane holding up in this? Do you think he's getting any sleep?

Keach Hagey: Well, he's like a policy guy. I'm sure this has been challenging for everybody. But look, you are pointing to something that I think is real, which is that there will probably be consolidation at some point down the line in AI.

Andrew Keen: I mean, I know you're not an expert on the corporate legal stuff, but is it in theory possible to buy a nonprofit? I don't even know how you buy a non-profit and then turn it into a for-profit. I mean, is that one way out of this cul-de-sac?

Keach Hagey: I really don't know the answer to that question, to be honest with you. I can't think of another example of it happening. So I'm gonna go with no, but I don't know.

Andrew Keen: There are no equivalents. Sorry to interrupt, go on.

Keach Hagey: No, so I was actually asking a little bit, are there precedents for this?
And someone mentioned that Blue Cross Blue Shield had gone from being a nonprofit to a for-profit successfully in the past.

Andrew Keen: And you seem a little amused by that. I mean, anyone who uses US health care as a model, I think, might regret it. Your book, The Optimist, is out in a couple of weeks. When did you stop writing it?

Keach Hagey: The end of December, end of last year, was pencils fully down.

Andrew Keen: And I'm sure you told the publisher that that was far too long a window. Seven months in Silicon Valley is like seven centuries.

Keach Hagey: It was actually a very, very tight timeline. They turned it around incredibly fast. Usually it's...

Andrew Keen: Remarkable, yeah, exactly. Publishing, they're such quick actors, aren't they?

Keach Hagey: In this case, they actually were, so I'm grateful for that.

Andrew Keen: Well, they always say that six or seven months is fast, but it is actually possible to publish a book in probably a week or two, if you really choose to. But in all seriousness, back to this question. I want everyone to read the book. It's a wonderful book and an important book, the best book on OpenAI out there. What would you have written differently? Is there an extra chapter on this? I know you warned about a lot of this stuff in the book. So it must make you feel in some ways quite vindicated.

Keach Hagey: I mean, you're asking, if I'd had a longer deadline, what would I have liked to include? Well, if you're ready...

Andrew Keen: Well, if you're writing it now, with this news under your belt.

Keach Hagey: Absolutely. So, two things, I guess. Definitely this news about the for-profit conversion failing just shows the limits of Sam's power. So that's pretty interesting, because as the book was closing, we weren't really sure what those limits were. And the other one is Trump. So Trump had happened, but we did not yet understand what Trump 2.0 really meant at the time the book was closing.
And at that point, it looked like Sam was out in the cold, you know; it wasn't clear how he was going to get inside Trump's inner circle. And then, lo and behold, he was there on day one of the Trump administration, sharing a podium with him and announcing that Stargate AI infrastructure investment. So I'm sad that didn't make it into the book, because it really just shows the kind of remarkable character he is.

Andrew Keen: He's their Zelig, but then we all know what happened to Woody Allen in the end. In all seriousness, and it's hard to keep a straight face here, Keach, and you're trying, although you're not doing a very good job: what's going to happen? I know it's an easy question to ask and a hard one to answer, but ultimately this thing has to end in catastrophe, doesn't it? I use the analogy of the Titanic. There are real icebergs out there.

Keach Hagey: Look, there could be a data breach. I do think that.

Andrew Keen: Well, there could be data breaches whether it was a nonprofit or a for-profit. I mean, in terms of this whole issue of trying to have it both ways.

Keach Hagey: Look, they might run out of money, right? I mean, that's one very real possibility. They might run out of money and have to be bought by someone, as you said. That is a totally real possibility right now.

Andrew Keen: What would happen if they couldn't raise any more money? I mean, what was the last round, the $40 billion round? What was the overall valuation? About $350 billion.

Keach Hagey: Yeah, mm-hmm.

Andrew Keen: So let's say that they begin to... because what are their hard costs, their monthly burn rate? I mean, it's billions.

Keach Hagey: Well, the issue is that they're spending more than they are making.

Andrew Keen: Right. So let's say in 18 months they run out of runway. What would people be buying?

Keach Hagey: Right, maybe some IP, some servers. And one of the big questions that is yet unanswered in AI is: will it ever economically make sense, right?
Right now we are all buying the possibility that in the future the costs will eventually come down and it will be useful, but that's still a promise. And it's possible that that won't ever happen. I mean, all these companies are this way, right? They are spending far, far more than they're making.

Andrew Keen: And that's the best-case scenario.

Keach Hagey: The worst-case scenario is the killer robots murder us all.

Andrew Keen: No, what I meant by the best-case scenario is that people are actually, even without it all blowing up... I mean, people are actually paying for AI. On the one hand, the OpenAI product, would you say it's more or less successful than it was when you finished the book in December of last year?

Keach Hagey: Oh, yes, much more successful. Vastly more users, and the product is vastly better. I mean, even in my experience. I don't know if you play with it every day.

Andrew Keen: I use Anthropic.

Keach Hagey: I use both Claude and ChatGPT, and I mean, they're both great. And I find them vastly more useful today than I did even when I was closing the book. So it's great. I don't know if it's really a great business that they're only charging me $20, right? That's great for me, but I don't think it's tenable long term.

Andrew Keen: Well, Keach Hagey, your new book, The Optimist: Sam Altman, OpenAI and the Race to Invent the Future, is out in a couple of weeks. I hope you're writing a sequel. Maybe you should make it The Pessimist.

Keach Hagey: I think you might be the pessimist, Andrew.

Andrew Keen: Well, you are as pessimistic as me. You just have a nice smile.
I mean, in all reality, what's the most optimistic thing that can come out of this?

Keach Hagey: The most optimistic is that this becomes a product that is actually useful, but doesn't vastly exacerbate inequality.

Andrew Keen: No, I take the point on that. But in terms of this current story of nonprofit versus for-profit, what's the best-case scenario?

Keach Hagey: I guess the best-case scenario is that they find their way to an IPO before completely imploding.

Andrew Keen: With the assumption that a nonprofit can do an IPO.

Keach Hagey: That they find the right lawyers, wherever they are, and make it happen.

Andrew Keen: Well, AI continues its hallucinations, and they're not just in the products themselves; I think they're in the companies. One of the best authorities, if not the best, our guide to all these hallucinations at the corporate level, is Keach Hagey. Her new book, The Optimist: Sam Altman, OpenAI and the Race to Invent the Future, is out in a couple of weeks. Essential reading for anyone who wants to understand Sam Altman as the consummate salesman. And I think one thing we can say for sure, Keach, is that this is not the end of the story. Is that fair?

Keach Hagey: Very fair. Not the end of the story.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
AI Arms Race from ChatGPT to Deepseek - AZ TRT S06 EP08 (269) 4-20-2025

What We Learned This Week
* The AI arms race is real, with the major tech companies involved
* ChatGPT by OpenAI is considered the top chat AI program
* Google has Gemini (formerly Bard), Microsoft has Copilot, Amazon has Claude / Alexa
* Deepseek is a startup from China that has disrupted the AI landscape with a more cost-effective AI model
* Costs and investment dollars in AI are being rethought, as Deepseek spent millions vs. Silicon Valley spending billions

Notes:

Seg 1: Major Tech Giants' AI Programs

Gemini (formerly Bard): Developed by Google, Gemini is known for its multimodal capabilities and integration with Google Search. It can analyze images, understand verbal prompts, and engage in verbal conversations.

ChatGPT: Developed by OpenAI, ChatGPT is known for its versatility and platform-agnostic approach to text generation and learning. It can write code in almost any language, and can also be used to provide research assistance, generate writing prompts, and answer questions.

Microsoft Copilot: Developed by Microsoft, Copilot is known for its integration with applications like Word, Excel, and Power BI. It's particularly well suited for document automation.

Amazon Alexa w/ Claude: Claude is a powerful AI model from Anthropic, known for its strengths in natural language processing and conversational AI, as noted in the video and other sources.

Industry 3.0 (1969-2010): The Third Industrial Revolution, or the Digital Revolution, was marked by the automation of production through the use of computers, information technology, and the internet. This era saw the widespread adoption of digital technologies, including programmable logic controllers and robots.
Industry 4.0 (2010-present): The Fourth Industrial Revolution is characterized by the integration of digital technologies, including the Internet of Things (IoT), artificial intelligence (AI), big data, and cyber-physical systems, into manufacturing and industrial processes. This era is focused on creating "smart factories" and "smart products" that can communicate and interact with each other, leading to increased efficiency, customization, and sustainability.

Top AI programs include a range of software, platforms, and resources for learning and working with artificial intelligence. Some of the most popular AI software tools include Viso Suite, ChatGPT, Jupyter Notebooks, and Google Cloud AI Platform, while popular AI frameworks include TensorFlow and PyTorch. Educational resources like Coursera's AI Professional Certificate and Fast.ai's practical deep learning course also offer valuable learning opportunities.

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is based on large language models (LLMs) such as GPT-4o. ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation toward a desired length, format, style, level of detail, and language.[2] It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI).[3] Some observers have raised concerns about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.[4][5]

OpenAI was founded in December 2015 by Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba, and John Schulman.
The founding team combined their diverse expertise in technology entrepreneurship, machine learning, and software engineering to create an organization focused on advancing artificial intelligence in a way that benefits humanity. Elon Musk is no longer involved in OpenAI, and Sam Altman is the current CEO of the organization. ChatGPT has had a profound influence on the evolution of AI, paving the way for advancements in natural language understanding and generation. It has demonstrated the effectiveness of transformer-based models for language tasks, which has encouraged other AI researchers to adopt and refine this architecture. The model's success has also stimulated interest in LLMs, leading to a wave of research and development in this area.

Seg 2: DeepSeek

DeepSeek is a private Chinese company founded in July 2023 by Liang Wenfeng, a graduate of Zhejiang University, one of China's top universities, who funded the startup via his hedge fund, according to the MIT Technology Review. Liang has about $8 billion in assets, Ives wrote in a Jan. 27 research note. Chinese startup DeepSeek's launch of its latest AI models, which it says are on a par with or better than industry-leading models in the United States at a fraction of the cost, is threatening to upset the technology world order. The company attracted attention in global AI circles after writing in a paper last month that training DeepSeek-V3 required less than $6 million worth of computing power from Nvidia H800 chips. DeepSeek's AI Assistant, powered by DeepSeek-V3, has overtaken rival ChatGPT to become the top-rated free application on Apple's App Store in the United States. This has raised doubts about some U.S. tech companies' decisions to pledge billions of dollars in AI investment, and shares of several big tech players, including Nvidia, have been hit.
NVIDIA Blackwell Ultra Enables AI Reasoning

The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace™ CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform's increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses. GB300 NVL72 is also expected to be available on NVIDIA DGX™ Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD™ with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory. The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute, and 4x larger memory compared with the Hopper generation, delivering breakthrough performance for the most complex workloads, such as AI reasoning.

AZ TRT Shows related to AI: https://brt-show.libsyn.com/size/5/?search=ai+
Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science
AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council (includes Best of AZ Tech Council show from 2/12/2023)
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening. Please Subscribe to the AZ TRT Podcast.

AZ Tech Roundtable 2.0 with Matt Battaglia: the show where entrepreneurs, top executives, founders, and investors come to share insights about the future of business. AZ TRT 2.0 looks at new trends in business and how classic industries are evolving.
Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more… AZ TRT Podcast Home Page: http://aztrtshow.com/ ‘Best Of' AZ TRT Podcast: Click Here Podcast on Google: Click Here Podcast on Spotify: Click Here More Info: https://www.economicknight.com/azpodcast/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/ Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
This week, Sasha Orloff is joined by Kash Ali, Founder and CEO of TaxGPT, who shares how a personal tax issue led to the platform's creation, its rapid growth after Greg Brockman's GPT-4 demo, investor interest from Mark Cuban and Jason Calacanis, its shift from a consumer tool to a B2B AI assistant for accountants, its unique advantages over general AI, and his own journey from Pakistan to Silicon Valley, culminating in Y Combinator and a vision for the future of AI-driven tax assistance.

--

SPONSORS:

Notion
Boost your startup with Notion, the ultimate connected workspace trusted by thousands worldwide! From engineering specs to onboarding and fundraising, Notion keeps your team organized and efficient. For a limited time, get 6 months of Notion AI FREE to supercharge your workflow. Claim your offer now at https://notion.com/startups/puzzle

Puzzle
Hey everyone, Alex here
We start with Grok 3, Elon Musk's latest model, designed to outdo GPT-4 in reasoning and real-time web access. We also dig into how AI systems develop a sense of their own limitations when they have more time to "think," and we discuss the most effective prompt structures according to OpenAI's Greg Brockman. We also cover Meta's controversial use of torrents and the resulting class-action lawsuit, as well as the ethical challenges that arise when two AI agents interact directly. An episode full of advances, reflections, and ethical questions.

Subscribe to the Tertul-IA newsletter and our podcast at https://tertulia.mumbler.io/

00:00 Intro and presentation
02:30 How we use LangChain and LangGraph at AI Hackers
20:15 The Deep Research approach across the different LLMs
43:00 Claude Sonnet 3.7 and the reasoning budget
52:00 Greg Brockman and how to write the best prompts
57:15 Meta downloading books over torrent to train its models
1:01:00 Two intelligent agents talking in their own language

Sources:
Grok 3 (launch): https://x.com/karpathy/status/1891720635363254772 (it also has deep research, which they have called Deep Search)
All the Deep Research offerings:
OpenAI
Gemini: https://blog.google/products/gemini/google-gemini-deep-research/
Grok: https://www.tomsguide.com/ai/i-just-tested-ai-deep-research-on-grok-3-vs-perplexity-vs-gemini-heres-the-winner
Open source: https://github.com/zilliztech/deep-searcher
Perplexity: https://www.perplexity.ai/es-es/hub/blog/introducing-perplexity-deep-research
AI systems develop a sense of their own limitations with more time to "think" vs.
Claude 3.7 Sonnet offers multiple thinking modes: https://the-decoder.com/ai-systems-develop-a-sense-of-their-own-limitations-with-more-time-to-think/, https://www.anthropic.com/news/claude-3-7-sonnet
OpenAI's Greg Brockman gave us the ultimate prompt breakdown: https://www.linkedin.com/posts/growth-hacking-speaker_openais-greg-brockman-gave-us-the-ultimate-activity-7297517449988501504-gwy4/
Meta faces a class-action lawsuit for using torrents: https://elchapuzasinformatico.com/2025/02/meta-demanda-libros-ia/
What if an AI agent makes a phone call, then realizes the other person is also an AI agent? https://www.linkedin.com/posts/luke-harries_what-if-an-ai-agent-makes-a-phone-call-then-activity-7299878652291272704-M0HK, https://devpost.com/software/gibber-link
The first episode is titled What AI Can Do for Us/to Us, and the duality of the title is indicative of where we stand with artificial intelligence in 2024. Gates, who along with the show's producers Morgan Neville and Caitrin Rogers is among the lead executive producers, talks with several people involved in developing AI software, such as OpenAI co-founder Greg Brockman. They also talk with experts who study the technology, such as Dr. Fei-Fei Li of Stanford University. Kevin Roose, a technology reporter at The New York Times, is interviewed about the story he wrote when Bing's AI chatbot told him it wanted to be alive and that he should leave his wife.
Our episode dives into the latest developments in the tech world's most watched legal battle, filed Friday in the U.S. District Court for the Northern District of California. At its heart is Elon Musk's preliminary injunction against OpenAI, its leadership, and Microsoft, revealing a stark contrast between the company's announced $1 billion in funding and the actual $130 million received, with Musk's personal $44 million contribution now at the center of controversy. The story unfolds through remarkable email exchanges, including Sam Altman's 2015 message expressing concerns about AI development and suggesting an alternative to Google's dominance. We explore Musk's visceral reaction to the Microsoft partnership, captured in his words: "This actually made me feel nauseous. It sucks and is exactly what I would expect from them." The tension escalates with the founding team's confrontation of Musk about control issues, documented in their statement: "You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you." The cast of characters in this unfolding drama includes Elon Musk as the plaintiff, Sam Altman as OpenAI's CEO, Greg Brockman serving as president, Reid Hoffman's role as former board member, Dee Templeton's position as Microsoft VP and former board observer, and Shivon Zilis's perspective as a former OpenAI advisor. Their interactions span from OpenAI's nonprofit founding in 2015 through the Microsoft partnership proposal in 2016, internal conflicts in 2017, Musk's departure in 2018, and the introduction of the "capped-profit" structure in 2019, leading to the current legal action in 2024. The financial landscape reveals Microsoft's substantial $13 billion investment for a 49% stake, while OpenAI's annual spending exceeds $5 billion, recently supplemented by a $6.6 billion fundraising round. 
The legal action seeks to prevent OpenAI from discouraging investors from backing competitors, halt asset transfers to for-profit entities, and stop the sharing of proprietary information with Microsoft. Our analysis draws from U.S. District Court filings, original email correspondence, OpenAI's corporate documents, and Microsoft partnership agreements. This episode sets up our next discussion, where we'll examine the technical implications of the OpenAI-Microsoft partnership and its global impact on AI development. These materials provide crucial context for understanding how corporate governance shapes the future of AI development and industry competition.
Everyone told Vicente Silveira that his startup, a GPT wrapper, would fail. Instead, one year later, it's thriving, with about 500,000 registered users, nearly 3,000 paying subscribers, and over 2 million conversations in the GPT store. Vicente is the cofounder and CEO of AI PDF, a tool that can help you summarize, chat with, and organize your PDF files. When OpenAI allowed users to upload PDFs to ChatGPT, the consensus was that his startup, and all the other GPT wrappers out there, were toast. Some of his competitors even shut shop, but Vicente believed they could still create value for users as a specialized tool. The AI PDF team kept building. A year later, AI PDF is one of the most popular AI-powered PDF readers in the world, and they did it all with a five-person team and a friends-and-family round. I sat down with Vicente to understand, in granular detail, the success of AI PDF.

We get into:
* Why staying small and specialized is a bigger advantage than you think
* The power of building with your early adopters
* Why lean startups are better positioned than frontier AI companies to create radical solutions
* When a growing startup should think about raising venture capital
* The emerging role of 'AI managers' who will be responsible for overseeing AI agents

We even demo an agent integrated into AI PDF, prompting it to analyze recent articles from my column Chain of Thought and write a bulleted list of the core thesis statements. This is a must-watch for small teams building profitable companies at the bleeding edge of AI. If you found this episode interesting, please like, subscribe, comment, and share! Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Timestamps:
(00:00:35) Introduction
(00:02:58) AI PDF's story begins with an email to OpenAI's Greg Brockman
(00:05:41) Why users choose AI PDF over ChatGPT
(00:06:58) How to compete, and thrive, as a GPT wrapper
(00:20:49) Why building with early adopters is key
(00:27:53) Being small and specialized is your biggest advantage
(00:31:47) When should AI startups raise capital
(00:34:53) The emerging role of humans who will manage AI agents
(00:45:25) Why AI is different from other tech revolutions
(00:54:01) A live demo of an agent integrated into AI PDF
AI agents are not as independent as headlines suggest. Join hosts Mike Kaput and Paul Roetzer as they examine why giants like OpenAI and Google are seeing diminishing returns in their AI development, demystify the current state of AI agents, and unpack fascinating insights from Anthropic CEO Dario Amodei's recent conversation with Lex Fridman about the future of responsible AI development and the challenges ahead. Today's episode is brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Wednesday, November 20. Visit www.aiforagencies.com and use the code AIFORWARD200 for $200 off your ticket. 00:04:34 — Has AI Hit a Wall? 00:14:31 — What Is An AI Agent? 00:38:56 — Dario Amodei Interview 00:49:27 — OpenAI Nears Launch of AI Agent Tool 00:51:58 — OpenAI Co-Founder Returns to Startup After Monthslong Leave 00:53:41 — Research: How Gen AI Is Already Impacting the Labor Market 00:58:42 — Google's Latest Gemini Model Now Tops the AI Leaderboard 01:02:53 — Microsoft Copilot Is Struggling 01:09:03 — Microsoft 200+ AI Transformation Stories 01:11:11 — xAI Is Raising Up to $6 Billion at $50 Billion Valuation 01:13:15 — Writer Raises $200M Series C at $1.9B Valuation 01:15:24 — How Spotify Views AI-Generated Music Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? 
Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
Apple is reportedly working on a new device: a display to control smart home products, among other things. It could come to market as early as next year, Joe van Burik reports in this Tech Update. According to Bloomberg, Apple wants to offer a central screen for the home: think of a sort of square iPad for controlling your smart devices, making video calls and, of course, using AI with various apps. Apple reportedly wants to unveil this new product as early as March 2025; internally it is still known under the codename J490. We already know this kind of product from other makers: Google has the Nest line and the Pixel Tablet, for example, and Amazon also offers Echo devices with screens, equipped with the Alexa smart assistant. Apple wants to enter this market and will likely try to differentiate itself on design and functionality. Also in this Tech Update: China accuses the US of endangering Taiwan's position because of restrictions on chip shipments from TSMC; and Greg Brockman, the right-hand man of CEO Sam Altman, returns to OpenAI three months after his departure. See omnystudio.com/listener for privacy information.
Guest: Sal Khan, founder of Khan Academy

AI is poised to change nearly every business, but few are changing as quickly as education. And Sal Khan, who has spent more than a decade manually creating more than 7,000 educational videos, says that's a good thing. He's encouraged Khan Academy to focus on "disrupt[ing] ourselves ... more than almost any other organization that I know of." The reason is backed up by the data: personalized tutors, designed to help students achieve mastery in a subject but previously thought to be unscalable, could shift the educational bell curve "significantly to the right," Sal says.

Chapters:
(00:52) - John and Ann Doerr
(05:20) - Khan Academy's origins
(07:42) - What it is now
(12:43) - Emotional fortitude
(15:25) - Generating revenue
(19:36) - The two-sigma "problem"
(21:31) - OpenAI and Sam Altman
(24:47) - What AI can do
(27:56) - Cheating and other fears
(30:06) - Video production
(34:08) - Standardized tests
(38:36) - AI tutors' tone
(40:22) - Not leaving the closet
(43:20) - Who Khan Academy is hiring
(45:58) - What "grit" means to Sal

Mentioned in this episode: Nasdaq, Dan Wohl, Vedic and Buddhist literature, Microsoft, Benjamin Bloom, ChatGPT, the Turing Test, Greg Brockman, Donald Trump, Bing Chat and Sydney, Khanmigo, the SAT and ACT, Schoolhouse.world, Craig Silverstein and Google, John Resig and jQuery, and Angela Duckworth.

Links:
Connect with Sal: Twitter, LinkedIn
Connect with Joubin: Twitter, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm
We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:

* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed: LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.

* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content.
This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.

* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big debate for early-stage companies today is whether to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value, becoming more like "Services as Software". Dust, on the other hand, is a platform for users to build their own experiences, which has had a few advantages:

* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.

* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.

* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:

* Harder Go-to-Market: As Stan put it: "We spike at penetration... but it makes our go-to-market much harder. Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"

* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately, from structured Salesforce data to unstructured Notion pages.
As you scale integrations, the cost of maintaining them also scales.

* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.

This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters

* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast.
This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college at both École Polytechnique and Stanford, and then you worked in a number of places: Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll spend a little bit of time on that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, from back in the day, like we were talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started like anybody else's: you're fascinated with computers and you want to make them think. It's awesome, but it doesn't work. I mean, it was a long time ago, I was like maybe 16, so it was 25 years ago.
Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was hand-crafted features for vision and the A* algorithm. So it was fun. But it was the early days of deep learning. At the time, I think a few years after, there was that first project at Google, you know, the cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to get a job. Went to Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again. It felt like it was the time: you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects. It felt like the Atari games were incredible, but they were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things: self-driving cars, cybersecurity and AI, and math and AI. I'm citing them in decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you were doing this at Stripe? You were like thinking about your next move?

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up a day and because of a bug I wrote, I killed a family, it would be a bad experience.
And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to fuzzing. With fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all, because transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that. The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before.

Stan [00:04:49]: I was searching before. I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris, and obviously you had worked with Greg, but not anyone else.

Stan [00:05:13]: No. Yeah.
So I had worked with Greg, but not Ilya. But I had started chatting with Ilya, and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher: didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, and he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome, but I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took maybe a bit more back and forth, and they eventually decided to try a contractor setup. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: So it was solely focused on math and AI, and in particular the study of large language models' mathematical reasoning capabilities, in particular in the context of formal mathematics. The motivation was simple: transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the verification capabilities of the formal system.
A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system so evolved that you can verify the program. If the type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. So the truth is that what you code involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. The verification of the proof at the very low level is instantaneous.

Swyx [00:07:32]: How quickly do you run into, you know, halting problem, P vs. NP type things, like impossibilities where you're just stuck?

Stan [00:07:39]: I mean, you don't run into it. At the time, it was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that MATH benchmark includes AMC problems, AMC 8, AMC 10, AMC 12, so these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, which are crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally at the grade level of high school: grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again: there's a bit of work with Lean, and then more recently with DeepMind scoring silver on the IMO. Any commentary on how math and AI have evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind-blowing. I mean, from my perspective, I spent three years on that.
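(Editor's note: Stan's point above, that in a formal system "a proof is a program" and checking the proof is just type-checking, is the Curry–Howard correspondence. A minimal Lean 4 sketch, purely illustrative, shows the idea: the term below is simultaneously a program and a proof, and if it type-checks, the theorem holds with no execution needed.)

```lean
-- A proof is a program: this function *is* a proof that conjunction commutes.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
-- If this definition type-checks, the proof is correct; nothing is "run".
```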
At the same time, Guillaume Lample in Paris (we were both in Paris, actually; he was at FAIR) was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago where he goes a little bit more into the details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data that you can generate through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI time. So you joined, and you're like, I'm going to work on math and do all of these things. I saw in one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2 to getting closer to text-davinci-003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective on it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. This was pre-Anthropic split. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithmic part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them.
But in that space, there's a management tool that is great, which is compute allocation. Basically, by managing the compute allocation, you can message to the teams where you think the priority should go. And so it was really a question of: you were free as a researcher to work on whatever you wanted, but if it was not aligned with OpenAI's mission (and that's fair), you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI, and so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than going into something much riskier. You have to show incremental progress, I guess. It's like you ask for a certain amount of compute, and you deliver a few weeks after, and you demonstrate that you've made progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get, and much more interesting, than a positive result. And then it generally goes into, as in any organization, you would have people finding your project or any other project cool and fancy. And so you would have that kind of phase of growing compute allocation for it, all the way to a point. And then maybe you reach an apex, and then maybe you go back mostly to zero and restart the process, because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly.
It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him, or like, what was the structure? It's almost like when you're doing such cutting-edge research, you need to report to somebody who is actually smart enough to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of a direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? That was his job, and I think he really enjoyed it and did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally; what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute.
I remember when I started working on the reasoning team, the excitement was really about scaling the compute around reasoning, and that was really the belief we wanted to ingrain in the team. And that has been useful to the team, and the DeepMind results, and the success of GPT-4 and so on, show that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman, he really impressed me, because when I joined, he had joined not that long ago, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, when I was having lunch with him at OpenAI by year two, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what the goal is, what's being done, and what the recent results are, and have all of that in you. And then we could have kind of a very productive discussion.
And that really impressed me, given the size of OpenAI at the time, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and be in the know of what's happening on the ground, is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because my first company, we went all the way to 10 people. Current company, there are 25 of us. So the high level, the sky, and the ground are pretty much at the same place.

Swyx [00:16:21]: No, you're being too humble. I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: At Stripe, I wasn't a founder. So I was, like at OpenAI, really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time; you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been training GPT-3; it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology. I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement?
I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org, and the mission was so clear that some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, kind of AI engineer tool rather than going back into research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there.
So that's kind of the core personal motivation at the beginning. And the motivation for starting a company was pretty simple: I had seen GPT-4 internally. At the time, it was September 2022, so it was pre-ChatGPT, but GPT-4 had been ready internally for a few months. I was like, okay, it's obvious the capabilities are there to create an insane amount of value for the world, and yet the deployment is not there yet. The revenue of OpenAI at the time was ridiculously small compared to what it is today. So the thesis was: there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and summarize things. And the browser was really the interface. Why did you start with the browser? Why was it important? And then you built XP1, which was kind of like the browser extension.

Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and, to some extent, very early adopters, very early engineers. It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one in marketing, I don't remember its name... Jasper. But so the natural first intention, the first, first, first intention, was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different from LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were closed source. They were open source.

Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter.
I had the strong belief from my research time that you cannot create an LLM-based workflow based on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have a messy stream of tokens going out, and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is...

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source...

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason why is that we're not open source as a strategy. It's not an open source go-to-market at all. We're open source because we can, and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is that people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay, yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are creating with it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with a security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request. Exactly: oh, PR welcome.
That doesn't happen that much, but you can show the progress, and if the person you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing it all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. At the same time, it's a good thing, because if you're doing anything like bug bounties or stuff like that, you just give much more tools to the bounty hunters so that their output is much better. So there are many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market, and the product, and all of those things that are around the code base. Obviously, that's not true for every code base. If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on davinci-003. Given that you had seen GPT-4, how did you feel having to push out a product that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one, a small version released post-GPT-3. So it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT, basically. I don't remember its name. Yes, you have a frustration there.
But at the same time, I think XP1 was an experiment, but it was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of a frustration because you know what's out there and you know that you don't have access to it yet. But it's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our Anatomy of Autonomy post in April of last year, which was, you know, where are all the agents, right? So now we've spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.

Stan [00:25:20]: So with Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from the people having access to creating an agent, people who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there's a dual focus, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play.
Maintaining connections to Notion, Slack, GitHub, all of them, is a lot of work. It is boring infrastructure work, but that's something that we know is extremely valuable, in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating, because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization. We haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission: to create the product that lets people equip themselves to hand off all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is Auto-GPT: it's just kind of trying to do anything, it's all magic, there's no way for you to do anything. Then you had Adept (you know, we had David on the podcast), who are very hands-on with each individual customer to build something super tailored. How do you decide where to draw the line between "this is magic" and "this is exposed to you", especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.

Stan [00:27:29]: So the Auto-GPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple: it's simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions.
It's like take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible APIs between the two.Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and create the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch the surface of are actually external actions, like reimbursing something on Stripe, sending an email, clicking on a button in the admin or something like that.Swyx [00:29:11]: Do you maintain all these integrations?Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to custom integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce. Because Salesforce is basically a database and a UI. And they do the f**k they want with it. And so every company has different models and stuff like that.
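Stan's description of programming a workflow in English plus a set of tools can be sketched roughly like this. All names here (`AgentConfig`, `semantic_search`, the stubbed `run` functions) are hypothetical illustrations, not Dust's actual API:

```typescript
// Hypothetical sketch of an English-programmed agent (not Dust's real API).
type Tool = {
  name: string;
  description: string; // what the model reads when deciding which tool to call
  run: (args: Record<string, string>) => Promise<string>;
};

type AgentConfig = {
  instructions: string; // the plain-English "workflow"
  tools: Tool[];
};

const weeklyDigestAgent: AgentConfig = {
  instructions:
    'When I say "digest", search the knowledge base for last week\'s notes, ' +
    "then search the web for related news, and return a Markdown summary.",
  tools: [
    {
      name: "semantic_search",
      description: "Search internal documents by meaning",
      run: async ({ query }) => `[top chunks for: ${query}]`, // stubbed
    },
    {
      name: "web_search",
      description: "Search the public web",
      run: async ({ query }) => `[web results for: ${query}]`, // stubbed
    },
  ],
};

// The model only ever sees the instructions plus the tool names/descriptions.
const toolNames = weeklyDigestAgent.tools.map((t) => t.name);
```

The point of the shape: the orchestration lives in the prose instructions, not in an explicit workflow graph, and the model picks from the declared tools at each step.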
So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuthing into it, as is the case with Slack, for example. Because it's probably going to be, oh, you want to connect your Salesforce to us? Give us the SOQL, the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So that's interesting: integrations are cool, but some of them require a bit of work from the user. And for some that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify, are you using browser automation because there's no API for something?Stan [00:30:24]: No, no, no, no. In that case, we do have browser automation for all the use cases that involve the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hate my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world is moving forward, that's disappearing. So the core RPA value in the past has really been, oh, this old 90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies in our ICP, the scale-ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use has APIs.
Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration, from us and from OpenAI and Anthropic (I don't even know if they have web navigation, but I don't think so), is really, really broken, because you have what? You have basically search and headless browsing. But with headless browsing, I think everybody's doing basically body.innerText and feeding that into the model, right?Swyx [00:31:56]: There are parsers into Markdown and stuff.Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selectors, which are basically the places to click in the page, through that process expose the actions to the model, have the model select an action in a representation that is compatible with the model, which is not a big page of full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. That's something that is really exciting and that will kind of change the level of things that agents can do on the web. That, I feel, is exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.Stan: Exactly, exactly. And we've seen it since this summer. Adept is where it is, and Dust is where it is. So Dust is still standing.Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just that you don't want to put the complexity in there?
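The page-rendering idea Stan sketches above, compressing a noisy DOM into a small set of actions while keeping a map back to the real selectors so a chosen action can be "decompressed" into a click, could look roughly like this. A toy sketch with hypothetical types, not any vendor's implementation:

```typescript
// A simplified stand-in for a DOM node; real code would walk the actual DOM.
type PageNode = {
  tag: string;
  text?: string;
  selector: string; // e.g. a CSS selector into the real page
  children?: PageNode[];
};

type Action = { id: string; label: string };

// Flatten the page into a compact, low-noise list of actionable elements,
// keeping a side table that maps each short id back to its real selector.
function extractActions(root: PageNode): {
  actions: Action[];
  selectors: Map<string, string>;
} {
  const actions: Action[] = [];
  const selectors = new Map<string, string>();
  const walk = (n: PageNode) => {
    if (n.tag === "a" || n.tag === "button") {
      const id = `act${actions.length}`;
      actions.push({ id, label: n.text ?? "" });
      selectors.set(id, n.selector);
    }
    n.children?.forEach(walk);
  };
  walk(root);
  return { actions, selectors };
}

const samplePage: PageNode = {
  tag: "body",
  selector: "body",
  children: [
    { tag: "p", text: "Welcome", selector: "body > p" },
    { tag: "button", text: "Sign in", selector: "#signin" },
    { tag: "a", text: "Docs", selector: "nav a.docs" },
  ],
};

const { actions, selectors } = extractActions(samplePage);
// The model sees only `actions`; when it picks "act0", we look up
// selectors.get("act0") to click the real element on the page.
```

The model never sees the full noisy DOM, only the compact action list, which is exactly the compression-then-decompression loop described above.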
Like, is there any room for improvement left in function calling? Or do you feel you consistently get the right response, the right parameters, and all of that?Stan [00:33:13]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of deduced from the state of the conversation. I'll just go with it. If you provide a very high-level, kind of auto-GPT-esque level of instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress to be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simply scripted agents. What I'm excited about in pushing our users to create rather simple agents is that once you have those working really well, you can create meta-agents that use the agents as actions. And all of a sudden, you can have a hierarchy of responsibility that will probably get you almost to the point of the auto-GPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and ships are shared in Slack too. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table.
And then we have in that weekly meeting, obviously, some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there, and that's an objective for us using Dust ourselves, you're saving an hour of company time every time you run it. Yeah.Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What have you discovered as best practice for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta-agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random B2B SaaS company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents.
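The meta-agent idea described above, simple agents exposed as actions of a higher-level agent that assembles the weekly-meeting page, can be sketched like this. Names are hypothetical, and the simple agents return canned strings where the real ones would query Slack and call a model:

```typescript
// A simple agent is just an async function from input to output.
type Agent = (input: string) => Promise<string>;

// Two narrow, well-scoped agents, like the ones described in the transcript.
const incidentTableAgent: Agent = async (week) =>
  `| Incident | Status |\n| api outage (${week}) | resolved |`; // stubbed Slack lookup

const arrChartAgent: Agent = async (week) => `[ARR chart for ${week}]`; // stubbed

// The meta-agent uses the simple agents as its actions and assembles the
// weekly-meeting page from their results.
const weeklyMeetingAgent: Agent = async (week) => {
  const [incidents, arr] = await Promise.all([
    incidentTableAgent(week),
    arrChartAgent(week),
  ]);
  return `# Weekly meeting (${week})\n\n${incidents}\n\n${arr}`;
};
```

The hierarchy-of-responsibility point falls out of the types: because an agent and a tool have the same shape (input in, output out), any working simple agent can be plugged into a meta-agent unchanged.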
If you tell them, build AutoGPT, they'll be like, Auto what?Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instructions instead of system prompts, right? That's very conscious.Stan [00:36:41]: Yeah, it's very conscious. It's the mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep in AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well. We started a company together that got acquired by Stripe 15 years ago. After that, he was at Alan, a healthcare company in Paris. So he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that doesn't feel too scary and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.Alessio [00:37:34]: And another big point that David had about Adept is that we need to build an environment for the agents to act in. And then if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.Stan [00:37:52]: So I think that goes back to the DNA of the companies, which is very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs.
We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question, when you're interacting in the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can at least get feedback and some directional information about the performance of the assistants. But if you take actual traces of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. We have to build a product that incentivizes the end users to provide feedback, so that as a first step, the person that is building the agent can iterate on it. As a second step, maybe later when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.Alessio [00:39:17]: Do you see products in the future offering kind of a simulation environment, the same way all SaaS now kind of offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulation environments so that you can then use agents to red-team, but I haven't really seen that.Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion.
It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training; that would be crazy. But at least having an internal post-training realignment loop makes a lot of sense. And so if we see many companies going towards that all the time, then there might be incentives for the SaaSes of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaS companies don't want to be interacted with by agents; they want the human to click on the button. Yeah, they've got to sell seats. Exactly.Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?Stan [00:40:53]: We've seen over the past two years kind of a bit of a race between models. And at times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?Stan [00:41:20]: We have a sane default. We move the default to the latest model that is cool, and it's actually not very visible. In our flow to create an agent, you would have to go into advanced settings and go pick your model. So this is something that the technical person will care about.
But that's something that obviously is a bit too complicated for the...Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?Stan [00:41:44]: I think we care most about function calling, because there's nothing worse than a function call that includes incorrect parameters or is a bit off, because it just drives the whole interaction off.Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably one of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. It's that they have that kind of chain-of-thought step whenever you use a Claude model or Sonnet model with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions. But when you use function calling, you get that step, and it really helps getting better function calling.Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.Stan [00:42:49]: Yeah.Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.Stan [00:42:53]: Turbo is on top. Turbo is over 4o.Swyx [00:42:54]: And then the third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.Stan [00:43:01]: Yep.Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.Stan [00:43:05]: But arguably, o1-mini hasn't been aligned for that. Yeah.Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals?
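Since a single tool call with wrong parameters can derail the whole interaction, one cheap guard is validating the model's arguments against the tool's declared parameters before executing anything, and re-prompting on failure. A minimal sketch with hypothetical shapes, not any provider's actual function calling API:

```typescript
type ParamSpec = { name: string; required: boolean };

type ToolSpec = { name: string; params: ParamSpec[] };

// What the model emits when it wants to call a tool.
type ToolCall = { name: string; args: Record<string, unknown> };

// Returns a list of problems; an empty list means the call is safe to execute.
function validateCall(spec: ToolSpec, call: ToolCall): string[] {
  const errors: string[] = [];
  if (call.name !== spec.name) errors.push(`unknown tool: ${call.name}`);
  for (const p of spec.params) {
    if (p.required && !(p.name in call.args)) {
      errors.push(`missing required parameter: ${p.name}`);
    }
  }
  for (const k of Object.keys(call.args)) {
    if (!spec.params.some((p) => p.name === k)) {
      errors.push(`unexpected parameter: ${k}`);
    }
  }
  return errors; // non-empty => re-prompt the model instead of executing
}

const searchSpec: ToolSpec = {
  name: "semantic_search",
  params: [
    { name: "query", required: true },
    { name: "limit", required: false },
  ],
};

// A slightly-off call of the kind Stan describes: right tool, wrong arg name.
const badCall: ToolCall = { name: "semantic_search", args: { q: "incidents" } };
const callErrors = validateCall(searchSpec, badCall);
```

Rejecting the call with a concrete error message gives the model a chance to correct itself, rather than letting one bad parameter drive the interaction off course.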
I mean, this is kind of intuitive, right? Like using the newer model is better. I think most people just upgrade. Yeah. What's the eval process like?Stan [00:43:19]: It's funny, because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users across the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, than getting the best model. Because there are so many places where you can create products or do stuff that will give you 80% of the value with the work you do, whereas deciding if it's GPT-4 or GPT-4 Turbo, et cetera, you know, will just give you the 5% improvement. The reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually, because we still want to be serious people.Swyx [00:44:24]: It's funny, because in some ways, the model labs are competing for you, right? You don't have to make any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?Stan [00:44:36]: It's not models, right?Swyx [00:44:37]: You're not really limited by quality of model.Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to easily connect to all the data they need to do the job they want to do.Swyx [00:44:51]: Because you maintain all your own stuff. You know, there are companies out there that are starting to provide integrations as a service, right?
I used to work in an integrations company.Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has...Swyx [00:45:12]: I used to work at Airbyte.Stan [00:45:13]: Oh, really? That makes sense.Swyx [00:45:14]: They're French founders as well.Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But that's in a way that is not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.Swyx [00:45:35]: It's also for data scientists and not for AI.Stan [00:45:38]: The reality of Notion is that when you have a page, there's a lot of structure in it, and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.Swyx [00:46:15]: That's why I don't invest in them; there's Composio, there's All Hands from Graham Neubig. There's all these other companies that are like, we will do the integrations for you. We have the open source community. We'll do it off the shelf.
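The tabular-versus-text distinction Stan describes for Notion databases could be approximated with a heuristic like the following. This is a toy sketch under stated assumptions (a made-up average-cell-length threshold), not Dust's actual connector logic:

```typescript
// Minimal stand-in for a synced Notion database: a title plus rows of cells.
type NotionDatabase = { title: string; rows: string[][] };

// Heuristic: short cells suggest quantitative/tabular data; long average
// cells suggest prose that should be chunked like text for retrieval.
function classifyDatabase(db: NotionDatabase): "tabular" | "text" {
  const cells = db.rows.flat();
  if (cells.length === 0) return "tabular";
  const avgLen = cells.reduce((sum, c) => sum + c.length, 0) / cells.length;
  return avgLen > 80 ? "text" : "tabular"; // 80 is an illustrative threshold
}

const metricsDb: NotionDatabase = {
  title: "ARR by month",
  rows: [["Jan", "1.2M"], ["Feb", "1.3M"]],
};

const notesDb: NotionDatabase = {
  title: "Meeting notes",
  rows: [["2024-08-05", "Long discussion about the roadmap, ".repeat(5)]],
};
```

Downstream, a "tabular" database would be exposed for structured querying, while a "text" one would be chunked and embedded for semantic search, which is the distinction the transcript argues a generic ELT pipeline does not make.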
But then you are so specific in your needs that you want to own it.Stan [00:46:28]: Yeah, exactly.Swyx [00:46:29]: You can talk to Michel about that. You know, he wants to put the AI in there.Stan [00:46:30]: Yeah, I will. I will.Swyx [00:46:35]: Cool. What are we missing?Alessio [00:46:36]: What are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?Stan [00:46:43]: The hard part, as we kind of touched on throughout the conversation, is really building the infra that works for those agents, because it's tedious work. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that is not just accelerating people, but giving them superhuman capability, even with the current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, and somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update it, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models.
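The documentation-repair idea, watching the firehose and pinging a page owner when a new message contradicts the page, might be sketched like this. All names are hypothetical, and the `contradicts` check is a stub standing in for what would really be a model call:

```typescript
// A document with a declared owner who wants to be pinged on contradictions.
type OwnedPage = { id: string; owner: string; content: string };

// Stub for an LLM contradiction check; the real version would ask a model
// whether the message and the page disagree.
function contradicts(message: string, page: OwnedPage): boolean {
  return message.includes("deprecated") && page.content.includes("recommended");
}

// Run every new firehose message against every owned page and collect pings.
function repairPings(messages: string[], pages: OwnedPage[]): string[] {
  const pings: string[] = [];
  for (const page of pages) {
    for (const msg of messages) {
      if (contradicts(msg, page)) {
        pings.push(`@${page.owner}: "${msg}" may contradict page ${page.id}`);
      }
    }
  }
  return pings;
}

const ownedPages: OwnedPage[] = [
  { id: "setup-guide", owner: "stan", content: "Using Docker is recommended." },
];
const pings = repairPings(["Docker setup is deprecated, use Nix"], ownedPages);
```

The interesting engineering is in the parts this sketch elides: having the firehose at all, and making the contradiction check cheap enough to run on every message.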
And that's a problem that's extremely hard and extremely exciting.Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving that in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed before proceeding to the next task. I used to work on an orchestrator as well, Temporal.Stan [00:48:31]: We use Temporal.Swyx [00:48:34]: Oh, you use Temporal? How was the experience? I need the NPS.Stan [00:48:36]: We're doing a customer discovery call now.Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?Swyx [00:48:49]: It's always versioning.Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that, in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal, or any other competitive product, they're very general. There's an interesting theory about buy versus build.
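Stripped of the Temporal specifics, the pattern Stan describes is: ingest updates from each connected platform in semi-real time, and trigger any agents subscribed to the affected document. An in-memory toy version of that dispatch loop, where the real thing would be durable Temporal workflows and activities with retries:

```typescript
// One update flowing in from a connected platform.
type Update = { source: "slack" | "notion" | "github"; docId: string; body: string };

// A callback that kicks off an agent run; durable workflow in production.
type Trigger = (update: Update) => void;

class SyncDispatcher {
  private subscriptions = new Map<string, Trigger[]>();
  public log: string[] = [];

  // Register an agent to be triggered whenever docId changes.
  subscribe(docId: string, trigger: Trigger) {
    const list = this.subscriptions.get(docId) ?? [];
    list.push(trigger);
    this.subscriptions.set(docId, list);
  }

  // Ingest one update and fan it out to subscribed triggers.
  ingest(update: Update) {
    this.log.push(`${update.source}:${update.docId}`);
    for (const trigger of this.subscriptions.get(update.docId) ?? []) {
      trigger(update);
    }
  }
}

const dispatcher = new SyncDispatcher();
const alerts: string[] = [];
dispatcher.subscribe("runbook", (u) => alerts.push(`check ${u.docId}: ${u.body}`));

dispatcher.ingest({ source: "slack", docId: "runbook", body: "deploy steps changed" });
dispatcher.ingest({ source: "github", docId: "readme", body: "typo fix" });
```

What an orchestrator like Temporal adds over this toy is exactly what an in-memory loop cannot give you: durability across crashes, retries, and versioned long-running workflows.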
I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy. Because if you have the capability, you're just going to be saving time and you can focus on your core competency, et cetera. And it's funny, because we're starting to see the post-high-growth company, the post-scale-up company, going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?Alessio [00:49:56]: Yeah, I did a podcast with them.Stan [00:49:58]: Oh, yeah?Alessio [00:49:58]: It's true.Swyx [00:49:59]: No, no, I know. Of course they say it's true, but also, how well is it going to go?Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller, because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-scale-up company and you have tech capabilities, you are incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scrap the SaaS away. So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost. And if you're a high-growth company, you're always going to be buying, because you go faster with that. But that's an interesting new space, a new category of companies that might remove some SaaS.Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.Alessio [00:51:05]: Service as a software, we call it.
It's basically like, well, the most extreme version is, why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent or whatnot.Stan [00:51:17]: Yeah, yeah, that's interesting.Swyx [00:51:19]: I have to ask: are you paying for Temporal Cloud or are you self-hosting?Stan [00:51:22]: Oh, no, no, we're paying, we're paying.Swyx [00:51:24]: Oh, okay, interesting.Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us...Swyx [00:51:28]: That's why as a shareholder, I like to hear that.Stan [00:51:31]: It makes us go faster, so we're happy to pay.Swyx [00:51:33]: Other things in the infra stack, I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?Stan [00:51:41]: I mean, there's always an interesting question. We've been building a lot around the interface to models, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.Swyx [00:51:56]: That's what I call a gateway.Stan [00:51:57]: We had that because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.Stan [00:52:09]: Exactly, yeah. There's an interesting question there.Swyx [00:52:12]: Ops, Datadog, just tracking.Stan [00:52:14]: Oh yeah, Datadog is an obvious one... What are the mistakes that I regret? I started with pure JavaScript, not TypeScript, and if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript: no, don't, just start with TypeScript.
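The gateway idea discussed above, one interface over several model providers so agents can switch models freely, can be sketched like this. The provider names are real companies, but the clients here are stubs, not the vendors' actual SDKs:

```typescript
type ChatRequest = { model: string; prompt: string };

// One interface every provider client must implement.
interface ModelProvider {
  supports(model: string): boolean;
  complete(req: ChatRequest): Promise<string>;
}

const openAIProvider: ModelProvider = {
  supports: (m) => m.startsWith("gpt-"),
  complete: async (r) => `[openai:${r.model}] ${r.prompt}`, // stubbed client
};

const anthropicProvider: ModelProvider = {
  supports: (m) => m.startsWith("claude-"),
  complete: async (r) => `[anthropic:${r.model}] ${r.prompt}`, // stubbed client
};

// The gateway routes each request to the first provider claiming the model,
// so agent code never depends on a specific vendor SDK.
class Gateway {
  private providers: ModelProvider[];
  constructor(providers: ModelProvider[]) {
    this.providers = providers;
  }
  complete(req: ChatRequest): Promise<string> {
    const p = this.providers.find((x) => x.supports(req.model));
    if (!p) throw new Error(`no provider for model: ${req.model}`);
    return p.complete(req);
  }
}

const gateway = new Gateway([openAIProvider, anthropicProvider]);
```

This is the same shape the transcript attributes to Dust's orchestration layer and to open source projects like LiteLLM: agents speak one request type, and swapping the default model is a configuration change, not a code change.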
I see, okay.Swyx [00:52:30]: So interesting, you are a research engineer who came out of OpenAI and bet on TypeScript.Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next.js, we're using Next.js as an example. It's
This week, we take you behind the scenes at OpenAI once again. They have just raised over 6 billion at a valuation of 157 billion dollars, but apparently money doesn't buy happiness... Greg Brockman, the co-founder, is about to return after several months of sabbatical, and he's coming back to total chaos. Sam Altman has taken control of the tech, and OpenAI has changed radically: since then, departures have come one after another, internal power struggles are multiplying, and the competition is waking up. So is OpenAI on the brink? That's the question we're asking this week... We also talk about the American elections, which are fast approaching, and the battle for victory is increasingly taking place online. Elon Musk faces off against Jessica Alter in this digital battle. Musk backs Trump by running the money printer, while Alter mobilizes voters for Harris with well-honed digital strategies. Which of them will win? We break it all down on Silicon Carne this week!
Wes, Eneasz, and David keep the rationalist community informed about what's going on outside of the rationalist communitySupport us on Substack!News discussed:There was a Biden vs Lettuce! The Lettuce won.CrowdStrike Falcon is endpoint monitoring software. (doesn't just protect from malicious code, also tracks assets). Reminder: Any time you have auto-update enabled for anything, you have installed a free backdoorThe UBI experiment actually showed a lot of major upsides! Increased entrepreneurship, people held out longer for better jobs (which contributed to the lower employment number!), young people got more edu and single parents did more child raising, people were able to leave abusive relationships. Looking at naive averages isn't that useful. (also AskWhoCastsAI is one of my fav podcasts)The World Central Kitchen fuckup has been investigated. “An Australian review into the deaths said the Israel Defense Forces (IDF) decided to launch missiles at the convoy after mistakenly believing it was being hijacked by Hamas”Leaked Zoom recording of white house staff & DNC discussing how to best censor reports of Biden's mental decline. Most interesting part is how polite the threats are. Stuff like ‘you say you're reducing disinformation, but it looks like this disinformation about Biden's decline is still up on facebook. I'm not sure that's what reduction looks like'. Feels like Reality Is Becoming Impossible To SatirizeUS recognizes challenger as winner as riots continue in Venezuela Hamas political leader Ismail Haniyeh killed by detonating a bomb planted in advance in his bedroom at the Iranian government official residenceIran vows major counterstrikes on the 12thHamas names a top architect of the Oct 7 terror rampage as new leader Hezbollah top commander Fouad Shukur (behind recent rocket attack in Isreal) killed by airstrike in Lebanontop Hamas military leader Mohammed Deif killed in GazaDays of riots in Britain. 
Backlash against immigration and a perceived rise in crime. The best way to fix that is always beating up innocent minorities and destroying local businesses. Walz! Google lost its antitrust suit. Twitter files antitrust lawsuit against GARM. Sounds like a BS lawsuit, but GARM does legitimately threaten corporations that advertise on media outlets it wants to destroy with standard “shame if your store burned down” mob tactics. Much like patent trolls, what they do is legal but should get them jail/confiscation/blinding. Russian prisoner exchange. The tween & teen kids of a spy couple discover they're Russian on the flight over to Russia. MegaQuake advisory in Japan. In Bangladesh a government job quota system favoring the in-group sparked weeks of deadly riots (after over a decade of ratcheting authoritarianism and economic issues). Finally the prime minister fled the country after protesters stormed her residence, the president dissolved parliament, and the military is forming an interim govt. BAGUETTE NEWS! Frenchman's giant baguette is his undoing at the pole vault. (Is offered a $250k porn deal??) Joins a cappella group. More top people leave OpenAI: cofounder John Schulman leaves to join Anthropic, president Greg Brockman takes an extended leave of absence, and a VP resigns. The accused 9/11 mastermind and two others at Gitmo got a plea deal; the next day it was rescinded. Happy News! “Bridge editing” papers published in Nature promise more precise gene editing than CRISPR, with fewer errors and disruptions. Nanofiber molecules cause human cartilage to begin the regeneration process, which humans can't do in adulthood. Could lead to actual joint repair/regrowth. So far only in cell samples, not in humans. Texas Heart Institute implants the first Total Artificial Heart. Made of titanium, it doesn't require anti-rejection medications. It uses a rotary blood pump with a single moving part that utilizes a magnetically levitated rotor to greatly reduce friction and wear. 
It increases blood flow with demand, up to 12 L/minute, allowing patients to exercise! It was only a placeholder in this surgery, but is eventually meant for long-term use. 3,500 people are on the heart transplant list at the moment. Australia begins campaign to eradicate peanut allergy. Troop Deployment: Eneasz - Kamala Harris is The Mask That Smiles. Wes - Don't be a Baby About the Election. Got something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show! Follow us! RSS: http://feeds.feedburner.com/themindkiller Google: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekm Pocket Casts: https://pca.st/vvcmifu6 Stitcher: https://www.stitcher.com/podcast/the-mind-killer Apple: Intro/outro music: On Sale by Golden Duck Orchestra This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe
We have had our minds on strawberry fields this week with all the latest happenings at OpenAI. Join our hosts as they discuss OpenAI's secret project “Strawberry” and OpenAI's leadership changes, including Greg Brockman's sabbatical and John Schulman's move to Anthropic. They'll explore JobsGPT, Paul Roetzer's tool for understanding AI's impact on jobs. Plus, they'll examine OpenAI's GPT-4o System Card, detailing risk management for their new voice model. In our rapid-fire section, we'll touch on the latest legal disputes with OpenAI, Figure's 02 Robot, ChatGPT watermarking, new AI image generator Flux, and more. 00:03:48 — OpenAI Departures + OpenAI's Secret Project “Strawberry” Mystery Grows 00:23:20 — SmarterX.ai JobsGPT 00:43:02 — GPT-4o System Card Evaluates Risks/Dangers 00:56:43 — Groq's Huge Funding Round 00:59:08 — Figure Teases Figure 02 Robot 01:02:44 — Musk Brings Back OpenAI Lawsuit 01:05:48 — YouTuber Files Class Action Suit Over AI Scraping + Nvidia Gets Caught 01:09:00 — ChatGPT Watermarking 01:11:40 — New AI Image Generator Flux.1 01:13:51 — Godmother of AI on California's AI Bill SB-1047 This week's episode is brought to you by MAICON, our 5th annual Marketing AI Conference, happening in Cleveland, Sept. 10 - 12. The code POD200 saves $200 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai. Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? 
Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
The AI Breakdown: Daily Artificial Intelligence News and Discussions
OpenAI faces a series of high-profile executive departures, raising questions about the company's future direction and stability. This episode explores the recent exits of key figures like Greg Brockman, John Schulman, and Peter Deng, examining the implications for OpenAI and the broader AI landscape. With ongoing controversies, including legal challenges from Elon Musk and Microsoft's positioning as a competitor, what does this mean for OpenAI's strategic path? Concerned about being spied on? Tired of censored responses? AI Daily Brief listeners receive a 20% discount on Venice Pro. Visit https://venice.ai/nlw and enter the discount code NLWDAILYBRIEF. Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'podcast' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Major Shakeups at OpenAI & Google Declared a Monopoly. In this episode of Hashtag Trending, host Jim Love covers the major leadership shakeups at OpenAI, where key figures including President Greg Brockman, VP Peter Deng, and co-founder John Schulman announced their departures. We delve into the financial and competitive pressures facing OpenAI and their implications. Additionally, a federal judge has declared Google a monopoly, potentially reshaping the tech industry landscape. Lastly, we explore the ongoing feud between Delta and Microsoft regarding recent IT outages. All this and more in today's episode! 00:00 Introduction and Headlines 00:19 Leadership Shakeup at OpenAI 04:30 Google Declared a Monopoly 07:28 Microsoft vs. Delta: The IT Clash 09:44 Conclusion and Sign-off
Send Everyday AI and Jordan a text message. Win a free year of ChatGPT or other prizes! Find out how. What the heck is happening at OpenAI? In a somewhat shocking development, an OpenAI co-founder has left OpenAI for rival Anthropic. And President Greg Brockman is taking an 'extended leave of absence.' What's it all mean? Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on OpenAI. Related Episodes: Ep 318: GPT-4o Mini: What you need to know and what no one's talking about. Ep 149: Sam Altman leaving and the future of OpenAI – 7 things you need to know. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Major Changes at OpenAI 2. Legal Trouble for OpenAI 3. OpenAI's Technology and Impact 4. Future of OpenAI. Timestamps: 02:00 Daily AI news 06:15 Multiple high-level departures at OpenAI, significant impact. 12:47 GPT technology widely used by large companies. 16:08 Employees threatened to leave if demands not met. 18:22 Key OpenAI figures change, raising concerns. 21:05 Economic chaos and political instability in 72 hours. 25:22 Apple rebranding AI as 'Apple Intelligence.' 
GPT technology used. 27:16 Microsoft's early commitment to AI pays off. 30:32 NVIDIA is least reliant on OpenAI. 35:08 AI advancements raise immense safety concerns and risks. 40:16 Ilya Sutskever left OpenAI to start SSI. 41:16 OpenAI's new model amidst reporting and rumors. 44:20 OpenAI's incredible capabilities are beyond imagination. Keywords: OpenAI, Jordan Wilson, Everyday AI, OpenAI drama, co-founder departure, OpenAI president, extended leave, AI news, Figure humanoid AI robot, NVIDIA, copyright violations, Elon Musk, Sam Altman, lawsuit, Peter Deng, John Schulman, Greg Brockman, OpenAI leadership changes, Andrej Karpathy, Ilya Sutskever, Microsoft, artificial intelligence, AGI, Jan Leike, Anthropic, GPT-5, GPT NEXT, Apple Intelligence, US economy, global economic turmoil. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: John Schulman leaves OpenAI for Anthropic, published by Sodium on August 6, 2024 on LessWrong. Schulman writes: I shared the following note with my OpenAI colleagues today: I've made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work. I've decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and do research alongside people deeply engaged with the topics I'm most interested in. To be clear, I'm not leaving due to lack of support for alignment research at OpenAI. On the contrary, company leaders have been very committed to investing in this area. My decision is a personal one, based on how I want to focus my efforts in the next phase of my career. (statement continues on X, Altman responds here) TechCrunch notes that only three of the eleven original founders of OpenAI remain at the company. Additionally, The Information reports: Greg Brockman, OpenAI's president and one of 11 cofounders of the artificial intelligence firm, is taking an extended leave of absence. (I figured that there should be at least one post about this on LW where people can add information as more comes in, saw that no one has made one yet, and wrote this one up) Update 1: Greg Brockman posts on X: I'm taking a sabbatical through end of year. First time to relax since co-founding OpenAI 9 years ago. The mission is far from complete; we still have a safe AGI to build. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Krithika Muthukumar is a marketing veteran. She is currently the VP of Marketing at OpenAI where she was the first marketing hire. Before that, she was Head of Marketing at Retool. Her longest tenure was at Stripe where she was hired as the first marketer and scaled with the company over nine years, from a 60-person team to 7500+. She began her career in Product Marketing at Google and Dropbox. – In today's episode, we discuss: Marketing lessons from OpenAI, Stripe, and Retool The 3 pillars of Stripe's approach to brand How to manage resource allocation as a marketer Adapting marketing strategy to different business models Advice for early marketing hires – Referenced: Coca-Cola AI-generated wish card campaign: https://theprint.in/ani-press-releases/coca-cola-ignites-diwali-celebrations-with-unique-personalized-ai-generated-wish-cards/1840093/ Cristina Cordova: https://www.linkedin.com/in/cristinajcordova/ Gong: https://www.gong.io/ Greg Brockman: https://www.linkedin.com/in/thegdb/ Kenzo Fong: https://www.linkedin.com/in/kenzofong/ Retool: https://retool.com/ Stripe's “Capture the Flag” campaign: https://techcrunch.com/2012/08/22/stripes-capture-the-flag-2-0-a-hands-on-contest-for-app-developers-to-test-their-security-know-how/ Stripe Press: https://press.stripe.com/ Stripe Sigma: https://stripe.com/us/sigma Tanya Khakbaz: https://www.linkedin.com/in/tanya-khakbaz-a725732/ – Where to find Krithika Muthukumar: LinkedIn: https://www.linkedin.com/in/krithix/ Twitter/X: https://x.com/krithix – Where to find Brett Berson: LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Twitter/X: https://twitter.com/brettberson – Timestamps: (00:00) Intro (02:43) Getting involved in Stripe (05:37) Evaluating success in product marketing (06:35) The 3 pillars of Stripe's approach to brand (12:10) Managing resource allocation as Stripe grew (17:22) How Stripe scaled taste (21:30) Were Stripe reviews micromanaging? 
(24:16) Marketing under founders with strong marketing skills (26:44) Advice for early marketing hires (31:52) Marketing at Retool vs Stripe (33:59) Marketing to mid-market vs SMB vs enterprise (37:02) Marketing programs that had an outsized impact (39:59) Marketing horizontal vs vertical products (43:20) Lessons from OpenAI (52:22) Inside OpenAI's recent website relaunch (55:57) How OpenAI's marketers use OpenAI tooling (59:53) When to start hiring marketers (61:34) How to screen early marketing hires (66:39) The biggest influences on Krithika's career (67:52) Outro
Our 168th episode with a summary and discussion of last week's big AI news! With guest host Gavin Purcell from the AI for Humans podcast! Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai Timestamps + Links: (00:00:00) Intro / Banter + Response to listener comments / corrections Tools & Apps (00:08:00) OpenAI says Sky voice in ChatGPT will be paused after concerns it sounds too much like Scarlett Johansson (00:16:14) Microsoft's Copilot assistant is getting a GPT-4o upgrade + Recall is Microsoft's key to unlocking the future of PCs (00:21:36) ElevenLabs Launches AI-Voiced Screen Reader App (00:22:40) Adobe Lightroom gets a magic eraser, and it's impressive (00:25:07) Microsoft, Khan Academy provide free AI assistant for all educators in US (00:27:40) Microsoft Paint is getting an AI-powered image generator that responds to your text prompts and doodles Applications & Business (00:29:16) OpenAI founders Sam Altman and Greg Brockman go on the defensive after top safety researchers quit (00:36:58) OpenAI, WSJ Owner News Corp Strike Content Deal Valued at Over $250 Million (00:41:27) CoreWeave Raises $7.5 Billion in Debt for AI Computing Push (00:44:13) Google announced Trillium, its sixth generation of Tensor processors. 
(00:45:09) Inflection AI reveals new team and plan to embed emotional AI in business bots (00:47:01) Data-labeling startup Scale AI raises $1B as valuation doubles to $13.8B Projects & Open Source (00:48:35) Abacus AI Releases Smaug-Llama-3-70B-Instruct: The New Benchmark in Open-Source Conversational AI Rivaling GPT-4 Turbo (00:52:24) Introducing New Chatbot Arena Category: Hard Prompts (00:54:56) Microsoft brings out a small language model that can look at pictures Research & Advancements (00:56:05) New Anthropic Research Sheds Light on AI's 'Black Box' (01:04:03) Chameleon: Mixed-Modal Early-Fusion Foundation Models (01:08:14) SpeechVerse: A Large-scale Generalizable Audio Language Model (01:09:05) CAT3D: Create Anything in 3D with Multi-View Diffusion Models (01:11:17) Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning (01:12:10) SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models Policy & Safety (01:15:01) World's first major law for artificial intelligence gets final EU green light (01:17:18) Colorado governor signs sweeping AI regulation bill (01:22:10) Senators Propose $32 Billion in Annual A.I. Spending but Defer Regulation (01:23:25) Google DeepMind launches new framework to assess the dangers of AI models (01:25:05) Tech giants pledge AI safety commitments — including a ‘kill switch' if they can't mitigate risks Synthetic Media & Art (01:28:32) Sony Music warns tech companies over ‘unauthorized' use of its content to train AI (01:32:34) Hollywood agency CAA aims to help stars manage their own AI likenesses (01:38:28) What Do You Do When A.I. Takes Your Voice? (01:42:01) Outro + AI Song
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Exodus, published by Zvi on May 20, 2024 on LessWrong. Previously: OpenAI: Facts From a Weekend, OpenAI: The Battle of the Board, OpenAI: Leaks Confirm the Story, OpenAI: Altman Returns, OpenAI: The Board Expands. Ilya Sutskever and Jan Leike have left OpenAI. This is almost exactly six months after Altman's temporary firing and The Battle of the Board, the day after the release of GPT-4o, and soon after a number of other recent safety-related OpenAI departures. Many others working on safety have also left recently. This is part of a longstanding pattern at OpenAI. Jan Leike later offered an explanation for his decision on Twitter. Leike asserts that OpenAI has lost the mission on safety and culturally been increasingly hostile to it. He says the superalignment team was starved for resources, with its public explicit compute commitments dishonored, and that safety has been neglected on a widespread basis, not only superalignment but also including addressing the safety needs of the GPT-5 generation of models. Altman acknowledged there was much work to do on the safety front. Altman and Brockman then offered a longer response that seemed to say exactly nothing new. Then we learned that OpenAI has systematically misled and then threatened its departing employees, forcing them to sign draconian lifetime non-disparagement agreements, which they are forbidden to reveal due to their NDA. Altman has to some extent acknowledged this and promised to fix it once the allegations became well known, but so far there has been no fix implemented beyond an offer to contact him privately for relief. These events all seem highly related. Also these events seem quite bad. What is going on? This post walks through recent events and informed reactions to them. 
The first ten sections address departures from OpenAI, especially Sutskever and Leike. The next five sections address the NDAs and non-disparagement agreements. Then at the end I offer my perspective, highlight another, and look to paths forward. Table of Contents 1. The Two Departure Announcements 2. Who Else Has Left Recently? 3. Who Else Has Left Overall? 4. Early Reactions to the Departures 5. The Obvious Explanation: Altman 6. Jan Leike Speaks 7. Reactions After Leike's Statement 8. Greg Brockman and Sam Altman Respond to Leike 9. Reactions from Some Folks Unworried About Highly Capable AI 10. Don't Worry, Be Happy? 11. The Non-Disparagement and NDA Clauses 12. Legality in Practice 13. Implications and Reference Classes 14. Altman Responds on Non-Disparagement Clauses 15. So, About That Response 16. How Bad Is All This? 17. Those Who Are Against These Efforts to Prevent AI From Killing Everyone 18. What Will Happen Now? 19. What Else Might Happen or Needs to Happen Now? The Two Departure Announcements Here are the full announcements and top-level internal statements made on Twitter around the departures of Ilya Sutskever and Jan Leike. Ilya Sutskever: After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of Jakub Pachocki. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next - a project that is very personally meaningful to me about which I will share details in due time. [Ilya then shared the photo below] Jakub Pachocki: Ilya introduced me to the world of deep learning research, and has been a mentor to me, and a great collaborator for many years. 
His incredible vision for what deep learning could become was foundational to what OpenAI, and the field of AI, is today. I...
This is a recap of the top 10 posts on Hacker News on March 1st, 2024. This podcast was generated by wondercraft.ai. (00:33): Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]. Original post: https://news.ycombinator.com/item?id=39559966&utm_source=wondercraft_ai (02:23): Apple reverses course on death of Progressive Web Apps in EU. Original post: https://news.ycombinator.com/item?id=39563618&utm_source=wondercraft_ai (04:09): Fugitive Wirecard COO Jan Marsalek exposed as decade-long GRU spy. Original post: https://news.ycombinator.com/item?id=39561021&utm_source=wondercraft_ai (05:46): JPEG XL and the Pareto Front. Original post: https://news.ycombinator.com/item?id=39559281&utm_source=wondercraft_ai (07:23): CACM Is Now Open Access. Original post: https://news.ycombinator.com/item?id=39559411&utm_source=wondercraft_ai (09:05): Study: 61 UK firms tried a 4-day workweek and after a year, they still love it. Original post: https://news.ycombinator.com/item?id=39562760&utm_source=wondercraft_ai (11:11): Where I'm at on the whole CSS-Tricks thing. Original post: https://news.ycombinator.com/item?id=39560705&utm_source=wondercraft_ai (13:06): The 'Atlanta Magnet Man' is saving our car tires, one bike ride at a time. Original post: https://news.ycombinator.com/item?id=39561356&utm_source=wondercraft_ai (14:50): Company forgets why they exist after 11-week migration to Kubernetes (2020). Original post: https://news.ycombinator.com/item?id=39560033&utm_source=wondercraft_ai (16:49): California Approves Waymo Expansion to Los Angeles and SF Peninsula [pdf]. Original post: https://news.ycombinator.com/item?id=39567597&utm_source=wondercraft_ai This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
As large social media sites and platforms make deals to provide training data to AI operators will users be negatively impacted? Plus Elon Musk is suing OpenAI, Sam Altman and Greg Brockman saying they have violated the company's founding agreement. And Spotify is adding 15 hours of free audiobooks listening to its subscription plan.Starring Tom Merritt, Sarah Lane, Jason Howell, Len Peralta, Roger Chang, Joe.Link to the Show Notes.
Logan Kilpatrick leads developer relations at OpenAI, supporting developers building with the OpenAI API and ChatGPT. He is also on the board of directors at NumFOCUS, the nonprofit organization that supports open source projects like Jupyter, Pandas, NumPy, and more. Before OpenAI, Logan was a machine-learning engineer at Apple and advised NASA on open source policy. In our conversation, we discuss:• OpenAI's fast-paced and innovative work environment• The value of high agency and high urgency in your employees• Tips for writing better ChatGPT prompts• How the GPT Store is doing• OpenAI's planning process and decision-making criteria• Where OpenAI is heading in the next few years• Insight into OpenAI's B2B offerings• Why Logan “measures in hundreds”—Brought to you by:• Hex—Helping teams ask and answer data questions by working together• Whimsical—The iterative product workspace• Arcade Software—Create effortlessly beautiful demos in minutes—Find the transcript for this episode and all past episodes at: https://www.lennyspodcast.com/episodes/. Today's transcript will be live by 8 a.m. 
PT.—Where to find Logan Kilpatrick:• X: https://twitter.com/OfficialLoganK• LinkedIn: https://www.linkedin.com/in/logankilpatrick/• Website: https://logank.ai/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Logan's background(03:49) The impact of recent events on OpenAI's team and culture(08:20) Exciting developments in AI interfaces(09:52) Using OpenAI tools to make companies more efficient(13:04) Examples of using AI effectively(18:35) Prompt engineering(22:12) How to write better prompts(26:05) The launch of GPTs and the OpenAI Store(32:10) The importance of high agency and urgency(34:35) OpenAI's ability to move fast and ship high-quality products(35:56) OpenAI's planning process and decision-making criteria(40:22) The importance of real-time communication(42:33) OpenAI's team and growth(44:47) Future developments at OpenAI(47:42) GPT-5 and building toward the future(50:38) OpenAI's enterprise offering and the value of sharing custom applications(52:30) New updates and features from OpenAI(55:09) How to leverage OpenAI's technology in products(58:26) Encouragement for building with AI(59:30) Lightning round—Referenced:• OpenAI: https://openai.com/• Sam Altman on X: https://twitter.com/sama• Greg Brockman on X: https://twitter.com/gdb• tldraw: https://www.tldraw.com/• Harvey: https://www.harvey.ai/• Boost Your Productivity with Generative AI: https://hbr.org/2023/06/boost-your-productivity-with-generative-ai• Research: quantifying GitHub Copilot's impact on developer productivity and happiness: https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/• Lesson learnt from the DPD AI Chatbot swearing blunder: https://www.linkedin.com/pulse/lesson-learnt-from-dpd-ai-chatbot-swearing-blunder-kitty-sz57e/• Dennis Yang on LinkedIn: https://www.linkedin.com/in/dennisyang/• 
Tim Ferriss's blog: https://tim.blog/• Tyler Cowen on X: https://twitter.com/tylercowen• Tom Cruise on X: https://twitter.com/TomCruise• Canva: https://www.canva.com/• Zapier: https://zapier.com/• Siqi Chen on X: https://twitter.com/blader• Runway: https://runway.com/• Universal Primer: https://chat.openai.com/g/g-GbLbctpPz-universal-primer• “I didn't expect ChatGPT to get so good” | Unconfuse Me with Bill Gates: https://www.youtube.com/watch?v=8-Ymdc6EdKw• Microsoft Azure: https://azure.microsoft.com/• Lennybot: https://www.lennybot.com/• Visual Electric: https://visualelectric.com/• DALL-E: https://openai.com/research/dall-e• The One World Schoolhouse: https://www.amazon.com/One-World-Schoolhouse-Education-Reimagined/dp/1455508373/ref=sr_1_1• Why We Sleep: Unlocking the Power of Sleep and Dreams: https://www.amazon.com/Why-We-Sleep-Unlocking-Dreams/dp/1501144324• Gran Turismo: https://www.netflix.com/title/81672085• Gran Turismo video game: https://www.playstation.com/en-us/gran-turismo/• Manta sleep mask: https://mantasleep.com/products/manta-sleep-mask• WAOAW sleep mask: https://www.amazon.com/WAOAW-Sleep-Sleeping-Blocking-Blindfold/dp/B09712FSLY—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Semiconductors power much of modern technology -- and thus, our lives and our politics. Pranay Kotasthane and Abhiram Manchi join Amit Varma in episode 358 of The Seen and the Unseen to shed light on how so much geopolitics today centres around chips -- and why it's such a big deal. (FOR FULL LINKED SHOW NOTES, GO TO SEENUNSEEN.IN.) Also check out 1. Pranay Kotasthane on Twitter, LinkedIn, Amazon and the Takshashila Institution. 2. Abhiram Manchi on Twitter, LinkedIn, Instagram and the Takshashila Institution. 3. When the Chips Are Down: A Deep Dive into a Global Crisis -- Pranay Kotasthane & Abhiram Manchi. 4. Puliyabaazi — Pranay Kotasthane's podcast (co-hosted with Saurabh Chandra). 5. Missing In Action: Why You Should Care About Public Policy — Pranay Kotasthane and Raghu S Jaitley. 6. The Long Road From Neeyat to Neeti -- Episode 313 of The Seen and the Unseen (w Pranay Kotasthane & Raghu S Jaitley). 7. Anticipating the Unintended — Pranay Kotasthane and Raghu Sanjaylal Jaitley's newsletter. 8. Siliconpolitik -- The tech newsletter started by Pranay Kotasthane. 9. Pranay Kotasthane Talks Public Policy — Episode 233 of The Seen and the Unseen. 10. Foreign Policy is a Big Deal — Episode 170 of The Seen and the Unseen (w Pranay Kotasthane & Manoj Kewalramani). 11. Older episodes of The Seen and the Unseen w Pranay Kotasthane: 1, 2, 3, 4, 5, 6, 7, 8. 12. Ilya Sutskever on the dinner invite from Elon Musk, Sam Altman and Greg Brockman. 13. The BJP Before Modi — Episode 202 of The Seen and the Unseen (w Vinay Sitapati, with the quote about perfection being the enemy of production). 14. Luke Burgis Sees the Deer at His Window — Episode 337 of The Seen and the Unseen. 15. Chip War -- Chris Miller. 16. The New World Upon Us (2017) -- Amit Varma. 17. The Incredible Insights of Timur Kuran -- Episode 349 of The Seen and the Unseen. 18. The Beauty of Finance -- Episode 21 of Everything is Everything. 19. The Tamilian gentleman who took on the world -- Amit Varma. 20. 
Demystifying GDP — Episode 130 of The Seen and the Unseen (w Rajeswari Sengupta). 21. I, Pencil — Leonard Read. 22. The Three Globalizations -- Episode 17 of Everything is Everything. 23. The Great Redistribution (2015) -- Amit Varma. 24. A trade deficit with a babysitter (2005) -- Tim Harford. 25. Nuclear Power Can Save the World — Joshua S Goldstein, Staffan A Qvist and Steven Pinker. 26. Paper Tigers, Hidden Dragons -- Douglas B Fuller. 27. Why Talent Comes in Clusters -- Episode 8 of Everything is Everything. 28. Jawaan -- Atlee. 29. Terry Pratchett on Amazon. 30. Robert Sapolsky and Joseph Henrich on Amazon. This episode is sponsored by the Pune Public Policy Festival 2024, which takes place on January 19 & 20, 2024. The theme this year is Trade-offs! Amit Varma and Ajay Shah have launched a new video podcast. Check out Everything is Everything on YouTube. Check out Amit's online course, The Art of Clear Writing. And subscribe to The India Uncut Newsletter. It's free! Episode art: ‘Fighting for Chips' by Simahina.
Paris Marx is joined by Mike Isaac to discuss the drama around Sam Altman being temporarily removed from OpenAI, what it means for the future of the company, and how Microsoft benefits from its partnership with the company.Mike Isaac is a technology reporter at the New York Times. He's also the author of Super Pumped: The Battle for Uber.Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.Also mentioned in this episode:Mike summarized the OpenAI-Sam Altman affair with his colleagues in the New York Times. He's been reporting on it since it began.Paris wrote about the Sam Altman-Microsoft relationship in Disconnect.Semafor reported that in 2018, Elon Musk tried to take over OpenAI but was pushed out instead.Forbes reporter Sarah Emerson went through Emmett Shear's old tweets — and yikes.Support the show
Keeping up with AI can seem like a tsunami at times. How can we make use of all the new tools and technologies that are always coming out? What strategies can we create to put AI to use? Usha Jagannathan, a Responsible AI Leader and Ex-McKinsey, joins us to discuss how to use AI for recruitment, retention, and growth. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Usha Jagannathan questions about AI and growth. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Timestamps: [00:01:25] Daily AI news [00:05:10] About Usha Jagannathan [00:10:08] Using AI in recruitment [00:12:50] Upskilling with AI [00:18:33] AI has an easy learning curve [00:23:22] Example of an AI solution [00:27:33] Identifying skills with AI [00:31:08] Usha's final takeaway. Topics Covered in This Episode: 1. Strategies for recruitment in AI 2. Upskilling and reskilling with AI 3. Navigating the ever-evolving AI landscape 4. 
Identifying skills in candidates or employees using AI. Keywords: Usha Jagannathan, software engineering, corporate work, Marsh McLennan, McKinsey, fairness, transparency, accountability, AI applications, smaller companies, AI recruitment strategies, current trends in recruitment, job seekers, company study, upskilling, reskilling, changing job market, company-specific skills, half-life of skills, reskilling revolution, AI landscape, technology skills, technologically obsolete, AI learning, low-code solutions, technology training programs, partnerships with universities, apprenticeship programs, work-study programs, customer-facing solutions, claims approval, OpenAI, Sam Altman, Microsoft CEO, Satya Nadella, Anthropic, Claude language model, extended context window, API improvements, OpenAI's Playground, Greg Brockman, diversity in leadership, explainability in AI, LIME, SHAP, AI algorithms, project work, industry knowledge, unpaid internships, low-code development, cloud services, ethically responsible AI products. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
On this episode of Windows Weekly, Leo, Paul, and Richard dive deep into the latest OpenAI/Microsoft partnership drama involving Sam Altman's position. They also discuss upcoming EU regulations and their impact on Microsoft products, evaluate NVIDIA's record-breaking Q3 earnings, and reminisce about the classic PC FPS Half-Life. And you thought AI was already controversial... On Friday, OpenAI's board suddenly and unexpectedly fired CEO Sam Altman, kicking off several days of unprecedented high drama. OpenAI president and board chairman Greg Brockman announced that he was quitting in protest. Microsoft announced it had hired Altman and Brockman over the weekend. 95 percent of OpenAI employees threatened to quit if Altman did not come back. Altman began negotiating his return to OpenAI (and major governance changes). Altman is once again CEO of OpenAI. Key takeaway: No matter what happens, Microsoft wins. Wrong takeaway: Nothing changed. Windows: Windows 11 is about to get awesome in the EEA. WHY IS WINDOWS 11 ONLY GOING TO BE AWESOME IN THE EEA??? Microsoft confirms that Copilot is coming to Windows 10 too. "No new features, my ass!" Copilot begins rolling out to Windows 10 in the Insider Program. Release Preview: Copilot in Alt + Tab and on other displays, limited Copilot with local account, DMA compliance, Windows Spotlight changes. Canary: Disable Phone Link in the Bluetooth settings, display Teams contacts in the Windows share window when signed in with a Microsoft Entra ID. Dev: Narrator improvements, File Explorer fixes (wait for it). Redmond, we have a problem with Windows Hello. Earnings learning: NVIDIA continues to soar on AI (Winner). Zoom has settled back down to reality. HP stumbles through its fourth quarter and FY2023: AI PCs FTW in late 2024! 
Lenovo stumbles too, but explicitly predicts industry recovery. Antitrust: Apple, ByteDance, and Meta contest their DMA gatekeeper designations. Xbox: Half-Life turned 25 last weekend and Valve finally remembered it exists. Nvidia's GeForce Now adds Microsoft Store, PC Game Pass, and Ubisoft+ integration - over 1700 games now. Amazon Luna comes to France, Italy, and Spain. Next Call of Duty leaks! Tips and Picks: Tip of the week: Ignite's over, but the videos are forever. App pick of the week: Half-Life. RunAs Radio this week: Azure Operator Nexus with Jennelle Crothers. Brown liquor pick of the week: Willett Wheated 8 Year Bourbon. Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell. Download or subscribe to this show at https://twit.tv/shows/windows-weekly Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Sponsors: wix.com/studio?utm_campaign=pa_podcast_studio_10/23_TWiT%5Esponsors_cta cachefly.com/twit
The AI Breakdown: Daily Artificial Intelligence News and Discussions
After 5 days of tense negotiations, Sam Altman has returned as CEO of OpenAI, bringing Greg Brockman with him. That doesn't mean nothing has changed, however. Join NLW for an exploration of the latest on what caused the rift and where the company goes from here. Interested in the AI Breakdown Edu/Learning Community Beta? https://bit.ly/aibeta ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
AI technology in the hot seat. Apple gives in to pressure on messaging. Short week - plenty of action. PLUS we are now on Spotify and Amazon Music/Podcasts! Click HERE for Show Notes and Links DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar. Love the Show? Then how about a Donation? Follow John C. Dvorak on Twitter Follow Andrew Horowitz on Twitter Warm Up - CROCs CTP is almost over - 4 days until end... Then December starts the CTP CUP 2024! - - Back from an RCL cruise - my, have things changed - Market starting to overheat (KRI +5) - End of Year - Santa Claus Rally - on schedule - PSI - Markets Closed Thursday and after 1pm Friday - Big doings in Crypto World - Pied Pipers (Binance now in hot water) Market Update - WEIRD Action - in the AI Space - Fire, Quit, Oust and then Hire - USD continues to fall - VIX has a 13 handle - BIG news - NVDA earnings What in the? - Sam Altman pushed out over the weekend.... --- Hair on FIRE in Silicon Valley - - Weekend scramble - MSFT picks up Altman to head AI unit - Hundreds of OpenAI employees have signed a letter demanding the board resign or face an employee exodus to Sam Altman's new venture at Microsoft “imminently.” - What is going on? MORE OpenAI - OpenAI named ex-Twitch boss Emmett Shear as interim CEO - Commercialization of product seems to be at the heart of the matter - Microsoft Chief Executive Satya Nadella said in posts on X that Altman would become CEO of a new research group inside the software maker, along with other departing OpenAI colleagues such as outgoing President Greg Brockman, who quit following Altman's ouster. --- Was there a deeper and darker plan here? Hmm - This sounds Off - OpenAI customers are looking for the exits, signaling a possible exodus of business that could devastate the startup. 
- More than 100 OpenAI customers contacted Anthropic, an OpenAI competitor that has raised billions from both Amazon and Google in recent months, over the weekend, according to someone familiar with the situation. - Sounds like companies trying to create a corpse and pick at its remains, or manufactroversy - A manufactured controversy is a contrived disagreement, typically motivated by profit or ideology, designed to create public confusion concerning an issue about which there is no substantial academic dispute. This concept has also been referred to as manufactured uncertainty. BitCoin ETF - I was interviewed about this very topic last week by Paul Barron (he intimated I was a boomer - look up the view) - The US Securities and Exchange Commission has deferred making a decision again on whether to approve the first US exchange-traded fund that invests directly in Bitcoin. - The primary US securities regulator deferred on filings from Franklin and Global X, according to documents Friday. The deferrals come after delays for other filers because both companies had put More Crypto Crap - Changpeng “CZ” Zhao is stepping down as CEO of Binance as part of a major $4 billion settlement between the Department of Justice and the cryptocurrency exchange he founded, according to sources close to the discussions with the agency. - The settlement will be with the DOJ and Commodity Futures Trading Commission; the Securities and Exchange Commission is not participating. - As part of the settlement, Zhao will also plead guilty to anti-money laundering charges brought by the Department of Justice. He is scheduled to enter the plea in federal court in Seattle on Tuesday afternoon, the Wall Street Journal reported. - Binance, the DOJ, CFTC, and SEC had not replied to requests for comment at the time of publication. 
- The SEC charged Binance, and its founder CZ, in June with operating an unregistered exchange and misleading investors by using a Switzerland-based fund Sigma Chain, which was also owned by CZ,
In this riveting episode of the Adams Archive, host Austin Adams takes you on an exploratory journey through a series of compelling and thought-provoking topics. Starting with the storm brewing on TikTok over Osama Bin Laden's controversial 2002 letter to America, Austin doesn't shy away from delving into the complex narratives that are often avoided. He challenges the mainstream outrage and seeks to understand the underlying truths in these contentious dialogues. Following this, the episode shifts to an examination of the recent Israel-Hamas ceasefire. Austin will dissect the nuances of this agreement and its potential longevity, providing insights into the geopolitical implications. The conversation then takes a technological turn, delving into the Michigan Capitol's gun ban enforcement through AI. Austin scrutinizes ZeroEyes' AI technology, raising critical questions about its impact on Second Amendment rights and the future of surveillance. The episode also covers the U.S. Army's reversal of its COVID-19 vaccine mandate decision. Hear the Army's call for the return of the troops who left over the mandate and Austin's take on this dramatic policy shift. Then, gear up for a deep dive into the OpenAI saga, a whirlwind of decisions and employee backlashes that could potentially reshape the AI industry's future. This segment promises to unravel one of the most astonishing episodes in modern business history. Finally, Austin introduces you to the newly elected Libertarian President of Argentina, a figure attracting global attention for his unorthodox approach and bold declarations against the deep state and government overspending. With a fresh crew cut and his signature engaging style, Austin is all set to guide you through these fascinating topics. Don't forget to hit subscribe, leave a five-star review, and get ready for an episode packed with insights, analyses, and a touch of the unexpected. 
All the links: https://linktr.ee/theaustinjadams Substack: https://austinadams.substack.com ----more---- Full Transcription The Adams Archive. Hello, you beautiful people, and welcome to the Adams Archive. My name is Austin Adams, and thank you so much for listening today. On today's episode, we are going to be jumping into, first, what happened recently with TikTok and all these conservative influencers calling out all you young TikTokers out there for talking about, even considering speaking about, this document that must not be named. But I am me, and I will name it, and we will read it. And that is Osama bin Laden's letter from 2002, a letter to America. Now there was a ton of controversy that came out on TikTok about this, a ton of conservative channels that are crying out saying, you should, you should be ashamed of yourself for even considering agreeing with any of his points. And I understand the sentiment, right? We have, um, some wounds from that man as a great country, however, I think that kind of takes away from the point, right? If you're afraid to look at something in the eye, uh, then maybe that's more of a reason to address it head on. So we will read that together. And I believe personally, there are actually some valid points, and hold your thoughts until I read it, 'cause I think you might agree too. Alright. Once we walk through that, we are going to then move on into what has been called an agreement on a ceasefire between Israel and Hamas. We'll see how long that, uh, lasts for and what the actual, uh, the actual breakdown of the deal was. But we'll go into that together. After that, we will go into the Michigan Capitol enforcing its gun ban with artificial intelligence. We'll actually look at the name of this company, which is ZeroEyes, and we will watch some of their advertisements. 
We will see what this technology actually does and talk about what the implications of that could be on your Second Amendment rights in the future, where Big Brother knows exactly who's carrying and when. Following up on that, we will go into the situation with the U.S. Army, who has asked their troops who left because of the COVID mandate: Come back, come back. We need you. Oh, that's silly. That's the only thing we did, where we kicked you out because we wanted to mandate a, an experimental drug on your body and then not pay for the effects of it later. Oh, that thing. Yeah. Yeah. Yeah. Don't worry about that. Um, just, just come back, come back. So we'll actually read that letter together from the Army that they issued. And then we'll talk all about the OpenAI drama that has been going down. If you haven't heard about this, it is crazy. This shit's definitely going to be a documentary in like 10 years from now, five years from now, who knows, three years from now, if AI can put it together fast enough. Um, this was one of the wildest sagas that you've seen in modern business history. Okay, so we will talk about that. We'll walk through what the situation is. I am pretty... astounded by the way that this went down. They essentially could have just dismantled a multi-billion dollar, probably the single most powerful entity in the prospect of the future of humanity as standing today, over a weekend and over a split-second decision, which was not very well thought through, which ended in 200 or some, sorry, 95 percent of their employees threatening to leave and go to a different company. So we will actually read that letter today together, too. Wow. We're reading a lot of letters. I didn't realize that. And then, uh, last but not least, we will talk about the, uh, the Libertarian President of Argentina, um, that was recently elected, and he has some wild moments, but I also have some, some agreements with him. And I, they're calling him this far-right 
crazy guy, because he's talking about the deep state and saying he wants to dismantle the over-bloated government. And, uh, so lots of interesting stuff. And then he went through the parade with a chainsaw, saying he's going to cut down government spending. So wild, dude, but I'm all for it. Alright guys, that's what I got. Go ahead and hit that subscribe button. Leave a five star review. If you are watching this and not just listening, you'll notice that I cut some hair on top of my head. So, uh, essentially, uh, got, uh, a, uh, crew cut now. So, you know, if you see me on Instagram, I might look a little bit different than, than you saw me before. All right. That's what I got for you guys. Subscribe, leave a five star review, and yeah, let's jump into it. The Adams Archive. All right, all right, the very first topic that we are going to discuss today was the recent document from Osama bin Laden, which went viral on TikTok. Now, there were over 9 million views mentioned, but TikTok tried to diminish it in a recent, uh, recent note. I believe it was on Twitter even, or X now, as the kids call it. Uh, but, I digress. Let's jump into it. This is a letter that was written by Osama Bin Laden in 2002, which was a letter to America. Now, in the way he, that he breaks down this letter, it's, it's hard to argue with some of the... thoughts that he, the way that he portrays the United States. And so, we'll read that full letter together. But first, let's read this, which breaks down how this all went down and why this even came to fruition. Because I never read this document until I heard about this. And the way that I heard about it was all these conservative influencers that were coming out and saying, how dare you, these young kids, coming out here siding with the terrorists. How dare you read this letter. How dare you say that you agree with any of his points, that he had valid opinions. How dare you? And we go back to this, this idea of, of the good guy and the bad guy, right? 
And this has been a theme more recently in the Israel and Palestine and Hamas conflict for me, which has been reconciling with the fact that we've almost always been told, whether it be through Hollywood, whether it be through music, whether it be through plays and books, and everything that we've ever been told is that there's a good guy and that there's a bad guy. And that philosophy, as I've come to know it now, today, is generally flawed. And the reason that I say that is it's... it's far more complicated than we're giving it credit, right? There's generally not just a good guy and not just a bad guy, right? There's absolutely people who do acts of malevolence and horrific, atrocious acts in the name of being a terrible person and just inflicting pain on people. That is true. That is factual. But the majority of the time that you see these national and world stage conflicts, it's generally not the case. Most of the time, what you'll find is both sides think that they're the good guy, right? And not just both sides as this radical extreme side of things on the far, far terrorist organizations. It's not just them. It's the people behind them, the general population; they have some... with these people, that there's a good guy and there's a bad guy. And, and so I think the only way that we reconcile and we start to actually pull the curtains behind the military-industrial complex is by, by recognizing that it's not that simple. There's not generally a good guy and a bad guy. This isn't, and even when you look at the old western, you know, westerns, where they try to portray it, you know, there's cowboys and there's Indians, and those are the savages, and we're the upstanding, you know, uh, enforcers of the law, right? It's again, it's just generally not that simple. 
And you, and as you start to take this framework and start to untangle the programming that you've been given, which is that there's, again, a good guy and a bad guy, as you start to untangle that, you can almost go back through almost any conflict in history. I say almost, almost for a reason, but you can almost go back and look at any framework, any, any conflict, any large scale war, any, and you start to pull on some of the threads, like, okay, there's a, there's an idea in debate, and then, uh, I guess, I don't know what, what to classify it as, but there's the idea of steel manning within an argument, which is essentially, if I was taking the, the opposing individual or opposing force's position and trying to be as generous as possible and trying to make the most compelling argument for their side of the argument, for them being the good guy and me being the bad guy in this instance, how would I do? So how would I take critical thinking, and how would I critique my own position? And, and if you can do that, right, if you can actually steel man the argument and, and look at the, the conflict or the situation or the debate point that you're arguing, and say, okay, if I had to take their position, if I had to steel man the case that X, Y, and Z was the good guy, and now I'm the bad guy, how would I do that? And if you take that into the equation, you start to see that, okay, maybe there are some compelling opinions on maybe why we shouldn't be in this conflict to begin with, right? And maybe it's a sign of peace rather than a return of fire in some instances. Now again, that's not to say there's not... there's not reasons for war at certain points and for certain reasons. And if we are gonna say that, you know, in the instance of Israel vs. 
Palestine, I would much rather them come out and just say, hey, we're taking our land back, almost the same way, you know, that they tried to frame that as what it was for Russia. Right? Russia not wanting Ukraine and NATO to infringe on their territory, or at least within the immediate vicinity of it. They even tried to go, oh, they're colonizing Ukraine. It's, it's like, okay, that's not really what's happening here. So even if we were to be generous within the Israel and Palestine, uh, conflict and say, okay, they're doing this as a response and not just to colonize the area, which it seems more and more likely that they're just trying to take that area over. However, we'll get into that ceasefire a little bit later in here, but, but my point in this is that when you go back and you start to do that unwinding and start to take that steel man argument and look at some of the things that have happened in history, whether it be to the United States or by the United States against other countries or individuals or, uh, organizations, you can start to, at the very least, steel man your argument. If you want to say that the United States is the good guy and every other country we've ever been in conflict with is the bad guy, and you want to die on that hill, you better, you better be able to steel man that argument. And if you're not, you're just blindly following a religion. Right? You're not, you're not even reading the textbooks, right? You're just blindly having faith that, oh, daddy has my back, right? Daddy government knows all, and is, is essentially, you know, giving blind faith into that institution, which we already know is corrupt. So whether you're right or you're left, you should think critically about these things and go back and start to pull on those threads. And that's what we'll do here today. So on that note, this comes from Time Magazine. And the article title is Why Osama Bin Laden's Letter Went Viral. 
Now, within this letter, he mentions Palestine several times. Okay, and I'll share this with you. So if you're on YouTube, you'll actually be able to see what we are looking at here together. If you're not on YouTube, you can always join us over there. And it's just the Adams Archive. So if you type that into YouTube at the very top, you'll be able to see what we're looking at. However, let's move on. It says, two decades ago, Osama bin Laden, the Al Qaeda leader behind 9/11, it says, probably also next to the three-letter organizations, laid out his attempt, his attempt at justification for the attack against the U.S. that killed nearly 3,000 people, in his letter to America. This week, that same letter went viral on TikTok among a new generation, many of whom are debating the Israel-Hamas war and the role played by the U.S. For some, a big part of Bin Laden's justification, American support for Israel's occupation of the Palestinian territories and what the U.N. deems a violation of international law, resonates with what's going on now in the Middle East, leading them to renew calls for a Gaza ceasefire. In one video, which was live on the app as of Thursday afternoon with more than 900,000 views, a TikToker made the claim that everything we learned about the Middle East, 9/11, and terrorism was a lie. Others on social media have criticized the video for sympathizing with terrorists and legitimizing violence. In a statement posted on X on Thursday, TikTok said content promoting this letter clearly violates our rules on supporting any form of terrorism. We are proactively and aggressively removing the content and investigating how it got onto our platform. So let's watch this video. Let's see if it's actually still there. It might not be anymore, if they were removing all of it. Yeah, and there it is. It's gone. 
Who knew? Censorship for the win... uh, in a statement posted on X on Thursday, TikTok said, nope, we just read that. The company also says that the content did not reflect a widespread trend, but rather just a few posts on the platform. The number of videos on TikTok is small, and reports of it trending on their platform are inaccurate. This is not unique to TikTok and has appeared across multiple platforms. And videos shared under the hashtag letter to America had over 14 million views on Thursday, CNN reported. But as of Thursday afternoon, the phrase could not be searched on the app due to guideline violations. This isn't the first time that TikTok has faced controversy for what's been shared on the app. The company has responded to Republican criticisms of the platform being biased towards pro-Palestinian content by pointing to polling that shows younger people are more sympathetic to Palestinians. While tens of thousands of people recently publicly showed their support for Israel and the U.S. condemned antisemitism in France, hundreds of thousands have taken to the streets in pro-Palestinian protests around the world, calling for a ceasefire to protect civilians in Gaza since the start of the war on October 7th. Okay, now we have context. Now, where I originally went to search for this was the Guardian. And if we go look at the Guardian's website, which we can do right now, the document was originally set here. On, uh, November 15th of 2023, it was removed. It was originally placed on this site on Sunday, November 24th of 2002. So 21 years later, they decided that, oh, now this is dangerous for people to read, because they're actually reading it. So you can actually find this document, and I will keep it in the, uh... I'll actually send this out. So if you're on my Instagram, go to at The Austin J Adams. So, The Austin J Adams on Instagram. 
And if you comment on my most recent video and some of my other videos that will be about this, after I get some of this content out, I will send this letter to you. Okay, so let's go ahead and read the letter, the letter that must not be named according to TikTok and the Guardian. Here it is. All right. It says, page one: In the name of God, the compassionate, the merciful, to the American people, peace be upon those who follow the righteous track hereafter. The subject of my talk to you is the overwhelming control of capital and its effect on the ongoing war between us. I direct my talk specifically to those who support real change, especially the youth. I say from the onset, your former president warned you previously about the devastation, or the devastating Jewish control of capital, and about a day that would come when it would enslave you. It has happened. Your current president warns you now about the enormity of capital control, and that it has a cycle whereby it devours humanity when it is devoid of the percepts, or the precepts, of God's law, and says, in parentheses, Sharia. The tyranny of control of capital by large companies has harmed your economy as it did ours, and that was my motivation for this talk. Tens of millions of you are below the poverty line. Millions have lost their homes, and millions have lost their jobs to mark the highest average unemployment in 60 years. Your financial system in its totality was about to collapse within 48 hours had the administration not reverted to using taxpayer monies to rescue the vultures by using the assets of the victims. As for us, our Iraq was invaded in response to pressure from capitalists with greed for black gold, and you continue to support the oppressive Israelis in their occupation of our Palestine in response to pressures on your administration by a Jewish lobby backed by enormous financial capabilities. Hmm. Okay. So let's break that down a little bit. We start from the very beginning. 
And he talks about the Jewish control of capital, right? Your former president warned you previously about the devastating Jewish control of capital. This has been a pretty consistent conversation, right? Surrounding the control of media, within media, about the control within Hollywood and news corporations. And so this is in line with some very recent conversations that people are having, and that probably leads to why this had some effect of ringing true, especially when you bring in something like Palestine and the conflict, and him referencing that occupation all the way back 21 years ago. So now what he says is, the tyranny of control of capital by large companies has harmed your economy as it did ours, and that was my motivation for this talk. Tens of millions of you are below the poverty line. So now he's calling out the lack of care from our government surrounding people of low income, surrounding people who are homeless, surrounding, you know... talking about the financial system, in its totality, was about to collapse within 48 hours had the administration not reverted to using taxpayers' money to rescue the vultures by using the assets of the victims. Essentially saying that the banking corporations, the banks, all lever, when they went bankrupt, they essentially took taxpayer money and then utilized that to bail the banks out, when the people who suffered the most from that were not the organizations, it was the individuals who banked with them, right? So he's saying your own government used your money to help the people who oppressed you to begin with. As for us, our Iraq was invaded in response to the pressure of capitalists, then talks about their greed for black gold, meaning oil. 
And you continue to support the oppressive Israelis in their occupation of our Palestine in response to pressures on your administration by a Jewish lobby backed by enormous financial capabilities. Now, what we look at there is, is the discussion surrounding the, the fact that... and you continue to support the oppressive Israelis in their occupation of Palestine in response to pressures by your administration. Okay, so what, what we can get into from that is, is realizing that this has been a longstanding issue that has been bubbling below the surface for a very long time, right? This discussion around the powers that be, the, the individuals that we know, that the, you know, however many families that, you know, control a massive amount of wealth, right? If you want to get more into that, go back to the episode that I did on, uh, The Creature from Jekyll Island, which is a great book that was written surrounding the end of the gold standard and the rise of the Federal Reserve. Now go back and look at who the people were that were involved in that conversation. I believe it was 13 families that were on a private train together and, essentially, on Jekyll Island, which is a small island, came up with the idea of the Federal Reserve and then implemented it. Perfectly. And they now control all of the world as a result of controlling the largest corporation, or the largest country's capital, being able to essentially print money at will with no repercussions to themselves, and just to the American people, right? In walks inflation. So now it goes on to say that an observer of the policies of the new administration relieves, or realizes, that the change is tactical and not strategy, or strategic. It does not at all agree with this, the change you seek. There are very many indicators of this, especially concerning important matters related to your own security and economy, particularly the ongoing war between us. 
The previous administration was successful in implicating you in the wars against us under the premise that they are necessary for your security, according to the promise that it would be short and would finish in six days or six weeks. Six years have passed, and that administration is gone without realizing the victory. The man calling for change promised you victory in Afghanistan and set a time for withdrawal. Before the end of the set time, patriots from the previous administration came and asked for an extension of six more months. If the six-day war started by President Bush has not been finished in six years, then the wise men should question how long a six-month war would take, and whether you would be able to fund a war that requires a large amount of money, that weakens your economy and your dollar. Interesting. Okay, so what he's saying there is essentially: they said this would take six days, then they said six weeks, and now it's been six years, right? And how much are you willing to sacrifice, as a country, of your financial stability as a nation, by simply coming over here and looking to go after our oil? Which, you know, takes us back to the weapons of mass destruction conversation, which were never found. So, moving on here. That's interesting to me. It's like the amount of people that are now realizing that there's some cracks in the armor, right? That the great nation that we were told that we were brought up as patriots for, that, you know, I myself joined the military to defend, right, maybe there's some questions that should be asked about whether or not we should be in these wars. And you guys know.
If you've listened to me enough, you know that at this point there's very little you could do to convince me that we should be at war with essentially anybody unless we're specifically defending our home territory, and nobody's encroached upon the United States since, Lord, the British. So, moving on. This says: as for Obama leaving one third of his soldiers in Iraq, and the statements from his administration about this, especially from Odierno, about the possibility of Obama ordering the return of the forces he took out of Iraq, it would have been better for him had he disagreed with the ethics of the previous administration and adopted the truth as a friend, and told you that he would not withdraw from Iraq, which may not serve U.S. interests, but is in the interest of the large corporations, right? So he's talking about the war machine. He's talking about the military-industrial complex. It doesn't serve the U.S., but it serves the large corporations, meaning the same corporations that we know own all the other corporations, the same corporations who own all of the politicians, the same corporations who own all of the military companies like Raytheon, or at least hold the largest share percentage within those companies. It says: the course of the policies of the present administration in several areas clearly reveals that whoever enters the White House, even with good intentions to safeguard the people's interests, is no more than a train operator. His only task is to keep the train on the tracks that are laid down by the lobbyists in New York and Washington to serve their interests first, even if it's counter to your security and economy. Any president who tries to move the train from the lobbyists' tracks to a track for the American people's interests will confront very strong opposition and pressure from the lobbyists.
Your president described the decisions by the court in favor of corporations to intervene in the political arena as a victory. But it is not a victory for the American people, except for the big corporations. Okay, so now what he's saying is that your president is controlled. No matter how good his intentions are, if he goes to fight the machine, if he goes to do what's in the best interest of the American people, he will be met by the corporations, right? So that's why, when people are saying, oh, there's merit to this: yes, there's absolutely some merit to this. Our government has been commandeered by large corporate entities whose only interest is making their entities more profit. And generally, the best way to do that is by siphoning it from the people, not by serving the people's best interests. And we've talked about this: our system is fundamentally flawed. Almost everybody who goes in with good intentions gets spit out, or ends up with 13 indictments before they go for re-election. We saw exactly that play out with Donald Trump. The entire machine, all the news companies, the entirety of Hollywood, all of the journalists that were a part of any legitimate organization, all of them conspired. Even the FBI and the CIA did the same thing with the letters that they signed about the Hunter Biden laptop. Right? It says: the course of the policies of the present administration in several areas clearly reveals that whoever enters the White House, even with good intentions, to safeguard the people's interests, is no more than a train operator. His only task is to keep the train on the tracks that are laid down by the lobbyists, even if it's counter to your security and economy. Now tell me you disagree with that, because I will argue that point with you all day. There is no doubt about it that it is right, and...
That it is also a right for the administration to support the oppressive Israelis for the continued... let's get context, I think I maybe skipped something. It says: there is no doubt about it that it is a right, and it is also a right for the administration to support the oppressive Israelis in the continued occupation of our land and the killing of our brothers, marking a victory for the Jewish lobby. The president was not able to defend you against the security and economic loss. The way for change and freeing yourselves from the pressure of lobbyists is not through the Republican or the Democratic parties, but through undertaking a great revolution for freedom: not to free Iraq from Saddam Hussein, but to free the White House, and to free Barack Hussein so he can implement the change you seek. It does not only include improvement of your economic situation and ensuring your security, but, more importantly, it helps him in making a rational decision to save humanity from the harmful gases that threaten its destiny. Let's read that again. So what he's saying there, again: the way for change and freeing yourselves from the pressure of lobbyists is not through the Republican or Democratic parties, but through undertaking a great revolution for freedom, not to free Iraq from Saddam Hussein, but to free the White House and to free the president so they can implement the change you seek. Free them from who? Free them from the lobbyists. Free them from what lobbyists? Well, what he's referencing here, he says, is the Jewish lobbyists, the individuals who own those large corporate entities, who control a large portion of Hollywood and the news entities, right? So again, and this is far different from when everybody wants to ring the antisemitic bell, nobody's saying anything about the religion, and nobody's saying anything about the people in those areas who hold the title of being Jewish.
No, it just so happens that the people that we're discussing here have a Jewish background and are from that origin. It does not mean anything against the people themselves. It means that there is a group of people who share these characteristics, and that is how they push their agenda through. Okay, so it's very important to make that distinction. No, it is not all Jewish people that are running Hollywood. It is not all Jewish people that are controlling the White House. It has nothing to do with the fact that they are Jewish, or their beliefs, or their religion, or where their origin is from. It has to do with the fact that there is a very small handful of people and families in power that all share a characteristic that unites them, which happens to be that cultural background. So everybody crying antisemitic when you say, oh, don't agree with the fact that there is a strong Jewish lobby: you're missing the point. Love Jewish people. Love all my people. I have nothing against any class or group or culture or background or ethnicity or race or religion. To me, that is such a low-frequency, beta, uninteresting perspective to have, one that has just no value. There is no reason to draw a distinction between people and say this group is this thing. No. But when a small group of people who hold those powerful positions share that uniting culture, then it's going to be referenced, which is an important distinction to make. Okay, moving on. The British military governor in the United States used to have the right to appoint judges and mayors. Similarly, the corruption is deep and rooted now in all the higher authorities, thus giving authority from these offices over to corporations. Hmm.
Subsequently, the higher court adjudicated their support of political financing by corporations under such circumstances. Now he's talking about the lobbying. Reading the book by the intellectual Thomas Paine helped your fathers in the revolution against the oppressors. It is useful for you to read it under the current similar circumstances. You are in need of people like Thomas Paine to publish books pointing out the similarities between the two phases, and that will have a similar effect. You are also in need of men with courage and initiative like those of your forefathers at that time, when they refused to allow one company to harm the interests of the United States, a company that had a monopoly on tea and its prices. He's talking about the Boston Tea Party, right? Talking about the, um, what is it... the East India Trade Company, is that what it is? Yeah, the East India Company. That's what it was. I got there eventually, guys, before even Google told me, and you can reference the YouTube video to see it. All right. So it says: there are now many companies that endanger the United States economy, which continues to be vulnerable to collapse, and they also formulate the policies for the White House. They threw hundreds of thousands of soldiers against us and have formed an alliance with the Israelis to oppress us and occupy our land. That was the reason for our response on the 11th. Palestine has been under occupation for decades. Now, what he's referencing there is obviously September 11th.
Now, obviously that's not a justification to commit acts of terrorism against random civilians, which has been the theme this whole year with the Israel and Palestine conflict. So again, I don't agree with that. It's a horrific way to respond. The way that you respond to this is what this letter was attempting to do, just done more effectively. Because the fact is nobody read this, and now all of a sudden people are reading it and finding value in it. Anyways, it says: Palestine had been under occupation for decades, and none of your presidents talked about it until after September 11th, when Bush realized that your oppression and tyranny against us were part of the reason for the attack. Then he talked about it, the necessity for two states. Obama is trying to address the issue with the same solutions suggested by his predecessors. They are fruitless solutions, not of concern to us. If you want a real settlement that guarantees your security in your country, and safeguards your economy from being depleted in a manner similar to our war of attrition against the Soviet Union, then you have to implement a roadmap that returns the Palestine land to us, all of it, from the sea to the river. It is an Islamic land, not subject to being traded or granted to any party. In conclusion, be assured that we do not fight for mere killing, but to stop the killing of our people. It is a sin to kill a person without proper justifiable cause, but terminating his killer is a right. You should be aware that justice is the strongest army, and security offers the best livelihood. You lost it by your own making when you supported the Israelis in occupying our land and killing our brothers in Palestine. The road to safety starts with the stopping of aggression. And again, he's saying the way to combat aggression is not more aggression, and the way to stop people from killing your people is not by killing their people.
Fundamentally disagree with him on that. Even in his own argument there, he says that it is a sin to kill a person without proper justifiable cause. Okay, there were 3,000 people on 9/11 that you killed without proper justifiable cause, regardless of the country that they lived in. The letter finishes: Palestine shall not be seen captive, for we will try to break its shackles. The United States shall pay for its arrogance with the blood of Christians and their funds. Peace be upon those who follow the righteous track. All right. So again, I fundamentally disagree with a lot of what he says there, but there is merit to some of the points that he makes surrounding lobbying, surrounding our president not being in control regardless of good intentions. Several things that he said there hold true in the awakening that we've seen over the last three to four years. So when you see all these people shouting, saying that anybody who reads this and agrees with any of the points made is a terrorist: no, you're missing the point. And if you hadn't already gotten to the point where you realize these things without reading a letter from Osama bin Laden, maybe you should do that first. There are far better ways to get to this point, from far more intelligent, far less polarizing, far less bloodthirsty people than Osama bin Laden. So there's that; you could definitely get this point across without having to hear it from him. But you see the censorship, you see the people coming out and calling everybody a terrorist who reshares this, or who says that there's any merit to some of the points that he made about the occupation. So I just wanted to get that out there. I think there's value in actually reading through these things, and not just hearing the headlines and assuming that everybody who makes any point about this is siding with a terrorist organization.
Because again, I fundamentally, fundamentally disagree with the acts that were committed on behalf of this ideology. But that doesn't mean that there's no merit to some of the points that he made about the United States of America being flawed, because it is. And if you disagree with that, you're very likely brainwashed at this point. All right, so the next thing that we're going to discuss is that Israel and Hamas have agreed to a temporary ceasefire for humanitarian purposes that includes a hostage release deal, which comes from Fox News. Let's go ahead and read this article, where it says: the Israeli government is committed to the return of all hostages home. Tonight, the government approved the outline for the first stage of achieving this goal, according to which at least 50 hostages, women and children, will be released over four days, during which there will be a lull in the fighting. The release of every 10 additional hostages will result in an additional day of respite. The Israeli government, the IDF, and the security forces will continue the war in order to return all the hostages, to complete the elimination of Hamas, and to ensure that Gaza does not renew any threat to the State of Israel. The ceasefire was officially announced hours after Israeli and Hamas leaders said Tuesday that negotiations were in their final stages. Both sides ultimately agreed to the conditions. Qatari negotiators helped broker the agreement. Under the deal, Israel's government has agreed to temporarily stop its pursuit of Hamas, including its ground invasion of Gaza and its airstrikes, for humanitarian purposes. Also, Hamas has agreed to release dozens of hostages in tandem with Israel agreeing to release Palestinian prisoners at a three-to-one ratio. Fox News's Trey Yingst reported Hamas leaders would release one hostage for every three Palestinians that Israel releases from its prisons.
So that means that Israel essentially has to have three times the number of hostages, or rather prisoners. Hamas, which governs Gaza, took about 240 hostages from Israel during its terror attack on October 7th, when it invaded Israel and killed approximately 1,200 people, mostly civilians. The terror group said at the time that it took enough hostages, which included Israelis, Americans, and other foreign nationals, to free all Palestinians in Israel. Interesting. So you'll see the first hostages come out over the course of Thursday. Netanyahu met with his war council Tuesday afternoon, then the security council, and then his full cabinet before the agreement was announced. Ahead of the meetings, he said he hoped there would be good news. Earlier Tuesday, Hamas leader Ismail Haniyeh and Mark Regev, the senior advisor to Israel's prime minister, Benjamin Netanyahu, openly said a deal was close. The deal being: hey, we'll stop for three or four days and we'll exchange hostages; for every one that you give us, we'll give you three. Um, okay, so not exactly what I was thinking. This is not a long-term ceasefire. This is just a ceasefire for three or four, potentially five days, where they release their hostages together. And then Israel will go back to leveling the city of Gaza, apparently. Yeah, that's what it seems like. Okay, not exactly what I was thinking, but as you guys know, when things pop up and we have breaking news, you'll get it while we're here. So again, this was Israel and Hamas agreeing to a temporary ceasefire hostage release deal, including freeing three Americans. Now, the original headline of this made it seem like it was more of a long-standing agreement, which obviously it's not. So let's move on to our next topic, which is that the Michigan Capitol is going to enforce gun bans with artificial intelligence.
There's a software that has been created that allows them, using video surveillance footage in real time, to identify threats. And by threats, they mean anybody who's potentially carrying a weapon, whether it be lawful or unlawful. So let's read this article. It comes from bridgemi.com, and it says: Michigan Capitol to enforce a gun ban with artificial intelligence. Now, to me, this signifies some dystopian stuff, right? My concern around this is that now that this has been created, you can't put it back in the box, right? You've opened the box. Now there is a software that will allow them to identify people who have weapons on them, whether lawfully or unlawfully, because it is our right to keep and bear arms. It is our right to conceal weapons. It is our right to open carry weapons where the laws allow. So now you can be punished for that. You can be approached by police, and you can have this technology implemented in God knows what way, right? We don't know how this is going to be used, for sure. How do you make sure that this isn't going to be used to, I don't know, stop people from defending property or defending life at rallies when they're allowed to open carry? In walks Kyle Rittenhouse. To me, the issue that I have is not making sure that we're more safe in our capitol buildings. It's: what is the actual use case for this going to be, right? When you're talking about smart cities and things like that, and totalitarian surveillance. Michigan itself just put up 400 cameras on one highway alone. 400 cameras, Michigan just put up, to surveil its own citizens in the name of stopping violent crime. How do 400 cameras on a highway stop violent crime? That's not what it's for. It's to surveil the general public.
If you think that that data just stops there, that they're scrubbing through the millions of cars flying by every single day to look for one, two, three, four people? No, there's no return on investment there, right? They want to surveil people. They want to know where you're going and how you're getting there. I challenge you to drive down the highway right now, drive five miles in any city, without seeing a camera up in the sky watching you drive. It infuriates me. It's so frustrating that you can't even drive your car on a road that they built with your tax dollars without daddy government, Big Brother, sitting there watching you, tracking your license plate. This says: authorities in the Michigan State Capitol are beginning to use artificial intelligence to detect any firearms in a bid to increase security amid a growing national wave of political threats and violence. Show me a recent violent gun crime at the Capitol that justifies utilizing this software. In fact, why don't we use this software in school zones? Why don't we put this software outside of every single school in America? Instead of funding Israel's war, instead of funding Ukraine's war, why don't we take this software and actually use it for some implementation that people want? Because the implementation that people want is not at capitol buildings that already have security, armed security at that. Why not put it in school zones? Why not put a video camera outside of every single school that identifies threats that way? I'm cool with that, because you shouldn't be open carrying by a school anyways. Company officials at the ZeroEyes firm announced the deployment Monday, saying Michigan is the first state capitol in the nation to use its gun detection technology, which was also implemented last year at Oxford High School in the wake of the mass shooting there.
The system, which analyzes footage from existing video cameras to identify brandished or otherwise drawn firearms, represents the latest in a series of escalating security measures at the Michigan Capitol following armed protests in 2020. I'm sure you'll be fine. The Michigan Capitol Commission earlier this year approved the installation of metal detectors inside the building and implemented a full indoor gun ban, except for lawmakers with a concealed weapons permit. Except for lawmakers with a concealed weapons permit! So the lawmakers get to protect themselves, but not the citizens who are there, right? Interesting. Commissioners last month unanimously approved the lease with ZeroEyes, a Pennsylvania-based firm, which is expected to cost about $3,000 a month. The money will come from existing security funding first proposed by Governor Gretchen Whitmer, who was the subject of a kidnapping plot orchestrated by the FBI, by men who also discussed storming the capitol. You mean the FBI agents? And those people, I'm pretty sure, got released because it was entrapment. It's just another layer of protection, said Rob Blackshaw, executive director of the State Capitol Commission. Our ultimate goal, as we've said from day one, is to decrease any potential of a mass shooting and increase our level of safety for the people who work here and visit here. The artificial intelligence system will tap into existing surveillance video at the Capitol, including inside the building and on the outside grounds, where openly carried firearms are still allowed. Where openly carried firearms are still allowed. So again... If a gun is identified, images will be immediately reviewed by trained specialists at ZeroEyes, including military and law enforcement veterans, the company said Monday. If those specialists confirm a threat, they'll send alerts and other actionable intelligence to Capitol police in a matter of seconds, according to the firm. Hmm.
So how do you identify a threat from somebody who's not a threat? Do they wear a red jersey? Anyways, here's the video by ZeroEyes, so you can see what this technology is all about, and we'll go ahead and watch it together. ZeroEyes is a team of former Navy SEALs and military special operations veterans teamed up with elite technologists with a mission to save lives. We use your existing video cameras coupled with our artificial intelligence gun detection to prevent threats rather than react to them. There is no better purpose right now, and no more difficult problem to solve, than mass shootings. We go over the existing security cameras at a building, on the interior and exterior, at entrances, exits, choke points, bottlenecks, and inside the hallways. So when a shooter walks up and takes out a weapon, the ZeroEyes system will pick that weapon up. And our military-trained operations experts verify every detection before sending out alerts to local staff, security, and the local 911 center to get the alert to first responders. It takes about three seconds from the time a gun enters the frame of a camera to the time an alert is sent. So now they know what the shooter looks like, what type of weapon they have (we have an armed subject in the southwest vault), how many there are, and their last known location. First responders on scene have access to this information before shots are fired. That will allow them to go directly to the shooter and prevent more violence from occurring. Drop your weapon now! Drop the gun! Drop the gun! So we can really decrease response times and save lives. Turn around! We're going to stop threats at first sight, not first shot. Mass shootings are devastating. Current alternatives are reactive. We need a proactive solution that mitigates gun violence, provides actionable intelligence, and reduces response time, ultimately saving lives, while at the same time respecting our privacy and rights. ZeroEyes is that solution.
Save time, save lives. Interesting. So, I don't disagree with the premise of the application when it's in the context of school environments. But literally probably only school environments. It just doesn't seem to me that there should be any other use case for this other than schools, because when you put it in the context of government and organizations, the potential is for this software to be leveraged nationwide. When you have basically a surveillance camera on every single corner, now, within three seconds of anybody ever carrying a weapon that they legally hold according to our Second Amendment rights, they can be identified and immediately, immediately have authorities contacted for no other reason than lawfully carrying a firearm. Right? And like I said, you have Michigan putting up 400 cameras just on their highways alone, with your tax dollars, to surveil you. And for $3,000 a month, they too can make sure that you're not actually exercising your rights as an American citizen. So, you know, when we talk about a surveillance state, that's a terrifying application. And again, in the context of school shootings, with this being leveraged within schools and the perimeter of schools, I don't have any problem with that. I think it's a great idea. I like the idea of proactive identification of threats. But it doesn't end there. It won't end there. And that's where I have a problem with it. All right. So, again, I don't think that there's any way to remedy that; the cat's out of the bag. And obviously there are going to be military applications for this, and government applications for this, but I don't think that we have to allow it, right? We can push back against our tax dollars being used for these things unless the application is being used in a way that is useful to the people. And useful to the people, to me, does not mean the Capitol building.
It doesn't, because they already have armed security there, and we as the people of the United States of America have a right to carry firearms. Now, if this was communist China, just imagine the applications of this in communist China. And that, my friends, is coming to a city near you in the very near future, right? Oh, you can actually identify a concealed weapon down the road. Maybe they can see people printing on the side of their waistband, and now all of a sudden it bumps your social credit score, right? Like, where does this end? And this is obviously just the beginning. So that's more so the terrifying application of this. All right, moving on. The next thing that we're going to discuss here is that the U.S. Army asked the troops who they fired, who they gave dishonorable discharges to, to just come back, right? The people that they forced out of the military, right? The U.S. Army kicked people out of the military for not getting the vaccine, for not agreeing to an experimental drug being injected into their bodies, and now they're telling them to come back. Come back, we won't even mandate that to you. And I think there's a bigger play at hand here. I don't think it's just as simple as them saying, hey, we're missing recruiting numbers. I think it's bigger than that. I actually think the reason that they're doing this is, more than likely, to mitigate legal costs. So let's look at this together. This comes from The Post Millennial, and it says: the U.S. Army asks troops who left over COVID mandate to come back as war looms. Now, I don't know if that's the reason why. Again, I think this might be more of a legal play than anything, because if you're in the Army, you're not going to be able to sue the Army, right?
But there could be a large class action lawsuit against the institutions that mandated this, especially when it was the federal government. The United States Army is inviting service members to return to the branch who had been separated over refusal of the COVID-19 vaccine. This comes as the U.S. military struggles to achieve targeted recruitment numbers due to years of woke political activism, which has reportedly turned off its primary recruitment base. And you see this; I think it was the Air Force now doing special forces recruitment videos where it's all white. How dare you! How egregious! Could you imagine a military that was mostly occupied by straight white men who don't dress up as women on their weekends to shake their ass for dollar bills at a gay bar? Like, imagine the world. The United States Army is inviting its service members to return to the branch who had been separated over refusal of the COVID-19 vaccine. The Army issued a recent letter to former service members informing them that they can apply to return to service following the rescission of the vaccine requirement. The Army had enacted forced separations for unvaccinated service members early last year, and announced in early 2023 that they had rescinded the mandate for current service members and applicants. The letter, uploaded to X, reads, Dear former service member, and I'll read it here verbatim for you: Dear former service member, we write to notify you of new Army guidance regarding the correction of military records for former members of the Army following the rescission of the COVID-19 vaccination requirement.
As a result of the rescission of all current COVID-19 vaccination requirements, former soldiers who were involuntarily separated for refusal to receive the COVID-19 vaccination may request a correction of their military records from either or both of the Army Discharge Review Board or the Army Board for Correction of Military Records. Individuals may request a correction to military personnel records, including records regarding the characterization of a discharge, by submitting a request to the ADRB or the ABCMR online. Individuals who desire to apply to return to service should contact their local U.S. Army Reserve or Army recruiter for more information. Individuals may locate an Army recruiter by visiting that website. How about no? How about this: if you wanted to mandate upon my body an experimental drug that we now know caused harm, then you did not have my best interest in mind. You had the best interest of the pharmaceutical companies in mind. You had in mind saving political face with half of the country, the half that wanted to call on people to be separated from the workforce, to lose their jobs and their livelihoods, and not even be able to see their grandma in a hospital if they didn't get vaccinated. Right? We went so crazy during COVID, and now you see them walking everything back. Even the Army is walking it back and letting people join again, now that it has hit some of the lowest recruiting numbers we've seen in a very long time, in one of the highest-tension times in American history. So no. You have to look at this and take a stand and say this entity did not have my best interest in mind. And obviously most people in the military know that the military does not inherently have their best interest in mind. Let's be very clear about that.
But in this case, the only thing they had in mind was how to make profits for the pharmaceutical companies, which is actually where vaccines became popularized to begin with. We can touch on that fairly quickly. The reason that vaccines became mandated even in schools goes back to the penicillin manufacturers. Where vaccines became very prevalent was penicillin shots during World War II; because of the war, penicillin was used constantly. And the people who came up with the penicillin shot, and I believe if you go back and look it was Pfizer, well, I have a book back here called Code Blue. See if that knocks over my whole thing here. This book, Code Blue, is a tremendous read. It's about the inside of America's medical-industrial complex, and it goes back into the history. I actually did a whole breakdown of this on the very first episode of the Red Pill Revolution podcast, which you can find in the feed that you're on right now. The reason penicillin became such a prevalent drug, and the reason vaccination was mandated in schools, was specifically that they had built so many industrial plants to produce penicillin that they needed to keep perpetuating that profitability. So instead of shutting down all their penicillin manufacturing plants, they started to spend their money lobbying Washington to make it mandatory within schools that you vaccinate your children. And the reason they were doing this for soldiers was that people were coming back with gangrene and all types of shit in World War II. And when I went into the military, we called it the peanut butter shot.
One of the very first things that you get is a big needle shoved in your ass so that they can inject you with penicillin. For no reason at all, by the way. None of us were sick, well, maybe not none of us, but I wasn't sick when I went in. I didn't need penicillin, but they just give it to you because you're cattle. That's all you are to them: cattle. So when you talk about what happened here, you realize that it was far more about appeasing the pharmaceutical complexes that probably lobbied to make it mandatory within the military, and about who that helped at the very top of the military, where these decisions get made, right? There's lobbying in that aspect too. So I find it comical. Absolutely not. You showed your hand and we will not be a part of it, no matter how many cool, badass advertisements you put out showing straight white men. You showed your hand, and now you just don't get the support. And that was obviously a mistake. All right, and that leads us to one of our bigger discussion points today, which is a historic blunder by one of the most successful companies of all time, a blunder that almost collapsed an entire industry essentially overnight. And we'll get to that right after this, which is the fact that you haven't subscribed yet. You haven't left a review, because I see you. I look every week to see who did what, and maybe it seems like you didn't leave a review. Not last week, not this week, not yet. So what I'm asking you right now is stop what you're doing, unless you're driving, and then, you know, pull over. There's somewhere you can: there's a gas station right there, there's a McDonald's, maybe there's a rest stop. Pull over right now. Be safe, don't do it while you're driving. Go to Apple Podcasts, go to Spotify, hit the five-star review button. If you're on Apple Podcasts, go ahead and leave a note.
That actually means way more than just hitting the five-star review button: leave a review, say something nice, say what you like about the podcast. I would appreciate it from the bottom of my heart. All right, so let's get into this. OpenAI essentially almost collapsed overnight after the board fired Sam Altman. Now, if you don't know the backstory of Sam Altman: Sam Altman is the front face of Silicon Valley, and he has been for a very long time. He was the head of Y Combinator, which is a startup incubator in Silicon Valley. For a very long time he was not very well known outside of Silicon Valley, until more recently, with OpenAI, his celebrity just exploded. And most recently, which makes me have some questions about this, Sam Altman was on both Joe Rogan and Lex Fridman not two weeks before this whole thing happened. So he gets one of the biggest celebrity moments and pushes of his face and his name just two weeks before he gets fired by the board, in what is the worst decision-making ever by any company, literally ever. As shown by the fact that 725 people, the last time I looked, signed a letter saying that if the board doesn't reinstate him and then resign in its entirety, all 725 employees will go over to the same company that offered Sam Altman a position as CEO of a new AI venture, which is Microsoft, and we'll read about that in just a second. So let's go ahead and dive into this article together. And I'll give you the very first thing, which is that OpenAI came out with this letter, directly on their website as a blog post. And it reads, not what I wanted: Chief Technology Officer Mira Murati appointed interim CEO to lead OpenAI. Sam Altman departs the company. Search process underway to identify permanent successor.
The Board of Directors of OpenAI, which acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the Board of Directors. Mira Murati, the company's Chief Technology Officer, will serve as interim CEO, effective immediately. A member of OpenAI's leadership team for five years, Mira has played a critical role in OpenAI's evolution into a global AI leader. She brings a unique skill set, an understanding of the company's values, operations, and business, and already leads the company's research, product, and safety functions. Okay, who cares about that. Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. That's the reason they state, and it is so obscure and vague that nobody seems to actually know the real reason, and they won't come out with it, even after being threatened by all sorts of people within OpenAI, including the letter. So because he wasn't candid with us, we're going to fire him. Okay. Probably the worst decision ever. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said: OpenAI was deliberately structured to advance our mission, to ensure that artificial general intelligence benefits all of humanity. The board remains fully committed to serving this mission. We are all grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary.
On this episode of Windows Weekly, Leo, Paul, and Richard dive deep into the latest OpenAI/Microsoft partnership drama involving Sam Altman's position. They also discuss upcoming EU regulations and their impact on Microsoft products, evaluate NVIDIA's record-breaking Q3 earnings, and reminisce about the classic PC FPS Half-Life. And you thought AI was already controversial... On Friday, OpenAI's board suddenly and unexpectedly fired CEO Sam Altman, kicking off several days of unprecedented high drama OpenAI president and board chairman Greg Brockman announced that he was quitting in protest Microsoft announced it had hired Altman and Brockman over the weekend 95 percent of OpenAI employees threatened to quit if Altman did not come back Altman began negotiating his return to OpenAI (and major governance changes) Altman is once again CEO of OpenAI Key takeaway: No matter what happens, Microsoft wins Wrong takeaway: Nothing changed Windows Windows 11 is about to get awesome in the EEA WHY IS WINDOWS 11 ONLY GOING TO BE AWESOME IN THE EEA??? Microsoft confirms that Copilot is coming to Windows 10 too. "No new features, my ass!" Copilot begins rolling out to Windows 10 in Insider Program Release Preview: Copilot in Alt + Tab and on other displays, limited Copilot with local account, DMA compliance, Windows Spotlight changes Canary: Disable Phone Link in the Bluetooth settings, display Teams contacts in the Windows share window when signed in with a Microsoft Entra ID Dev: Narrator improvements, File Explorer fixes (wait for it) Redmond, we have a problem. With Windows Hello Earnings learning NVIDIA continues to soar on AI (Winner) Zoom has settled back down to reality HP stumbles through its fourth quarter and FY2023: AI PCS FTW in late 2024! 
Lenovo stumbles too, but explicitly predicts industry recovery Antitrust Apple, ByteDance, and Meta contest their DMA gatekeeper designations Xbox Half-Life turned 25 last weekend and Valve finally remembered it exists Nvidia's GeForce Now adds Microsoft Store, PC Game Pass, and Ubisoft+ integration - over 1700 games now Amazon Luna comes to France, Italy, and Spain Next Call of Duty leaks! Tips and Picks Tip of the week: Ignite's over, but the videos are forever App pick of the week: Half-Life RunAs Radio this week: Azure Operator Nexus with Jennelle Crothers Brown liquor pick of the week: Willett Wheated 8 Year Bourbon Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to this show at https://twit.tv/shows/windows-weekly Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Sponsors: wix.com/studio?utm_campaign=pa_podcast_studio_10/ 23_TWiT%5Esponsors_cta cachefly.com/twit
OpenAI's weekend coup, plus our thoughts on Microsoft's gambit and their looming risk.
A federal appeals court on Monday issued a ruling that jeopardizes the Voting Rights Act of 1965. In a 2-1 decision, the Court of Appeals for the Eighth Circuit ruled that private groups or individuals can't sue under a key provision of the VRA. We're joined by Jay Willis, Editor-in-Chief of Balls and Strikes, to discuss what comes next. Over in Wisconsin, the state Supreme Court is set to hear arguments on Tuesday in a case that could toss Republican-drawn legislative maps. The lawsuit was filed by 19 Democratic voters in Wisconsin who argue that the maps are proof of gerrymandering because they ensure the GOP has an unfair advantage in State Assembly and Senate races. And in headlines: Microsoft hired Sam Altman and Greg Brockman to lead an A.I. research team, far-right populist Javier Milei was elected to be Argentina's next president, and autoworkers ratified their contract with Detroit carmakers. Show notes: WAD will be taking a break to celebrate Thanksgiving, and will be back with a new episode on Monday, November 27th. Balls and Strikes – https://ballsandstrikes.org/ NASA's “Message In A Bottle” – https://europa.nasa.gov/message-in-a-bottle/sign-on/ What A Day – YouTube – https://www.youtube.com/@whatadaypodcast Crooked Coffee is officially here. Our first blend, What A Morning, is available in medium and dark roasts. Wake up with your own bag at crooked.com/coffee Follow us on Instagram – https://www.instagram.com/crookedmedia/ For a transcript of this episode, please visit crooked.com/whataday
Today's Headlines: On the 45th day of the Gaza war, the U.S. intelligence community has shifted its stance, expressing growing confidence in the accuracy of death toll reports from the Hamas-run Gaza Health Ministry, contrary to earlier skepticism by the Biden administration. Meanwhile, Amos Hochstein, a senior Biden adviser, landed in Israel for talks with Israeli and Lebanese officials, addressing concerns about escalating tensions along Israel's northern border. Back in the U.S., a federal court ruling has the potential to impact the Voting Rights Act in seven states, limiting the ability of individuals and groups to sue under the act. Speaker Mike Johnson plans to release 44,000 hours of footage from the January 6th insurrection, with some portions withheld for sensitive security reasons. As Thanksgiving approaches, a massive storm is expected to sweep the eastern United States, adding challenges to travel during the busiest days of the year. Recent climate developments include the planet surpassing a key threshold, experiencing the first two days with a global average surface temperature above 2 degrees Celsius compared to preindustrial levels. The UN's 2023 Emissions Gap report warns of nearly three degrees Celsius of warming by 2100, even if current emission policies are met. Meanwhile, OpenAI faces internal turmoil following the sudden firing of CEO Sam Altman, prompting Microsoft to hire Altman and former president Greg Brockman to lead a new advanced AI research team. OpenAI employees threaten to quit, leading to an independent investigation into Altman's firing. Resources/Articles mentioned in this episode: Wall Street Journal: U.S. Officials Have Growing Confidence in Death Toll Reports From Gaza Axios: Senior Biden adviser in Israel for talks on preventing war with Lebanon NBC News: Federal court threatens to deal a death blow to the Voting Rights Act AP News: Speaker Johnson says he'll make 44,000 hours of Jan. 
6 footage available to the general public WA Post: Large storm to cause Thanksgiving travel trouble in eastern U.S. Axios: Earth likely briefly passed critical warming threshold on Friday and Saturday Axios: Earth is hurtling toward nearly 3°C of warming AP News: Company that created ChatGPT is thrown into turmoil after Microsoft hires its ousted CEO Morning Announcements is produced by Sami Sage alongside Amanda Duberman and Bridget Schwartz Edited by Grace Hernandez-Johnson Learn more about your ad choices. Visit megaphone.fm/adchoices
It’s been a chaotic few days for the folks at OpenAI, including now-former CEO Sam Altman. To recap, on Friday the company’s board announced it had let Altman go, citing a lack of confidence in his “ability to continue leading OpenAI.” Several staff members then resigned and hundreds of others threatened to do the same if Altman wasn’t reinstated as CEO. That option is pretty much moot now that Microsoft — a major OpenAI investor — has hired Altman to lead a new AI research team along with former President Greg Brockman, who resigned in solidarity. Marketplace’s Lily Jamali spoke with Reed Albergotti, tech editor at Semafor, about what the dramatic ouster was really all about.
Why do 700 employees want to follow in Sam Altman and Greg Brockman's footsteps? Kipp and Kieran dive into the new subplots of the chaotic OpenAI saga. Learn more about what this means for startups right now, the importance of diversification in AI, why you need to own your story, and how to prepare yourself for paradigm shifts in business. Mentions Tweet from Julian Lehr https://twitter.com/julianlehr/status/1726597518212071494 Tweet from Paul Graham https://twitter.com/paulg/status/1726936672875753952 Tweet from Marc Andreessen https://twitter.com/pmarca/status/1726894319255339081 Tweet from Patrick Campbell https://twitter.com/Patticus/status/1726797587875848459 Tweet from Adam D'Angelo https://twitter.com/adamdangelo/status/1717237512077561869 Tweet from Nic Carter https://twitter.com/nic__carter/status/1726958022424215920 Reply tweet from Brian Halligan https://twitter.com/bhalligan/status/1726705311103484323 Tweet from Chris Bakke https://twitter.com/ChrisJBakke/status/1726756262875205678 We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg Twitter: https://twitter.com/matgpod TikTok: https://www.tiktok.com/@matgpod Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934 If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar Kieran Flanagan, https://twitter.com/searchbrat ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.
Episode 520: Shaan Puri (https://twitter.com/ShaanVP) and Sam Parr (https://twitter.com/theSamParr) are dropping an EMERGENCY POD. They've been on Twitter all weekend–ignoring their families–just to bring you this breakdown of the chaos going down at OpenAI. No more small boy spreadsheets, build your business on the free HubSpot CRM: https://mfmpod.link/hrd — Show Notes: (0:00) Intro (2:00) Act One - Sam Altman gets fired from OpenAI (6:30) Act Two - The Board vs The Team (9:30) Act Three: A new CEO is crowned (12:30) Sam Altman: the man, the myth (18:00) Who is Greg Brockman and Ilya Sutskever? (32:30) Emmett Shear enters the chat (48:00) The boys make predictions (50:00) Takeaway: The best and worst of Silicon Valley in 48 hours — Links: • Sam Altman's Twitter - https://twitter.com/sama • Greg Brockman's Blog - https://blog.gregbrockman.com — Check Out Shaan's Stuff: • Try Shepherd Out - https://www.supportshepherd.com/ • Shaan's Personal Assistant System - http://shaanpuri.com/remoteassistant • Power Writing Course - https://maven.com/generalist/writing • Small Boy Newsletter - https://smallboy.co/ • Daily Newsletter - https://www.shaanpuri.com/ Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com/ Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. 
Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. — Other episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Dyrdek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto • #169 - How One Man Started 5 Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More
In a dramatic turn of events, OpenAI's board of directors fired CEO and co-founder Sam Altman. Then they tried to hire him back. Then they announced a former Twitch CEO will lead the company. What the what?See omnystudio.com/listener for privacy information.
The board of OpenAI, the company behind ChatGPT, ousted CEO Sam Altman on Friday. Since then, the board has appointed not one, but two, interim CEOs. And Altman and his OpenAI co-founder Greg Brockman got snatched up by Microsoft. The New York Times' Kevin Roose (@kevinroose) joins Vox's Peter Kafka to talk about what we know and what we don't about this whole situation. Host: Peter Kafka (@pkafka), Senior Editor at Recode More to explore: Subscribe for free to Recode Media, Peter Kafka, one of the media industry's most acclaimed reporters, talks to business titans, journalists, comedians, and more to get their take on today's media landscape. About Recode by Vox: Recode by Vox helps you understand how tech is changing the world — and changing us. Learn more about your ad choices. Visit podcastchoices.com/adchoices
The AI Breakdown: Daily Artificial Intelligence News and Discussions
WTAF is going on? First the board announced former Twitch CEO Emmett Shear as new CEO. Then Microsoft CEO Satya Nadella said Sam Altman and Greg Brockman were running a new division in Redmond. Then employees started rebelling with a letter demanding the board's resignation. Interested in the AI Breakdown Edu/Learning Community Beta? https://bit.ly/aibeta ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
I discuss the latest in the OpenAI saga: this morning Sam Altman and Greg Brockman agreed to join Microsoft, OpenAI appointed a new CEO, Emmett Shear (the former Twitch CEO), and OpenAI employees demanded the board resign. Shockingly, Ilya Sutskever signed the letter as well.
Act 3 of the Dumbest Palace Coup Ever, as employees and investors revolt and the board appears forced to capitulate. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
This Week in Startups is brought to you by… Vanta. Compliance and security shouldn't be a deal-breaker for startups to win new business. Vanta makes it easy for companies to get a SOC 2 report fast. TWiST listeners can get $1,000 off for a limited time at vanta.com/twist Lemon.io. Get access to Lemon Hire, a platform with more than 80,000 pre-vetted engineers that you can interview within 48 hours. Get $2000 off your first hire at http://lemon.io/hire today! The Equinix Startup program offers a hybrid infrastructure solution for startups, including up to $100K in credits and personalized consultations and guidance from the Equinix team. Go to https://deploy.equinix.com/startups/ to apply today. Today's show: Sunny Madra joins Jason for an emergency podcast! They break down the business implications of Altman leaving OpenAI (1:23), speculation around why he was fired (10:27), ongoing developments like Greg Brockman quitting, and more! (23:24) * Time stamps: (0:00) Sunny Madra goes live with Jason for an emergency Pod! (1:23) Sam Altman ousted from OpenAI and Greg Brockman quits! 
(6:44) The board's accusations against Sam Altman (10:27) Speculating “no conflict, no interest” (12:16) Vanta - Get $1000 off your SOC 2 at https://vanta.com/twist (13:22) Sam's extraordinary deal-making, lack of OpenAI shares, and the possible impact from Humane launch (23:24) The OpenAI and Microsoft relationship (29:40) Lemon.io - Get $2000 off your first hire at http://lemon.io/hire (30:49) More speculation and anonymous theories (34:29) The impact of founder-led switching to not founder-led (37:10) Equinix - Join the Equinix Startup Program for up to $100K in credits and much more at https://deploy.equinix.com/startups/ (38:30) Questions from live audience (56:12) Final thoughts * Follow Sunny: https://twitter.com/sundeep * Great 2023 interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow Jason: Twitter: https://twitter.com/jason Instagram: https://www.instagram.com/jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Follow TWiST: Substack: https://twistartups.substack.com Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin * Subscribe to the Founder University Podcast: https://www.founder.university/podcast
Backstabbing. Ambition. Betrayal. After Sam Altman was fired and Greg Brockman quit, NLW digs in to what we've learned so far. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
We're excited to sit down with Marc Kuhn, an inspiring entrepreneur who has built a multimillion-dollar business from humble beginnings. Marc's journey is a testament to the power of perseverance and hard work. In our conversation, Marc shares his personal story, starting from the challenges he faced in the early days of his entrepreneurial venture to the struggles of buying a home. Throughout it all, Marc acknowledges the unwavering support of his wife, who has been his rock. Marc also delves into his fascination with the business world, drawing inspiration from successful entrepreneurs like Greg Brockman, who made a significant impact despite not completing college. He emphasizes the importance of delegation and prioritizing growth over daily tasks, lessons he learned from his own early mistakes. Money management is another crucial aspect of achieving business success, according to Marc. He discusses how both the private sector and the US government utilize effective strategies to leverage money for growth. Marc explores the EOS system, which aligns business values and emphasizes the importance of every dollar saved as a soldier fighting for growth. Media leverage is another key topic Marc highlights. He examines the successful strategy of Dave Ramsey and the impact of carefully chosen words and content creation in marketing. Prepare to shift your perspective on entrepreneurship and the art of leverage as we dive into this game-changing conversation with Marc Kuhn. Ready to connect with Marc Kuhn and learn from his inspiring journey as an entrepreneur? Join him on LinkedIn for valuable insights, industry updates, and networking opportunities. Don't miss out on the chance to connect with a true business visionary.
VISIT OUR WEBSITE: https://lifebridgecapital.com/ Here are ways you can work with us here at Life Bridge Capital: ⚡️START INVESTING TODAY: If you think that real estate syndication may be right for you, contact us today to learn more about our current investment opportunities: https://lifebridgecapital.com/investwithlbc ⚡️Watch on YouTube: https://www.youtube.com/@TheRealEstateSyndicationShow
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.