My fellow pro-growth/progress/abundance Up Wingers,

Once-science-fiction advancements like AI, gene editing, and advanced biotechnology have finally arrived, and they're here to stay. These technologies have seemingly set us on a course towards a brand new future for humanity, one we can hardly even picture today. But progress doesn't happen overnight, and it isn't the result of any one breakthrough.

As Jamie Metzl explains in his new book, Superconvergence: How the Genetics, Biotech, and AI Revolutions will Transform our Lives, Work, and World, tech innovations work alongside and because of one another, bringing about the future right under our noses.

Today on Faster, Please! — The Podcast, I chat with Metzl about how humans have been radically reshaping the world around them since their very beginning, and what the latest and most disruptive technologies mean for the not-too-distant future.

Metzl is a senior fellow of the Atlantic Council and a faculty member of NextMed Health. He has previously held a series of positions in the US government, and was appointed to the World Health Organization's advisory committee on human genome editing in 2019. He is the author of several books, including two sci-fi thrillers and his international bestseller, Hacking Darwin.

In This Episode

* Unstoppable and unpredictable (1:54)
* Normalizing the extraordinary (9:46)
* Engineering intelligence (13:53)
* Distrust of disruption (19:44)
* Risk tolerance (24:08)
* What is a “newnimal”? (30:11)
* Inspired by curiosity (33:42)

Below is a lightly edited transcript of our conversation.

Unstoppable and unpredictable (1:54)

The name of the game for all of this . . . 
is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”

Pethokoukis: Are you telling a story of unstoppable technological momentum or are you telling a story kind of like A Christmas Carol, of a future that could be if we do X, Y, and Z, but no guarantees?

Metzl: The future of technological progress is like the past: It is unstoppable, but that doesn't mean it's predetermined. The path that we have gone over the last 12,000 years, from the domestication of crops to building our civilizations, languages, industrialization — it's a bad metaphor now, but — this train is accelerating. It's moving faster and faster, so that's not up for grabs. It is not up for grabs whether we are going to have the capacities to engineer novel intelligence and re-engineer life — we are doing both of those things now in the early days.

What is up for grabs is how these revolutions will play out, and there are better and worse scenarios that we can imagine. The name of the game for all of this, the reason why I do the work that I do, why I write the books that I write, is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”

Progress has been sort of unstoppable for all that time, though, of course, fits and starts and periods of stagnation —

But when you look back at those fits and starts — the size of the Black Plague or World War II, or wiping out Berlin, and Dresden, and Tokyo, and Hiroshima, and Nagasaki — in spite of all of those things, it's one-directional. Our technologies have gotten more powerful. We've developed more capacities, greater ability to manipulate the world around us, so there will be fits and starts but, as I said, this train is moving. 
That's why these conversations are so important, because there's so much that we can, and I believe must, do now.

There's a widely held opinion that progress over the past 50 years has been slower than people might have expected in the late 1960s, but we seem to have some technologies now for which the momentum seems pretty unstoppable. Of course, a lot of people thought, after ChatGPT came out, that superintelligence would happen within six months. That didn't happen. After CRISPR arrived, I'm sure there were lots of people who expected miracle cures right away. What makes you think that these technologies will look a lot different, and our world will look a lot different than they do right now, by decade's end?

They certainly will look a lot different, but there's also a lot of hype around these technologies. You use the word “superintelligence,” which is probably a good word. I don't like the words “artificial intelligence,” and I have a six-letter framing for what I believe about AGI — artificial general intelligence — and that is: AGI is BS. We have no idea what human intelligence is, if we define our own intelligence so narrowly that it's just this very narrow form of thinking and then we say, “Wow, we have these machines that are mining the entirety of digitized human cultural history, and wow, they're so brilliant, they can write poems — poems in languages that our ancestors have invented based on the work of humans.” So we humans need to be very careful not to belittle ourselves.

But we're already seeing, across the board, if you say, “Is CRISPR on its own going to fundamentally transform all of life?” The answer to that is absolutely no. My last book was about genetic engineering. If genetic engineering is a pie, genome editing is a slice and CRISPR is just a tiny little sliver of that slice. But the reason why my new book is called Superconvergence, the entire thesis is that all of these technologies inspire, and influence, and are embedded in each other. 
We had the agricultural revolution 12,000 years ago, as I mentioned. That's what led to these other innovations like civilization, like writing, and then the ancient writing codes are the foundation of computer codes which underpin our machine learning and AI systems that are allowing us to unlock secrets of the natural world.

People are imagining that AI equals ChatGPT, but that's really not the case (AI equals ChatGPT like electricity equals the power station). The story of AI is empowering us to do all of these other things. As a general-purpose technology, AI is already developing the capacity to help us just do basic things faster. Computer coding is the archetypal example of that. Over the last couple of years, the speed of coding has improved by about 50 percent for the most advanced human coders, and as we code, our coding algorithms are learning about the process of coding. We're just laying a foundation for all of these other things.

That's what I call “boring AI.” People are imagining exciting AI, like there's a magic AI button and you just press it and AI cures cancer. That's not how it's going to work. Boring AI is going to be embedded in human resource management. It's going to be embedded in just giving us a lot of capabilities to do things better and faster than we've done them before. It doesn't mean that AIs are going to replace us. There are a lot of things that humans do that machines can just do better than we can. That's why most of us aren't doing hunting, or gathering, or farming, because we developed machines and other technologies to feed us with much less human labor input, and we have used that reallocation of our time and energy to write books and invent other things. 
That's going to happen here.

The name of the game for us humans is two things: One is figuring out what it means to be a great human and over-indexing on that, and two is laying the foundation so that these multiple overlapping revolutions, as they play out in multiple fields, can be governed wisely. That is the name of the game. So when people say, “Is it going to change our lives?” I think people are thinking of it in the wrong way. This shirt that I'm wearing, this same shirt five years from now, you'll say, “Well, is there AI in your shirt?” — because it doesn't look like AI — and what I'm going to say is “Yes, in the manufacturing of this thread, in the management of the supply chain, in figuring out who gets to go on vacation, when, in the company that's making these buttons.” It's all these little things. People will just call it progress. People are imagining magic AI, but all of these interwoven technologies will just feel like accelerating progress, and that will just feel like life.

Normalizing the extraordinary (9:46)

20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life.

What you're describing is a technology that economists would call a general-purpose technology. It's a technology embedded in everything, it's everywhere in the economy, much as electricity is. What you call “boring AI,” the way I think about it is: I was just reading a Wall Street Journal story about Applebee's talking about using AI for more efficient customer loyalty programs, and how they would use machine vision to look at their tables to see if they were cleaned well enough between customers. That, to people, probably doesn't seem particularly science-fictional. It doesn't seem world-changing. 
Of course, faster growth and a more productive economy is built on those little things, but I guess I would still call those “boring AI.” What to me definitely is not boring AI is the sort of combinatorial aspect you're describing, where AI helps the scientific discovery process and then interweaves with other technologies in kind of the classic Paul Romer combinatorial way. I think a lot of people, if they look back at their lives 20 or 30 years ago, they would say, “Okay, more screen time, but probably pretty much the same.”

I don't think they would say that. 20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life. Imagine if you had told ourselves 30 years ago, “You're going to have access to all the world's knowledge in your pocket.” You and I are — based on appearances, although you look so youthful — roughly the same age, so you probably remember, “Hurry, it's long distance! Run down the stairs!”

We live in this radical science-fiction world that has been normalized, and even the things that you are mentioning: if you open up your newsfeed and you see that there's been this incredible innovation in cancer care, whether it's gene therapy, or autoimmune stuff, or whatever, you're not thinking, “Oh, that was AI that did that,” because you read the thing and it's like “These researchers at University of X,” but it is AI, it is electricity, it is agriculture. It's because our ancestors learned how to plant seeds and grow plants where they were settled and not have to do hunting and gathering that you have had this innovation that is keeping your grandmother alive for another 10 years.

What you're describing is what I call “magical AI,” and that's not how it works. 
Some of the stuff is magical: the Jetsons stuff, self-driving cars, autopilot airplanes. We live in a world of magical science fiction, and then whenever something shows up, we think, “Oh yeah, no big deal.” We got ChatGPT, and now ChatGPT is no big deal?

If you had taken your grandparents, your parents, and just said, “Hey, I'm going to put you behind a screen. You're going to have a conversation with something, with a voice, and you're going to do it for five hours,” and let's say they'd never heard of computers and it was all this pleasant voice. In the end they said, “You just had a five-hour conversation with a non-human, and it told you about everything and all of human history, and it wrote poems, and it gave you a recipe for kale mush or whatever you're eating,” you'd say, “Wow!” I think that we are living in that sci-fi world. It's going to get faster, but with every innovation, we're not going to say, “Oh, AI did that.” We're just going to say, “Oh, that happened.”

Engineering intelligence (13:53)

I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence . . .

I sometimes feel in my own writing, and as I peruse the media, like I read a lot more about AI, the digital economy, information technology, and I feel like I certainly write much less about genetic engineering, biotechnology, which obviously is a key theme in your book. What am I missing right now that's happening that may seem normal five years from now, 10 years from now, but if I were to read about it or understand it now, I'd think, “Well, that is kind of amazing”?

My answer to that is kind of everything. 
As I said before, we are at the very beginning of this new era of life on earth where one species, among the billions that have ever lived, suddenly has the increasing ability to engineer novel intelligence and re-engineer life.

We have evolved by the Darwinian processes of random mutation and natural selection, and we are beginning a new phase of life, a new Cambrian Revolution, where we are creating, certainly with this novel intelligence that we are birthing — I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence, just like dolphin intelligence is a different form of intelligence than human intelligence, although we are related because of our common mammalian root. That's what's happening here, and our brain function is roughly the same as it's been, certainly at least for tens of thousands of years, but the AI machine intelligence is getting smarter, and we're just experiencing it.

It's become so normalized that you can even ask that question. We live in a world where we have these AI systems that are just doing more and cooler stuff every day: driving cars, you talked about discoveries, we have self-driving laboratories that are increasingly autonomous. We have machines that are increasingly writing their own code. We live in a world where machine intelligence has been boxed in these kinds of places like computers, but very soon it's coming out into the world. The AI revolution, and machine-learning revolution, and the robotics revolution are going to be intersecting relatively soon in meaningful ways.

AI has advanced more quickly than robotics because it hasn't had to navigate the real world like we have. That's why I'm always so mindful of not denigrating who we are and what we stand for. Four billion years of evolution is a long time. 
We've learned a lot along the way, so it's going to be hard to take the AI and have it out functioning in the world, interacting in this world that we have largely, but not exclusively, created. But that's all coming.

Some specific things: 30 years from now, my guess is many people who are listening to this podcast will be fornicating regularly with robots, and it'll be totally normal and comfortable.

. . . I think some people are going to be put off by that.

Yeah, some people will be put off and some people will be turned on. All I'm saying is it's going to be a mix of different —

Jamie, what I would like to do is be 90 years old and be able to still take long walks, be sharp, not have my knee screaming at me. That's what I would like. Can I expect that?

I think this can help, but you have to decide how to behave with your personalized robot.

That's what I want. I'm looking for the alleviation of human suffering. Will there be a world of less human suffering?

We live in that world of less human suffering! If you just look at any metric of anything, this is the best time to be alive, and it's getting better and better. . . We're living longer, we're living healthier, we're better educated, we're more informed, we have access to more and better food. This is by far the best time to be alive, and if we don't massively screw it up, and frankly, even if we do, to a certain extent, it'll continue to get better.

I write about this in Superconvergence: we're moving in healthcare from our world of generalized healthcare based on population averages to precision healthcare, to predictive and preventive. In education, some of us, like myself, have had access to great education, but not everybody has that. We're going to have access to fantastic education, personalized education everywhere for students based on their own styles of learning, and capacities, and native languages. 
This is a wonderful, exciting time. We're going to get all of those things that we can hope for and we're going to get a lot of things that we can't even imagine. And there are going to be very real potential dangers, and if we want to have the good story, as I keep saying, and not have the bad story, now is the time where we need to start making the real investments.

Distrust of disruption (19:44)

Your job is the disruption of this thing that's come before. . . stopping the advance of progress is just not one of our options.

I think some people, when they hear about all these changes, would think what you're telling them is “the bad story.”

I just talked about fornicating with robots, it's the bad story?

Yeah, some people might find that a bad story. But listen, we live in an age where people have recoiled against the disruption of trade, for instance. People are very allergic to the idea of economic disruption. I think about all the debate we had over stem cell therapy back in the early 2000s, 2002. There certainly is going to be a certain contingent for whom what they hear you saying is: you're going to change what it means to be a human. You're going to change what it means to have a job. I don't know if I want all this. I'm not asking for all this.

And we've seen where that pushback has greatly changed, for instance, how we trade with other nations. Are you concerned that that pushback could create regulatory or legislative obstacles to the kind of future you're talking about?

All of those things, and some of that pushback, frankly, is healthy. 
These are fundamental changes, but those people who are pushing back are benchmarking their own lives to the world that they were born into, in most cases without recognizing how radical those lives already are. Now, if the people you're talking about are hunter-gatherers in some remote place who've not gone through the domestication of agriculture, and industrialization, and all of these kinds of things, that's like, wow, you're going from being this little hunter-gatherer tribe in the middle of Atlantis and all of a sudden you're going to be in a world of gene therapy and shifting trading patterns.

But the people who are saying, “Well, my job as a computer programmer, as a whatever, is going to get disrupted,” your job is the disruption. Your job is the disruption of this thing that's come before. As I said at the start of our conversation, stopping the advance of progress is just not one of our options.

We could do it, and societies have done it before, and they've lost their economies, they've lost their vitality. Just go to Europe: Europe is having this crisis now because for decades they saw their economy and their society, frankly, as a museum to the past, where they didn't want to change, they didn't want to think about the implications of new technologies and new trends. It's why I am just back from Italy. It's wonderful, I love visiting these little farms where they're milking the goats like they've done for centuries and making cheese they've made for centuries, but their economies are shrinking with incredible rapidity while ours and the Chinese are growing.

Everybody wants to hold onto the thing that they know. It's a very natural thing, and I'm not saying we should disregard those views, but the societies that have clung too tightly to the way things were tend to lose their vitality and, ultimately, their freedom. That's what you see in the war with Russia and Ukraine. 
Let's just say there are people in Ukraine who said, “Let's not embrace new disruptive technologies.” Their country would disappear. We live in a competitive world where you can opt out, like Europe opted out, solely because they lived under the US security umbrella. And now that President Trump is threatening the withdrawal of that security umbrella, Europe is being forced to race not into the future, but to race into the present.

Risk tolerance (24:08)

. . . experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else.

I certainly understand that sort of analogy, and compared to Europe, we look like a far more risk-embracing kind of society. Yet I wonder how resilient that attitude is — because obviously I would've said the same thing maybe in 1968 about the United States, and yet a decade later we stopped building nuclear reactors — I wonder how resilient we are to anything going wrong, like something going on with an AI system where somebody dies. Or something that looks like a cure that kills someone. Or even, there seems to be this nuclear power revival, how resilient would that be to any kind of accident? How resilient do you think we are right now to the inevitable bumps along the way?

It depends on who you mean by “we.” Let's just say “we” means America, because a lot of these dawns aren't the first ones. You talked about gene therapy. This is the second dawn of gene therapy. The first dawn came crashing to a halt in 1999, when a young man at the University of Pennsylvania died as a result of an error carried out by the treating physicians using what had seemed like a revolutionary gene therapy. It's the second dawn of AI after there was a lot of disappointment. There will be accidents . . .

Let's just say, hypothetically, there's an accident . . . some kind of self-driving car is going to kill somebody or whatever. 
And let's say there's a political movement, a new Luddite movement, that is successful, and let's just say that every self-driving car in America is attacked and destroyed by mobs and that all of the companies that are making these cars are no longer able to produce or deploy those cars. That's going to be bad for self-driving cars in America — it's not going to be bad for self-driving cars. . . They're going to be developed in some other place. There are lots of societies that have lost their vitality. That's the story of every empire that we read about in history books: there was political corruption, sclerosis. That's very much an option.

I'm a patriotic American and I hope America leads these revolutions, as long as we can maintain our values, for many, many centuries to come, but for that to happen, we need to invest in that. Part of that is investing now so that people don't feel that they are powerless victims of these trends they have no influence over.

That's why all of my work is about engaging people in the conversation about how we deploy these technologies. Because experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else. What we need to do is have broad, inclusive conversations, engage people in all kinds of processes, including governance and political processes. That's why I write the books that I do. That's why I do podcast interviews like this. My Joe Rogan interviews have reached many tens of millions of people — I know you told me before that you're much bigger than Joe Rogan, so I imagine this interview will reach more than that.

I'm quite aspirational.

Yeah, but that's the name of the game. 
With my last book tour, in the same week I spoke to the top scientists at Lawrence Livermore National Laboratory and the seventh and eighth graders at the Solomon Schechter Hebrew Academy of New Jersey, and they asked essentially the exact same questions about the future of human genetic engineering. These are basic human questions that everybody can understand, and everybody can and should play a role and have a voice in determining the big decisions and the future of our species.

To what extent is the future you're talking about dependent on continued AI advances? If this is as good as it gets, does that change the outlook at all?

One, there's no conceivable way that this is as good as it gets. LLMs, large language models, are not the last word on algorithms (there will be many other philosophies of algorithms), but let's just say that LLMs are the end of the road, that we've just figured out this one thing, and that's all we ever have. Just using the technologies that we have in more creative ways is going to unleash incredible progress. But it's certain that we will continue to have innovations across the field of computer science, in energy production, in algorithm development, in the ways that we have to generate and analyze massive data pools. So we don't need anything more to have the revolution that's already started, but we will have more.

Politics always, ultimately, can trump everything if we get it wrong. But even then, even if . . . let's just say that the United States becomes an authoritarian, totalitarian hellhole. One, there will be technological innovation like we're seeing now even in China, and two, these are decentralized technologies, so free people elsewhere — maybe it'll be Europe, maybe it'll be Africa or whatever — will deploy these technologies and use them. These are agnostic technologies. 
They don't have, as I said at the start, an inevitable outcome, and that's why the name of the game for us is to weave our best values into this journey.

What is a “newnimal”? (30:11)

. . . we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.

When I was preparing for this interview and my research assistant was preparing, I said, “We have to have a question about bio-engineered new animals.” One, because I couldn't pronounce your name for these . . . newminals? So pronounce that name and tell me why we want these.

It's a made-up word, so you can pronounce it however you want. “Newnimals” is as good as anything.

We already live in a world of bio-engineered animals. Go back 50,000 years: find me a dog, find me a corn that is recognizable, find me rice, find me wheat, find me a cow that looks remotely like the cow in your local dairy. We already live in that world, it's just that people assume that our bioengineered world is some kind of state of nature. We already live in a world where the size of a broiler chicken has tripled over the last 70 years. What we have would have been unrecognizable to our grandparents.

We are already genetically modifying animals through breeding, and now we're at the beginning of wanting to have whatever those same modifications are, whether it's producing more milk, producing more meat, living in hotter environments and not dying, or whatever it is that we're aiming for in these animals that we have for a very long time seen not as ends in themselves, but as means to the ultimate end of our consumption.

We're now in the early stages of xenotransplantation, modifying the hearts, and livers, and kidneys of pigs so they can be used for human transplantation. I met one of the women who has received — and seems, so far, to be thriving — a genetically modified pig kidney. 
We have 110,000 people in the United States on the waiting list for transplant organs. I really want these people not just to survive, but to survive and thrive. That's another area where we can grow.

Right now . . . in the world, we slaughter about 93 billion land animals per year. We consume 200 million metric tons of fish. That's a lot of murder, that's a lot of risk of disease. It's a lot of deforestation and destruction of the oceans. We can already do this, but if and when we can grow bioidentical animal products at scale without all of these negative externalities, whether it's climate change, environmental change, cruelty, deforestation, or increased pandemic risk, what a wonderful thing to do!

So we have these technologies, and you mentioned that people are worried about them, but the reason people are worried about them is they're imagining that right now we live in some kind of unfettered state of nature and we're going to ruin it. But that's why I say we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.

Inspired by curiosity (33:42)

. . . the people who I love and most admire are the people who are just insatiably curious . . .

What sort of forward thinkers, or futurists, or strategic thinkers of the past do you model yourself on, do you think are still worth reading, inspired you?

Oh my God, so many, and the people who I love and most admire are the people who are just insatiably curious, who are saying, “I'm going to just look at the world, I'm going to collect data, and I know that everybody says X, but it may be true, it may not be true.” That is the entire history of science. 
That's Galileo, that's Charles Darwin, who just went around and said, "Hey, with an open mind, how am I going to look at the world and come up with theses?" And then he thought, "Oh s**t, this story that I'm coming up with for how life advances is fundamentally different from what everybody in my society believes and organizes their lives around." Meaning, in my mind, that's the model, and there are so many people, and that's the great thing about being human.

That's what's so exciting about this moment: everybody has access to these super-empowered tools. We have eight billion humans, but about two billion of those people are just kind of locked out because of crappy education, poor water sanitation, and electricity. We're on the verge of a world where everybody who has a smartphone has the possibility of getting a world-class personalized education in their own language. How many new innovations will we have when little kids who were in slums in India, or in Pakistan, or in Nairobi, or wherever, who have promise, can educate themselves, and grow up and cure cancers, or invent new machines, or new algorithms? This is pretty exciting.

The summary: the people from the past are kind of like the people in the present that I admire the most, the people who are just insatiably curious and just learning, and now we have a real opportunity so that everybody can be their own Darwin.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* AI Hype Is Proving to Be a Solow's Paradox - Bberg Opinion
* Trump Considers Naming Next Fed Chair Early in Bid to Undermine Powell - WSJ
* Who Needs the G7? - PS
* Advances in AI will boost productivity, living standards over time - Dallas Fed
* Industrial Policy via Venture Capital - SSRN
* Economic Sentiment and the Role of the Labor Market - St. Louis Fed

▶ Business
* AI valuations are verging on the unhinged - Economist
* Nvidia shares hit record high on renewed AI optimism - FT
* OpenAI, Microsoft Rift Hinges on How Smart AI Can Get - WSJ
* Takeaways From Hard Fork's Interview With OpenAI's Sam Altman - NYT
* Thatcher's legacy endures in Labour's industrial strategy - FT
* Reddit vows to stay human to emerge a winner from artificial intelligence - FT

▶ Policy/Politics
* Anthropic destroyed millions of print books to build its AI models - Ars
* Don't Let Silicon Valley Move Fast and Break Children's Minds - NYT Opinion
* Is DOGE doomed to fail? Some experts are ready to call it. - Ars
* The US is failing its green tech 'Sputnik moment' - FT

▶ AI/Digital
* Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce - Arxiv
* Is the Fed Ready for an AI Economy? - WSJ Opinion
* How Much Energy Does Your AI Prompt Use? I Went to a Data Center to Find Out. - WSJ
* Meta Poaches Three OpenAI Researchers - WSJ
* AI Agents Are Getting Better at Writing Code—and Hacking It as Well - Wired
* Exploring the Capabilities of the Frontier Large Language Models for Nuclear Energy Research - Arxiv

▶ Biotech/Health
* Google's new AI will help researchers understand how our genes work - MIT
* Does using ChatGPT change your brain activity? Study sparks debate - Nature
* We cure cancer with genetic engineering but ban it on the farm. - ImmunoLogic
* ChatGPT and OCD are a dangerous combo - Vox

▶ Clean Energy/Climate
* Is It Too Soon for Ocean-Based Carbon Credits? - Heatmap
* The AI Boom Can Give Rooftop Solar a New Pitch - Bberg Opinion

▶ Robotics/Drones/AVs
* Tesla's Robotaxi Launch Shows Google's Waymo Is Worth More Than $45 Billion - WSJ
* OpenExo: An open-source modular exoskeleton to augment human function - Science Robotics

▶ Space/Transportation
* Bezos and Blue Origin Try to Capitalize on Trump-Musk Split - WSJ
* Giant asteroid could crash into moon in 2032, firing debris towards Earth - The Guardian

▶ Up Wing/Down Wing
* New Yorkers Vote to Make Their Housing Shortage Worse - WSJ
* We Need More Millionaires and Billionaires in Latin America - Bberg Opinion

▶ Substacks/Newsletters
* Student visas are a critical pipeline for high-skilled, highly-paid talent - Agglomerations
* State Power Without State Capacity - Breakthrough Journal

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
74% of CEOs think their jobs are on the line because of AI. Not because AI might replace them, but because failing to implement it successfully could cost them everything.

Merlin Bise, CTO of Inbenta and former Head of Technology at a firm acquired by the London Stock Exchange, joins us to share how Inbenta is helping enterprises modernise their customer experience. Merlin explains that so many AI deployments fail, not because the technology is lacking, but because companies often bet on the wrong frameworks, overlook data foundations, or underestimate the importance of testing. We explore how traditional rules-based systems give way to agentic frameworks that can reason, triage ambiguous queries, and even correct automation gaps in real time. Merlin walks us through the journey many enterprises take: beginning with deterministic rules, evolving to AI-powered agents, and ultimately orchestrating complex automation through agentic manager systems that oversee and improve themselves.

Security and customer experience are front and centre in this episode. Merlin breaks down the cybersecurity concerns that make enterprises hesitate and why, in most cases, those fears are rooted more in perception than reality.

Finally, we reflect on the broader trajectory of AI. While the race toward AGI dominates headlines, Merlin argues that the tools enterprises need to radically improve productivity are already here. The challenge is implementing what exists with purpose and precision.

Shownotes:
Check out Inbenta: https://www.inbenta.com/
Subscribe to VUX World: https://vuxworld.typeform.com/to/Qlo5aaeW
Subscribe to The AI Ultimatum Substack: https://open.substack.com/pub/kanesimms
Get in touch with Kane on LinkedIn: https://www.linkedin.com/in/kanesimms/

Hosted on Acast. See acast.com/privacy for more information.
Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement, and how that's repeating, at an accelerated pace, many of the problems we're belatedly trying to address with social media companies.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Nitasha wrote about how chatbots are messing with people's minds.
Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
AI companies are facing ongoing lawsuits over harmful content.

Support the show
Have you ever felt a quiet nudge, an inner whisper that there's more to your life than what it currently looks like?

Many of us reach a point where outward success no longer matches our inner truth. This episode explores that subtle discontent and how listening to it can lead to profound transformation, even if it begins with nothing more than a hesitant "yes."

In this personal reflection, you will:
- Discover how recognising and responding to a whisper within can shift the course of your life.
- Learn why clarity doesn't require a master plan, just presence and honest self-reflection.
- Hear Agi's personal journey from dentistry to podcasting and how saying yes to the unknown opened new purpose and possibility.

If you're sensing something deeper stirring within you, press play to uncover what might be waiting when you finally listen.

VALUABLE RESOURCES:
Click here to get in touch with Agi and discuss mentoring/coaching.

You can find the previous episodes of this series here: #489, #495, #501, #505, #509

Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
Artificial Intelligence isn't coming — it's already here. And it's changing everything. In the latest episode of the Wealth on the Beach Podcast, I sat down with AI strategist Adriana to explore:
✅ Will AI take your job in the next 3–5 years?
✅ What is AGI — and why are tech leaders warning us?
✅ Is Universal Basic Income a solution or a silent threat?
✅ What makes us human in a machine-driven future?

"You will become irrelevant if you don't pivot." — A line from the episode that hit hard.

This isn't just about AI. It's about YOU, your future, and how to stay ahead of the curve. Let's reclaim the future — before it's too late.
Join us on the latest episode, hosted by Jared S. Taylor!

Our Guest: Max Marchione, Co-Founder at Superpower.

What you'll get out of this episode:
- Building a Healthcare Super App: Superpower offers an AI-driven healthcare membership that includes 100+ blood biomarker tests, data integration, and holistic care.
- Vision of Widespread Access: Aims to create a health membership as universal as Amazon Prime, making preventive healthcare accessible and affordable.
- Founder Insights on Innovation: Max Marchione emphasizes the importance of ignoring outdated advice and maintaining conviction in forward-thinking solutions.
- Entrepreneurial Wisdom: Advises founders to build businesses that are resilient to advancements like AGI, focusing on immediate revenue and customer obsession.
- Personal Routines and Hacks: Max shares his productivity rituals, nutritional hacks (including a powerhouse smoothie), and mental resilience mantra.

To learn more about Superpower:
Website: https://superpower.com/
LinkedIn: https://www.linkedin.com/company/superpower-health/

Our sponsors for this episode are:
Sage Growth Partners: https://www.sage-growth.com/
Quantum Health: https://www.quantum-health.com/

Show and Host's Socials:
Slice of Healthcare LinkedIn: https://www.linkedin.com/company/sliceofhealthcare/
Jared S Taylor LinkedIn: https://www.linkedin.com/in/jaredstaylor/

WHAT IS SLICE OF HEALTHCARE?
The go-to site for digital health executive/provider interviews, technology updates, and industry news. Listened to in 65+ countries.
The era of making AI smarter just by making it bigger is ending. But that doesn't mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world "for less than the price of a can of Coke." But unfortunately, that's over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they're giving existing models dramatically more time to think — leading to the rise of the "reasoning models" that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI's o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica's worth of reasoning to solve individual problems, at a cost of over $1,000 per question.

This isn't just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and the resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:
Cold open (00:00:00)
Toby Ord is back — for a 4th time! (00:01:20)
Everything has changed (and changed again) since 2020 (00:01:37)
Is x-risk up or down? (00:07:47)
The new scaling era: compute at inference (00:09:12)
Inference scaling means less concentration (00:31:21)
Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
The new regime makes 'compute governance' harder (00:41:08)
How 'IDA' might let AI blast past human level — or not (00:50:14)
Reinforcement learning brings back 'reward hacking' agents (01:04:56)
Will we get warning shots? Will they even help? (01:14:41)
The scaling paradox (01:22:09)
Misleading charts from AI companies (01:30:55)
Policy debates should dream much bigger (01:43:04)
Scientific moratoriums have worked before (01:56:04)
Might AI 'go rogue' early on? (02:13:16)
Lamps are regulated much more than AI (02:20:55)
Companies made a strategic error shooting down SB 1047 (02:29:57)
Companies should build in emergency brakes for their AI (02:35:49)
Toby's bottom lines (02:44:32)

Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we haven't solved basic cognitive problems from his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.

Sponsor messages:
========
Google Gemini: Google Gemini features Veo3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.com
Tufa AI Labs are hiring for ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard! https://tufalabs.ai/
========

Guest Powerhouse
Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic, who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)
Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)
Dan Hendrycks - Director of the Center for AI Safety, who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)

Transcript: http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno

TOC:
Introduction: The AI Arms Race
00:00:04 - The Danger of Automated AI R&D
00:00:43 - The Rationalization: "If we don't, someone else will"
00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)
00:02:55 - Guest Introductions
The Philosophical Stakes
00:04:13 - What is the Positive Vision for AGI?
00:07:00 - The Abundance Scenario: Superintelligent Economy
00:09:06 - Differentiating AGI and Superintelligence (ASI)
00:11:41 - Sam Altman: "A Decade in a Month"
00:14:47 - Economic Inequality & The UBI Problem
Policy and Red Lines
00:17:13 - The Pause Letter: Stopping vs. Delaying AI
00:20:03 - Defining Three Concrete Red Lines for AI Development
00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"
00:31:15 - Transparency and Public Perception
00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"
Forecasting AGI: Timelines and Methodologies
00:42:29 - The Case for Short Timelines (Median 2028)
00:47:00 - Scaling Limits: Compute, Data, and Money
00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding
00:53:15 - The 10^45 FLOP Thought Experiment
The Great Debate: Cognitive Gaps vs. Scaling
00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition
01:00:46 - Current AI Can't Play Chess Reliably
01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?
01:16:13 - The Multi-Dimensional Nature of Intelligence
01:24:26 - The Benchmark Debate: Data Contamination and Reliability
01:31:15 - The Superhuman Coder Milestone Debate
01:37:45 - The Driverless Car Analogy
The Alignment Problem
01:39:45 - Has Any Progress Been Made on Alignment?
01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"
01:46:30 - Distinguishing Model vs. Process Alignment
Scenarios and Conclusions
01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift
01:53:35 - Will AI Become Jeff Dean?
01:58:41 - Takeoff Speeds and Exceeding Human Intelligence
02:03:19 - Final Disagreements and Closing Remarks

REFS:
Gary Marcus (2001) - The Algebraic Mind https://mitpress.mit.edu/9780262632683/the-algebraic-mind/ 00:59:00
Gary Marcus & Ernest Davis (2019) - Rebooting AI https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/ 01:31:59
Gary Marcus (2024) - Taming Silicon Valley https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/ 00:03:01
digital kompakt | Business & Digitalisierung von Startup bis Corporate
Dive into the fascinating world of the Singularity! In conversation with Joel Kaczmarek, Lars Jankowfsky, founder of Gradion, reveals the secrets behind exponential technological growth. Together they explore the opportunities and challenges awaiting us in the coming decades. From the medical revolution to interplanetary travel: what role does artificial intelligence play, and how is it changing our lives? Let yourself be inspired and find out why the future is more exciting than you think!

You'll learn...
...how Lars Jankowfsky assesses the opportunities and risks of the Singularity
...what role artificial intelligence plays in medical research
...how advances in robotics are changing jobs and society
...what energy questions arise with exponential technological growth
...why the future of capitalism and society must be rethought

||||| PEOPLE |||||
What makes a good AI benchmark? Greg Kamradt joins Demetrios to break it down—from human-easy, AI-hard puzzles to wild new games that test how fast models can truly learn. They talk hidden datasets, compute tradeoffs, and why benchmarks might be our best bet for tracking progress toward AGI. It's nerdy, strategic, and surprisingly philosophical.

// Bio
Greg has mentored thousands of developers and founders, empowering them to build AI-centric applications. By crafting tutorial-based content, Greg aims to guide everyone from seasoned builders to ambitious indie hackers.

Greg partners with companies during their product launches, feature enhancements, and funding rounds. His objective is to cultivate not just awareness, but also a practical understanding of how to optimally utilize a company's tools. He previously led Growth @ Salesforce for Sales & Service Clouds, in addition to being early on at Digits, a FinTech Series-C company.

// Related Links
Website: https://gregkamradt.com/
YouTube channel: https://www.youtube.com/@DataIndependent

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Greg on LinkedIn: /gregkamradt/

Timestamps:
[00:00] Human-Easy, AI-Hard
[05:25] When the Model Shocks Everyone
[06:39] "Let's Circle Back on That Benchmark…"
[09:50] Want Better AI? Pay the Compute Bill
[14:10] Can We Define Intelligence by How Fast You Learn?
[16:42] Still Waiting on That Algorithmic Breakthrough
[20:00] LangChain Was Just the Beginning
[24:23] Start With Humans, End With AGI
[29:01] What If Reality's Just... What It Seems?
[32:21] AI Needs Fewer Vibes, More Predictions
[36:02] Defining Intelligence (No Pressure)
[36:41] AI Building AI? Yep, We're Going There
[40:13] Open Source vs. Prize Money Drama
[43:05] Architecting the ARC Challenge
[46:38] Agent 57 and the Atari Gauntlet
Are you doing all the right things to grow, but still feel stuck?

In today's fast-paced world, personal and professional growth is often measured by how much we do. But what if the real key to transformation lies not in doing more, but in being more? This episode with bestselling author and vertical development expert Dr. Ryan Gottfredson explores the often-overlooked "being side" of personal evolution, helping you uncover why success sometimes feels out of reach despite your best efforts.

- Discover the crucial difference between horizontal and vertical development, and why knowing it could change everything.
- Learn practical strategies to expand your "window of tolerance" and develop emotional resilience.
- Understand how mindsets, trauma, and internal programming shape your potential more than any skillset can.

Press play now to learn how upgrading your inner world can unlock the transformation you've been searching for.

KEY POINTS AND TIMESTAMPS:
02:01 - Reconnecting After Five Years: A Journey of Growth
05:42 - Doing Better vs. Being Better: Understanding the Core Distinction
09:35 - Recognizing When You're Stuck: The Role of the Being Side
13:20 - Window of Tolerance: A Measure of Emotional Capacity
15:34 - Vertical vs. Horizontal Development: Tools vs. Transformation
18:15 - The Three Steps to Elevating Your Being
19:59 - Surface-Level Practices: Breathing, Meditation, and More
21:59 - Deep-Level Work: Mindsets and Inner Programming
25:10 - The Deepest Work: Trauma, Culture, Neurodivergence
31:37 - The Foundation of Self-Awareness and Real Transformation

MEMORABLE QUOTE:
"Vertical development isn't about adding tools to our tool belt. It's about upgrading the person wearing the tool belt."

VALUABLE RESOURCES:
Ryan Gottfredson's website: https://ryangottfredson.com/
Click here to get in touch with Agi and discuss mentoring/coaching.

Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
Ben Goertzel is a computer scientist, AI researcher, speaker, author, and entrepreneur who coined the term AGI in 2003 and has contributed significantly to the field. He is the founder and CEO of SingularityNET, a decentralised AI platform that leverages blockchain technology to provide open access to AI services, fostering collaboration across industries. With its AI marketplace, developers can monetise their innovations using the AGIX token.

Useful Links and Resources
https://www.linkedin.com/in/bengoertzel
https://aiforgood.itu.int/speaker/ben-goertzel/
https://goertzel.org/bio.htm

About citiesabc.com
https://www.citiesabc.com/

About businessabc.net
https://www.businessabc.net/

About fashionabc.org
https://www.fashionabc.org/

About Dinis Guarda
https://www.dinisguarda.com/
https://businessabc.net/wiki/dinis-guarda

Business Inquiries - info@ztudium.com

Support the show
Peter Deng has led product teams at OpenAI, Instagram, Uber, Facebook, Airtable, and Oculus and helped build products used by billions—including Facebook's News Feed, the standalone Messenger app, Instagram filters, Uber Reserve, ChatGPT, and more. Currently he's investing in early-stage founders at Felicis. In this episode, Peter dives into his most valuable lessons from building and scaling some of tech's most iconic products and companies.

What you'll learn:
1. Peter's one-sentence test for hiring superstars
2. Why your product (probably) doesn't matter
3. Why you don't need a tech breakthrough to build a huge business
4. The five PM archetypes, and how to build a team of Avengers
5. Counterintuitive lessons on growing products from 0 to 1, and 1 to 100
6. The importance of data flywheels and workflows

Brought to you by:
Paragon—Ship every SaaS integration your customers want
Pragmatic Institute—Industry-recognized product, marketing, and AI training and certifications
Contentsquare—Create better digital experiences

Where to find Peter Deng:
• X: https://x.com/pxd
• LinkedIn: https://www.linkedin.com/in/peterxdeng/

In this episode, we cover:
(00:00) Introduction to Peter Deng
(05:41) AI and AGI insights
(11:35) The future of education with AI
(16:53) The power of language in leadership
(21:01) Building iconic products
(36:44) Scaling from zero to 100
(41:56) Balancing short- and long-term goals
(47:12) Creating a healthy tension in teams
(50:02) The five archetypes of product managers
(55:39) Primary and secondary archetypes
(58:47) Hiring for growth mindset and autonomy
(01:15:52) Effective management and communication strategies
(01:19:23) Presentation advice and self-advocacy
(01:25:50) Balancing craft and practicality in product management
(01:30:40) The importance of empathy in design thinking
(01:35:45) Career decisions and learning opportunities
(01:42:05) Lessons from product failures
(01:45:42) Lightning round and final thoughts

Referenced:
• OpenAI: https://openai.com/
• Artificial general intelligence (AGI): https://en.wikipedia.org/wiki/Artificial_general_intelligence
• Head of ChatGPT answers philosophical questions about AI at SXSW 2024 with SignalFire's Josh Constine: https://www.youtube.com/watch?v=mgbgI0R6XCw
• Professors Are Using A.I., Too. Now What?: https://www.npr.org/2025/05/21/1252663599/kashmir-hill-ai#:~:text=Now%20What
• Herbert H. Clark: https://web.stanford.edu/~clark/
• Russian speakers get the blues: https://www.newscientist.com/article/dn11759-russian-speakers-get-the-blues/
• Ilya Sutskever (OpenAI Chief Scientist)—Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment: https://www.dwarkesh.com/p/ilya-sutskever
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Kevin Systrom on LinkedIn: https://www.linkedin.com/in/kevinsystrom/
• Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan
• Microsoft CPO: If you aren't prototyping with AI, you're doing it wrong | Aparna Chennapragada: https://www.lennysnewsletter.com/p/microsoft-cpo-on-ai
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Granola: https://www.granola.ai/
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Fidji Simo on LinkedIn: https://www.linkedin.com/in/fidjisimo/
• Airtable: https://www.airtable.com/
• George Lee on LinkedIn: https://www.linkedin.com/in/geolee/
• Andrew Chen on LinkedIn: https://www.linkedin.com/in/andrewchen/
• Lauryn Motamedi on LinkedIn: https://www.linkedin.com/in/laurynmotamedi/
• Twilio: https://www.twilio.com/
• Nick Turley on LinkedIn: https://www.linkedin.com/in/nicholasturley/
• Ian Silber on LinkedIn: https://www.linkedin.com/in/iansilber/
• Thomas Dimson on LinkedIn: https://www.linkedin.com/in/thomasdimson/
• Joey Flynn on LinkedIn: https://www.linkedin.com/in/joey-flynn-8291586b/
• Ryan O'Rourke's website: https://www.rourkery.com/
• Joanne Jang on LinkedIn: https://www.linkedin.com/in/jangjoanne/
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• Jill Hazelbaker on LinkedIn: https://www.linkedin.com/in/jill-hazelbaker-3aa32422/
• Guy Kawasaki's website: https://guykawasaki.com/
• Eric Antonow on LinkedIn: https://www.linkedin.com/in/antonow/
• Sachin Kansal on LinkedIn: https://www.linkedin.com/in/sachinkansal/
• IDEO design thinking: https://designthinking.ideo.com/
• The 7 Steps of the Design Thinking Process: https://www.ideou.com/blogs/inspiration/design-thinking-process
• Linear's secret to building beloved B2B products | Nan Yu (Head of Product): https://www.lennysnewsletter.com/p/linears-secret-to-building-beloved-b2b-products-nan-yu
• Jeff Bezos's quote: https://news.ycombinator.com/item?id=27778175
• Friendster: https://en.wikipedia.org/wiki/Friendster
• Myspace: https://en.wikipedia.org/wiki/Myspace
• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen
• "Smile" by Jay-Z: https://www.youtube.com/watch?v=SSumXG5_rs8&list=RDSSumXG5_rs8&start_radio=1
• The Wire on HBO: https://www.hbo.com/the-wire
• Felicis: https://www.felicis.com/

Recommended books:
• Sapiens: A Brief History of Humankind: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095
• The Design of Everyday Things: https://www.amazon.com/Design-Everyday-Things-Revised-Expanded/dp/0465050654
• The Silk Roads: A New History of the World: https://www.amazon.com/Silk-Roads-New-History-World/dp/1101912375

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.lennysnewsletter.com/subscribe
226 | Thuy-Ngan Trinh is Managing Director of Project A and helped build many later unicorns. In this crossover episode we talk about how the German Mittelstand can profit from AI, and where it has so far been falling short.

Get your ticket for the 1st KI Gipfel in Stuttgart on 7 July. I'll be on stage too! Code: ALEXMROZEK99

You can find more business ideas at digitaleoptimisten.de/datenbank.

Chapters:
(00:00) Intro & crossover setup
(03:56) AGI, ASI, and why simple agents are enough
(07:48) Use cases, data & the dance-floor metaphor
(16:00) AI democratization vs. blockers: China beats Germany
(28:29) Scarcity is the Mother of Invention
(33:25) 10x goals & KPI ambition in the Mittelstand
(40:47) Thuy-Ngan's best business idea

More context:
In this crossover episode, Alex Mrozek and Thuy-Ngan discuss current developments and challenges in artificial intelligence (AI). They examine the differences between AGI and ASI, the importance of data projects, and AI adoption in companies. They also address the role of education and the responsibility of leaders in the AI transformation. Finally, they highlight the emotional dimension of AI-driven change and the need to rethink KPIs for AI adoption.

Keywords:
Artificial intelligence, AGI, ASI, data projects, AI adoption, education, KPI, transformation, leaders, emotions
Flo Crivello, CEO of AI agent platform Lindy, provides a candid deep dive into the current state of AI agents, cutting through hype to reveal what's actually working in production versus what remains challenging. The conversation explores practical implementation details, including model selection, fine-tuning, RAG systems, tool design philosophy, and why most successful "AI agents" today are better described as intelligent workflows with human-designed structure. Flo shares insights on emerging capabilities like more open-ended agents, discusses his skepticism about extrapolating current progress trends too far into the future, and explains why scaffolding will remain critical even as we approach AGI. This technical discussion is packed with practical nuggets for AI engineers and builders working on agent systems.

Sponsors:

Google Gemini: Google Gemini features VEO3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.com

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive

The AGNTCY (Cisco): The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks.
Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utmcampaign=fy25q4agntcyamerpaid-mediaagntcy-cognitiverevolutionpodcast&utmchannel=podcast&utmsource=podcast

NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks. Download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Co-Host Ayush Prakash (https://mountaintoppodcast.com/ayush) The 'generation gap' is nothing new. Boomers complained about how 'square' their parents were, and then Gen X complained about how lame their parents' music was. But with Gen Z it's a bit different. This generation is the first to grow up 'neuro-plastically connected to technology'. Those are the words of my first-time guest Ayush Prakash, author of the new book called AI for Gen Z. So how exactly has the Internet Age shaped young adults? What's more, how will AI do that from now on? Ayush is actually a Gen Z'er, so he knows what he's talking about. He starts by talking about the '3 Rs' that will affect all of us going forward, but especially the youngest generation coming up. How do technology companies and their prioritization of profit really affect all of us in unexpected ways, particularly as men? What specifically has happened in just the last five short years to rob the human race of its humanity, let alone your masculinity? How is AI tricking us into accepting the computer as an authoritarian power (yes, really), and dragging us down to a less-human level of existence with it? Are you cheating on your partner if you get an AI girlfriend programmed to give you what you're missing from your IRL relationship? Is there really any correlation between intelligent learning models as we know them and the possibility of a 'Skynet'-like AGI? What does it even mean to be human--and masculine--a quarter of the way through the 21st century? And if we've lost something there already, how do we get it back? Check out even deeper and more controversial takes to spark your curiosity at https://mountaintoppodcast.com/substack === HELP US SEND THE MESSAGE TO GREAT MEN EVERYWHERE === The content in this show is NEVER generated by AI. I discovered it can't handle a joke a long time ago. Meanwhile, I'll keep the practical, actionable ideas coming as well as the entertaining part...all for free.
If you love what you hear, please rate the show on the service you subscribed to it on (takes one second) and leave a review. As we say here in Texas, I appreciate you!
Amjad Masad (@amasad), founder and CEO of Replit, and Yohei Nakajima (@yoheinakajima), Managing Partner at Untapped Capital, joined Village Global partner Ben Casnocha for a live masterclass with Village Global founders. Takeaways:
- AI agents are rapidly evolving, with coding and deep research agents showing the most traction today. But general-purpose assistants are still brittle — trip-planning and high-context tasks remain hard.
- Replit Agent shows how quickly full-stack applications can be built today, sometimes in under an hour — even by non-technical users. What matters most isn't a CS degree, it's traits like curiosity, grit, and systems thinking.
- Many AI startups are too quick to claim "moats" when most don't really have one. True defensibility requires deep domain insight, unique data, and the right founder traits.
- The rise of vertical AI agents is compelling — specialists outperform general agents for now. A real AGI will change everything, and it's so disruptive it's not even worth planning around.
- The best investors still look for timeless traits: hard-charging, resourceful founders, attacking stagnant industries. AI changes a lot — but not what makes a great early-stage team.
- Tools like Replit are making vibe coding (yes, even for non-coders) a superpower. From executive dashboards to lightweight Crunchbase clones, agents are already creating real enterprise value.
- Don't over-engineer AI use cases. Start with internal tools or things you've always wanted to build.
The best projects often come from personal curiosity and side projects. Resources mentioned:
- Replit – The coding platform behind Replit Agent, enabling fast full-stack app creation with AI
- VCpedia by Yohei Nakajima – A startup intelligence platform vibe-coded with Replit Agent
- Tweet: $150k → $400 NetSuite extension – Real-world example of arbitrage using Replit
- TED Talk on Grit by Angela Duckworth – Referenced by Amjad as a key trait for AI builders
- "Perfectionism" blog post by Amjad Masad – Why it holds builders back and how to overcome it
- Seven Powers by Hamilton Helmer – The strategy book Amjad calls the best resource on real moats
- NEO – A fully autonomous ML engineer
- Layers – An autonomous AI marketing agent that lives in your IDE
- Basis – A vertical AI agent for accounting firms
- NDEA – A new lab (founded by François Chollet & Mike Knoop) exploring AGI with program synthesis
Thanks for listening — if you like what you hear, please review us on your favorite podcast platform. Check us out on the web at www.villageglobal.vc or get in touch with us on Twitter @villageglobal. Want to get updates from us? Subscribe to get a peek inside the Village. We'll send you reading recommendations, exclusive event invites, and commentary on the latest happenings in Silicon Valley. www.villageglobal.vc/signup
This week, hosts Chad Sowash, Joel Cheesman, and Emi Beredugo sling zingers at the tech and policy chaos of today's world of work. First up, they cackle over OpenAI's Sam Altman throwing shade at Meta, claiming Zuck's crew dangled $100 million bonuses to poach his AI wizards. Altman, smirking on his brother's podcast, scoffed, "Meta's not exactly an innovation powerhouse," betting OpenAI's culture will outshine cash as they chase superintelligence—AI that'll make humans look like dial-up modems. Chad quips, "Zuck's throwing cash like confetti, but Altman's holding the AGI trump card." Next, the hosts tackle Trump's immigration whiplash. Last week, he hit pause on ICE raids targeting farms and hotels—where 42% of crop workers and 7.6% of hospitality staff are undocumented—after farmers cried foul. But days later, he flipped, doubling down on mass deportations, especially in blue states, risking $315 billion in economic fallout. Tech gets weirder with Amazon's Andy Jassy predicting AI will shrink corporate jobs, leaning on generative AI and Zoox's 10,000 robotaxis to replace drivers. Meanwhile, Zoom's Eric Yuan shrugs off work-life balance, saying leaders live for work and family, but sees AI pushing Gen Z toward three-day workweeks. Klarna's CEO, Sebastian Siemiatkowski, not to be outdone, launches an AI hotline starring a digital him. Surely, AI Sebastian will be running interviews at Klarna soon, right? Tune in for insight. Chapters 00:00 Introduction and Summer Vibes 01:49 Current Events: Juneteenth and Global Chaos 03:21 TikTok's Staying Power 05:10 Browser Dating: Privacy or Romance? 08:08 Indeed's New Market Squeeze 08:25 Meta vs. OpenAI: The Poaching Wars 24:32 Trump's Economic Tightrope 29:35 Immigration vs. Market Needs 35:26 AI's Job Displacement Threat 45:33 Culture and Burnout 50:23 The Infinite Workday Free stuff at http://www.chadcheese.com/free
This week's blogpost - https://bahnsen.co/4jYgcxO In this episode of the 'Thoughts On Money' podcast, co-host Blaine Carver and guest Darren Lightfoot delve into the intricacies of Roth conversions and the potential tax traps associated with them. Blaine shares personal anecdotes and explains why Roth conversions, despite their popularity, require careful consideration of several factors that go beyond simple tax bracket comparisons. They discuss how adjustments in adjusted gross income (AGI) and modified AGI (MAGI) can affect various aspects such as Social Security taxation, Medicare premiums, capital gains taxes, and eligibility for tax credits. Key insights are provided on navigating these hidden pitfalls and the importance of consulting with financial professionals for tailored advice. 00:00 Introduction and Host Welcome 00:38 Beach Story and Weather Analogy 02:50 Introduction to Roth Conversions 04:34 Detailed Tax Traps in Roth Conversions 08:18 Impact on Social Security and Medicare 12:12 Qualified Charitable Distributions (QCD) 14:50 Dividends, Capital Gains, and Tax Credits 18:20 Final Thoughts and Advice 24:01 Podcast Conclusion and Disclaimers Links mentioned in this episode: http://thoughtsonmoney.com http://thebahnsengroup.com
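The "hidden tax trap" Blaine and Darren describe boils down to a simple mechanic: a Roth conversion is added to ordinary income, which raises AGI/MAGI and can push a retiree over cliff-style thresholds for Medicare premiums and other items. A minimal sketch of that mechanic, with made-up threshold and surcharge figures purely for illustration (real figures change yearly; consult a professional):

```python
# Illustrative sketch only: the threshold and surcharge numbers below are
# invented for demonstration, not actual IRS/Medicare figures.

ILLUSTRATIVE_MEDICARE_CLIFF = 103_000  # hypothetical single-filer MAGI cliff
ILLUSTRATIVE_SURCHARGE = 888           # hypothetical extra annual premium

def medicare_surcharge(base_magi: float, conversion: float) -> float:
    """Extra annual Medicare premium triggered by a Roth conversion.

    The conversion amount counts as ordinary income, so it is added
    straight into MAGI before the cliff test.
    """
    magi = base_magi + conversion
    return ILLUSTRATIVE_SURCHARGE if magi > ILLUSTRATIVE_MEDICARE_CLIFF else 0.0

# A $30k conversion on a $90k base MAGI crosses the illustrative cliff,
# while a $10k conversion stays under it:
print(medicare_surcharge(90_000, 30_000))  # 888
print(medicare_surcharge(90_000, 10_000))  # 0.0
```

The point of the episode is exactly this shape of problem: the marginal cost of one more dollar of conversion is not just the bracket rate, because crossing a cliff adds a fixed surcharge on top.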
Part One: https://www.spreaker.com/episode/all-things-ai-part-one--66659818 In this special bonus episode of the Business English Podcast, host Rob collaborates with Stephen of Simple English News Daily (SEND7) to discuss the current and future implications of artificial intelligence (AI) in the business world and language learning. The conversation delves into how AI may affect different job levels, its role as an enabler rather than a replacement, and AGI's potential. They also explore the integration of AI in English language learning and its limitations. The episode encourages listeners to experiment with AI tools to enhance their skills and provides insights on balancing AI tools with human interaction for language mastery. 00:00 Introduction and Episode Overview 01:10 Diving into AI's Impact on Jobs 02:33 AI as a Co-Pilot in the Workplace 04:12 Challenges and Limitations of AI 06:43 Understanding AGI and Its Implications 09:26 AI in Language Learning 12:40 Practical Uses of AI in Communication 15:24 Final Thoughts and Wrap-Up
OpenAI's Sam Altman is doing a full blown AI media tour and taking no prisoners. GPT-5! Humanoid robotics! Smack talk! The next generation of AI is…maybe almost here? We unpack Altman's brand-new in-house podcast (and his brother's), confirm the “likely-this-summer” GPT-5 timeline and reveal why Meta is dangling $100 million signing bonuses at OpenAI staff. Plus: the freshly launched “OpenAI Files” site, Altman's latest shot at Elon, and what's real versus propaganda. Then it's model-mania: Midjourney Video goes public, ByteDance's Seedance stuns, Minimax's Hailuo 02 levels up, and yet Veo 3 still rules supreme. We tour Amazon's “fewer-humans” future, Geoffrey Hinton's job-loss warning, Logan Kilpatrick's “AGI is product first” take, and a rapid-fire Robot Watch: 1X's world-model paper, Spirit AI's nimble dancer, and Hexagon's rollerblade-footed speedster. THE ROBOTS ARE ON WHEELS. GPT-5 IS AT THE DOOR. IT'S A GOOD SHOW. Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI's Official Podcast with Sam Altman https://youtu.be/DB9mjd-65gw?t=632 Sam Altman on Jack Altman's Podcast https://youtu.be/mZUG0pr5hBo?si=QNv3MGQLWWQcb4Aq Boris Power (Head of OpenAI Research) Tweet https://x.com/BorisMPower/status/1935160882482528446 The OpenAI Files https://www.openaifiles.org/ Google's Logan Kilpatrick on AGI as Product https://x.com/vitrupo/status/1934627428372283548 Midjourney Video is now LIVE https://x.com/midjourney/status/1935377193733079452 Our early MJ Video Tests https://x.com/AIForHumansShow/status/1935393203731283994 Seedance (New Bytedance AI Video Model) https://seed.bytedance.com/en/seedance Hailuo 2 (MiniMax New Model) https://x.com/Hailuo_AI/status/1935024444285796561 
SQUIRREL PHYSICS: https://x.com/madpencil_/status/1935011921792557463 Higgsfield Canvas: a state-of-the-art image editing model https://x.com/higgsfield_ai/status/1935042830520697152 Krea1 - New AI Imaging Model https://www.krea.ai/image?k1intro=true Generating Mickey Mouse & More In Veo-3 https://x.com/omooretweets/status/1934824634442211561 https://x.com/AIForHumansShow/status/1934832911037112492 LA Dentist Commercials with Veo 3 https://x.com/venturetwins/status/1934378332021461106 AI Will Shrink Amazon's Workforce Says Andy Jassy, CEO https://www.cnbc.com/2025/06/17/ai-amazon-workforce-jassy.html Geoffrey Hinton Diary of a CEO Interview https://youtu.be/giT0ytynSqg?si=BKsfioNZScK4TJJV More Microsoft Layoffs Coming https://x.com/BrodyFord_/status/1935405564831342725 25 New Potential AI Jobs (from the NYT) https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html 1X Robotics World Model https://x.com/1x_tech/status/1934634700758520053 SpiritAI just dropped their Moz1 humanoid https://x.com/XRoboHub/status/1934860548853944733 Hexagon Humanoid Robot https://x.com/TheHumanoidHub/status/1935126478527807496 Training an AI Video To Make Me Laugh (YT Video) https://youtu.be/fKpUP4dcCLA?si=-tSmsuEhzL-2jdMY
In this episode of AI Answers, Paul Roetzer and Cathy McPhillips tackle 20 of the most pressing questions from our 48th Intro to AI class—covering everything from building effective AI roadmaps and selecting the right tools, using GPTs, navigating AI ethics, understanding great prompting, and more. Access the show notes and show links here Timestamps: 00:00:00 — Intro 00:08:46 — Question #1: How do you define a “human-first” approach to AI? 00:11:33 — Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world? 00:15:55 — Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real? 00:17:53 — Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out? 00:23:17 — Question #5: Do you see a future where AI agents can collaborate like human teams? 00:28:40 — Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one? 00:30:50 — Question #7: What's the difference between ChatGPT Projects and Custom GPTs? 00:32:36 — Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale? 00:36:12 — Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging? 00:38:53 — Question #10: What tools or platforms in the agent space are actually ready for production today? 00:43:10 — Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap? 00:45:34 — Question #12: What AI tools do you believe deliver the most value to marketing leaders right now? 00:46:20 — Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs? 00:51:14 — Question #14: What does great prompting actually look like? 
And how should employers think about evaluating that skill in job candidates? 00:54:40 — Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out? 00:56:52 — Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows? 01:00:54 — Question #17: What is ChatGPT really storing in its "memory," and how persistent is user data across sessions? 01:02:11 — Question #18: How can businesses safely use LLMs while protecting personal or proprietary information? 01:02:55 — Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift? 01:04:13 — Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win? This week's episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Dwarkesh Patel is the host of the Dwarkesh Podcast. He joins Big Technology Podcast to discuss the frontiers of AI research, sharing why his timeline for AGI is a bit longer than the most enthusiastic researchers. Tune in for a candid discussion of the limitations of current methods, why continuous AI improvement might help the technology reach AGI, and what an intelligence explosion looks like. We also cover the race between AI labs, the dangers of AI deception, and AI sycophancy. Tune in for a deep discussion about the state of artificial intelligence, and where it's going. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Welcome to Episode 44 of The Paranormal Rundown! In this episode we have Jeff join us from Falling Knife Films. Jeff has an extensive background in paranormal investigation and research, including multiple investigations with David. He also has a very creative side, now with a focus on AI generated video. Additionally, Jeff has had some fascinating results using AI as a psychic! Later in the episode we discuss our plans to validate some of this activity, with The Great Psychic Chat GPT Experiment, and how it might actually be possible. We dig deep into AI, Consciousness, Technology, and Science, with side roads that include The Carrington Event, Red Mercury, the Star Wars Program, Autonomous AI, AGI, Wilder Penfield, Quantum Tubules, Tokens in Tokens out, Spirit Manipulation of Technology, Electro-Neuro-Magnetic Pulse, AI Training, GPT's demonic voice, the Turing Test, Doctor GPT, GPT Analyzes Fabio, Bobby Mackey's, Energy Effects on Paranormal Activity, and Beginner's Luck. Oh, and Jeff shares with us an extremely unique (and perhaps a little disturbing) EVP of Chickens. So fire up your favorite AI research assistant, and prepare to step into a future where technology and the paranormal collide!
You can find Falling Knife Films on YouTube at https://www.youtube.com/@mysteryfilms and on Reddit at https://www.reddit.com/user/FallingKnifeFilms You can find the image of Fabio here: https://imgur.com/a/fabio-by-dog-trap-jnDm1SG If you have any unique AI experiences or ideas for The Great Psychic Chat GPT Experiment, please email us at feedback@paranormalrundown.com The Paranormal Rundown is a partnership between the hosts David Griffith, Father Michael Birdsong, and Vic Hermanson. Be sure to check out our partner podcasts: You can find Vic at Trailer Trash Terrors, https://podcasters.spotify.com/pod/show/vic-hermanson You can find Father Birdsong at https://www.becomingahouseofprayer.com, as well as hear his new podcast Ending the Curse at: https://open.spotify.com/show/5yL7ZAN4wcRKnMPAlalVXW Hosted on Acast. See acast.com/privacy for more information.
Joe Hudson is a coach who works with the executives building AGI at OpenAI. From inside OpenAI, he witnesses the full spectrum of human emotion that comes with bringing something new into the world—the exhilaration, the terror, the weight of it all. He feels these emotions, too: He believes AI will eventually replace what he does as a coach. But instead of fixating on that fear, Hudson is asking a deeper question: Who is he becoming in the meantime? He believes that moments like this—when we can feel the ground quiver—can be powerful catalysts for transformation, but only if we're willing to face the uncertainty they bring. In this episode of AI & I, Dan Shipper sits down with Hudson to talk about how he's answering that question. They get into what happens when the thing you've built your life around might disappear, how to find who you are beneath your professional identity, and why Hudson believes intention is the key to growing with AI. If you found this episode interesting, please like, subscribe, comment, and share! Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free. To hear more from Dan Shipper: Subscribe to Every: https://every.to/subscribe Follow him on X: https://twitter.com/danshipper Sponsor: Experience high quality AI video generation with Google's most capable video model: Veo 3.
Try it in the Gemini app at gemini.google with a Google AI Pro plan or get the highest access with the Ultra plan. Timestamps: Introduction: 00:01:49 What it feels like inside the room where AGI is being built: 00:03:14 The most important question to ask yourself as AGI approaches: 00:08:15 The importance of sitting with uncertainty: 00:17:49 How Joe is preparing his daughters for a post-AGI world: 00:21:11 How we think, feel, and react; the three layers of human awareness: 00:27:25 Staying grounded while coaching the people shaping our future: 00:35:34 Why Joe doesn't take things personally—even when the stakes are high: 00:42:44 Links to resources mentioned in the episode: Joe Hudson: @FU_joehudson; Learn more about the coaching and workshops that Joe runs: Art of Accomplishment
What is AGI? AI's abilities are increasingly concerning. Hour 4, 6/17/2025, The Dana & Parks Podcast. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want! © 2025 Audacy, Inc.
Episode 63: What if you could turn your idea into a fully working app—just by describing it in plain English? Matt Wolfe (https://x.com/mreflow) sits down with Anton Osika (https://x.com/antonosika), CEO of Lovable, a revolutionary platform that lets anyone build and launch software using AI—no code or development team required. In this episode, Anton gives a live demo of Lovable, reveals how creators of all ages—including kids and solo founders—are launching real businesses in hours, and dives into how AI-powered platforms like Lovable will change the future of entrepreneurship, creativity, and even move us closer to AGI. If you're a builder, maker, or curious about the next frontier in software creation, this conversation will reshape how you think about launching your next product. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) AI-Powered Code Revolution (04:21) Engineers as Problem Translators (07:50) Supabase Integration Simplifies Startups (10:49) Enhancing Design and Collaboration (16:46) Intuitive AI Interface Development (19:31) AI Empowering Solo Entrepreneurs (22:40) Future of Software Development: Automation Impact (24:18) Lovable App — Mentions: Want better prompts? 
Get our guide to Advanced Prompt Engineering: https://clickhubspot.com/wbo Anton Osika: https://www.linkedin.com/in/antonosika/ Lovable: https://lovable.dev/ Supabase: https://supabase.com/ Claude: https://claude.ai/ Gemini: https://gemini.google.com/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
One of the longest-standing partnerships in AI could be reaching a breaking point. We look at the souring relationship between OpenAI and Microsoft and why AGI is at the center of it.
ABC's Mike Dobuski reports on AGI...What is AGI? It's AI that could match the intelligence of a human brain. Plus, Trump Family Phone Plan
Meet Dr. Bo Wen, a staff research scientist, AGI specialist, cloud architect, and tech lead in digital health at IBM. He's joining us to discuss his perspective on the rapid evolution of AI – and what it could mean for the future of human communication… With deep expertise in generative AI, human-AI interaction design, data orchestration, and computational analysis, Dr. Wen is pushing the boundaries of how we understand and apply large language models. His interdisciplinary background blends digital health, cognitive science, computational psychiatry, and physics, offering a rare and powerful lens on emerging AI systems. Since joining IBM in 2016, Dr. Wen has played a key role in the company's Healthcare and Life Sciences division, contributing to innovative projects involving wearables, IoT, and AI-driven health solutions. Prior to IBM, he earned his Ph.D. in Physics from the City University of New York and enjoyed a successful career as an experimental physicist. In this conversation, we explore: How Dr. Wen foresaw the AI breakthrough nearly a decade ago The implications of AGI for communication, reasoning, and human-AI collaboration How large language models work. What AI needs to understand to predict words in sentences. Want to dive deeper into Dr. Wen's work? Learn more here! Episode also available on Apple Podcasts: http://apple.co/30PvU9C
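The word-prediction task Dr. Wen discusses can be made concrete with a toy model: a real LLM learns billions of parameters over subword tokens, but the core job, scoring candidate next words given context, can be mimicked with simple bigram counts. The corpus below is invented for illustration only:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# invented corpus. An LLM does the same prediction task, but with a
# neural network over subword tokens and far longer context.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- seen twice, vs. once each for 'mat' and 'fish'
```

The gap between this sketch and a real model is exactly what the interview gets at: to predict well in open text, the system has to absorb far more than adjacent-word statistics, which is where the apparent "understanding" comes from.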
In this episode of SparX, Mukesh Bansal sits down with tech veteran and former Google and Flipkart leader Peeyush Ranjan to explore how AI is fundamentally reshaping work, expertise, and entrepreneurship. From replacing human expertise in fields like medicine, law, and education to building AI-native startups, Peeyush dives deep into the promises and pitfalls of the AI revolution. They discuss the rise of coding, AI tutors, and what it means to be an engineer or student in a world dominated by large language models. If you're a founder, student, or knowledge worker wondering how to stay ahead of the curve, this episode is packed with clarity, wit, and hard truths. Resource List:
- What are LLMs – https://www.cloudflare.com/en-gb/learning/ai/what-is-large-language-model/
- AI Hallucination – https://cloud.google.com/discover/what-are-ai-hallucinations
- Thick and Thin Wrapper Applications – https://www.command.ai/blog/gpt-wrapper/
- Y2K movement – https://medium.com/thevou/what-is-y2k-561c8bffbf98
- What does Vibe coding mean – https://en.wikipedia.org/wiki/Vibe_coding
- What is AGI – https://cloud.google.com/discover/what-is-artificial-general-intelligence
In this episode:
- The benefits of long-term capital gains tax rates and how to qualify
- The impact of the 3.8% Net Investment Income Tax (often called the "Obamacare tax")
- The key differences between long-term and short-term capital gains
- New increases to the standard deduction for 2025
- A proposed tax-advantaged "Trump Account" for children under age 8, including contribution limits and withdrawal rules
- The House-passed bill's broader context, including its 100+ tax provisions still awaiting Senate approval
- Why it's important to wait for legislation to be finalized before planning around it
In this episode, our guest, Debora Gorman, dives into Maryland's sweeping tax changes for 2025, aimed at closing a $3.3 billion budget gap—while giving entrepreneurs a crash course in how these changes will hit their bottom lines. If you own or operate a business in Maryland (or sell into it), this is your cheat sheet to staying compliant and tax-savvy in the face of some major shifts. What You'll Learn:
- New High-Income Tax Brackets: Maryland has introduced two new tax tiers for top earners—up to 6.5% for individuals making over $1 million and joint filers over $1.2 million. Business owners should reassess employee withholdings and estimated tax payments now.
- Capital Gains Surtax: A new 2% surtax applies to net capital gains for individuals with federal AGI over $350,000. Some exceptions apply (think retirement accounts and primary residences under $1.5M), but high-income entrepreneurs will want to revisit their investment strategies and tax planning.
- Standard Deduction Boosts: Maryland has raised its standard deduction significantly. It could tip the scales for some taxpayers, especially small business owners juggling itemized deductions. Gorman urges entrepreneurs to run both scenarios before filing.
- New 3% Sales Tax on Digital & IT Services: Starting July 1, 2025, certain digital subscriptions and tech services will be taxed. This includes cloud storage, custom software, and IT consulting under specific NAICS codes. If you're selling these services, you'll need to start collecting and remitting the tax. If you're buying, plan for increased costs.
- Out-of-State Sellers Beware: If your company sells into Maryland and hits $100K in revenue or 200 transactions/year, you're likely on the hook for sales tax collection—even without a physical presence.
- Custom Software No Longer Exempt: With the expansion of what's taxable, the custom software exemption is gone. If it's tied to taxable IT services, it's now subject to the 3% rate.
- Help Is Available: Gorman highlights resources like MarylandTaxConnect.gov and the Comptroller's legislative updates page, which includes step-by-step videos, withholding calculators, and FAQs. Bonus: There's no underpayment penalty for 2025 if your miscalculation is due to the new rates.
This episode delivers exactly what every entrepreneur needs: clarity, direction, and tools to stay ahead of the changes—without the legalese. Stay sharp and compliant, Maryland business owners. July's coming fast.
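The two headline rates from this episode lend themselves to a quick back-of-the-envelope check. This sketch encodes only the figures quoted above (the 2% capital gains surtax for federal AGI over $350,000 and the 3% sales tax on covered digital/IT services) and deliberately ignores every exception the episode mentions; it is an illustration, not tax advice:

```python
# Back-of-the-envelope check of the two Maryland 2025 rates discussed above.
# Ignores all exceptions (retirement accounts, primary residences, etc.)
# covered in the episode -- illustration only, not tax advice.

SURTAX_RATE = 0.02          # 2% surtax on net capital gains
SURTAX_AGI_FLOOR = 350_000  # applies when federal AGI exceeds this
DIGITAL_TAX_RATE = 0.03     # 3% sales tax on covered digital/IT services

def capital_gains_surtax(federal_agi: float, net_gains: float) -> float:
    """Surtax owed on net capital gains once AGI crosses the floor."""
    return net_gains * SURTAX_RATE if federal_agi > SURTAX_AGI_FLOOR else 0.0

def digital_services_tax(invoice_total: float) -> float:
    """Sales tax to collect on a covered IT-services invoice."""
    return invoice_total * DIGITAL_TAX_RATE

# $50k of gains at $400k AGI owes the surtax; a $10k covered invoice
# picks up the new 3% collection obligation:
print(capital_gains_surtax(400_000, 50_000))  # 1000.0
print(digital_services_tax(10_000))           # 300.0
```

Note the cliff structure: at $349,999 of federal AGI the surtax is zero, and one dollar more puts the entire net gain under the 2% rate, which is why the episode stresses re-running withholding and estimated payments now.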
In this episode we review the most relevant topics before the market open: • Wall Street falls after Middle East attack: Futures sharply lower: $SPX -1.2%, $US100 -1.2%, $INDU -1.1%. Israel launched airstrikes on nuclear facilities in Iran, which responded with more than 100 drones. Israel declared a state of emergency and closed its airspace. Oil posts its biggest gain since Russia's invasion of Ukraine. On Thursday Wall Street closed higher after a good PPI inflation reading. Today the only release expected is the University of Michigan consumer expectations survey. • Nvidia advances with an AI Factory in Germany: $NVDA falls -1.5% after announcing a deal with Deutsche Telekom $DTEGY to build a sovereign AI Factory before 2026. It will focus on smart manufacturing in Europe, offering simulation, design, digital twins, and robotics. Jensen Huang said every manufacturer will need two factories: one physical and one for intelligence. • Meta acquires a stake in Scale AI for $14.3B: $META buys 49% of Scale AI, valuing it at $29B. Scale's CEO, Alexandr Wang, joins Meta to lead its artificial general intelligence (AGI) division. The deal includes the transfer of key talent and adds to Zuckerberg's superintelligence strategy. Scale will maintain operational independence. • New York approves 29 electric projects: The NY Public Service Commission authorized $636.2M in investments to add 642 MW to the grid. $ED leads with five projects worth $440M. $NGG, NYSEG, and RG&E manage the rest. The work is aligned with building and transportation electrification and supports the state's decarbonization goals. A session marked by tension in the Middle East, the consolidation of AI, and progress in electric infrastructure. Don't miss it!
Paris Marx is joined by Karen Hao to discuss how Sam Altman's goal of scale at all costs has spawned a new empire founded on exploitation of people and the environment, resulting in not only the loss of valuable research into more inventive AI systems, but also exacerbated data privacy issues, intellectual property erosion, and the perpetuation of surveillance capitalism. Karen Hao is an award-winning journalist and the author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson. Also mentioned in this episode: Karen was the first journalist to profile OpenAI. Karen has reported on the environmental impacts and human costs of AI. The New York Times reported on Why We're Unlikely to Get Artificial General Intelligence Anytime Soon. Support the show
For decades, US allies have slept soundly under the protection of America's overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of Hard New World: Our Post-American Future — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.Links to learn more, video, highlights, and full transcript: https://80k.info/hw“Trump has very little trouble accepting other great powers as co-equals,” Hugh explains. And that happens to align perfectly with a strategic reality the foreign policy establishment desperately wants to ignore: fundamental shifts in global power have made the costs of maintaining a US-led hegemony prohibitively high.Even under Biden, when Russia invaded Ukraine, the US sent weapons but explicitly ruled out direct involvement. Ukraine matters far more to Russia than America, and this “asymmetry of resolve” makes Putin's nuclear threats credible where America's counterthreats simply aren't. Hugh's gloomy prediction: “Europeans will end up conceding to Russia whatever they can't convince the Russians they're willing to fight a nuclear war to deny them.”The Pacific tells the same story. Despite Obama's “pivot to Asia” and Biden's tough talk about “winning the competition for the 21st century,” actual US military capabilities there have barely budged while China's have soared, along with its economy — which is now bigger than the US's, as measured in purchasing power. 
Containing China and defending Taiwan would require America to spend 8% of GDP on defence (versus 3.5% today) — and convince Beijing it's willing to accept Los Angeles being vaporised. Unlike during the Cold War, no president — Trump or otherwise — can make that case to voters. Our new “multipolar” future, split between American, Chinese, Russian, Indian, and European spheres of influence, is a “darker world” than the golden age of US dominance. But Hugh's message is blunt: for better or worse, 35 years of American hegemony are over. Recorded 30/5/2025.
Chapters:
00:00:00 Cold open
00:01:25 US dominance is already gone
00:03:26 US hegemony was the weird aberration
00:13:08 Why the US bothered being the 'new Rome'
00:23:25 Evidence the US is accepting the multipolar global order
00:36:41 How Trump is advancing the inevitable
00:43:21 Rubio explicitly favours this outcome
00:45:42 Trump is half-right that the US was being ripped off
00:50:14 It doesn't matter if the next president feels differently
00:56:17 China's population is shrinking, but it doesn't matter
01:06:07 Why Hugh disagrees with other realists like Mearsheimer
01:10:52 Could the US be persuaded to spend 2x on defence?
01:16:22 A multipolar world is bad, but better than nuclear war
01:21:46 Will the US invade Panama? Greenland? Canada?!
01:32:01 What should everyone else do to protect themselves in this new world?
01:39:41 Europe is strong enough to take on Russia
01:44:03 But the EU will need nuclear weapons
01:48:34 Cancel (some) orders for US fighter planes
01:53:40 Taiwan is screwed, even with its AI chips
02:04:12 South Korea has to go nuclear too
02:08:08 Japan will go nuclear, but can't be a regional leader
02:11:44 Australia is defensible but needs a totally different military
02:17:19 AGI may or may not overcome existing nuclear deterrence
02:34:24 How right is realism?
02:40:17 Has a country ever gone to war over morality alone?
02:44:45 Hugh's message for Americans
02:47:12 Why America temporarily stopped being isolationist
Tell us what you thought! https://forms.gle/AM91VzL4BDroEe6AA
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
OpenAI's Sam Altman drops o3-Pro & sees “The Gentle Singularity”, Ilya Sutskever prepares for super intelligence & Mark Zuckerberg is spending MEGA bucks on AI talent. WHAT GIVES? All of the major AI companies are not only preparing for AGI but for true “super intelligence” which is on the way, at least according to *them*. What does that mean for us? And how exactly do we prepare for it? Also, Apple's WWDC is a big AI letdown, Eleven Labs' new V3 model is AMAZING, Midjourney got sued and, oh yeah, those weird 1X Robotics androids are back and running through grassy fields. WHAT WILL HAPPEN WHEN AI IS SMARTER THAN US? ACTUALLY, IT PROB ALREADY IS. #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links //
Ilya Sutskever's Commencement Speech About AI https://youtu.be/zuZ2zaotrJs?si=U_vHVpFEyTRMWSNa Apple's Cringe Genmoji Video https://x.com/altryne/status/1932127782232076560 OpenAI's Sam Altman On Superintelligence “The Gentle Singularity” https://blog.samaltman.com/the-gentle-singularity The Secret Mathematicians Meeting Where They Tried To Outsmart AI https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/ O3-Pro Released https://x.com/sama/status/1932532561080975797 The most expensive o3-Pro Hello https://x.com/Yuchenj_UW/status/1932544842405720540 Eleven Labs v3 https://x.com/elevenlabsio/status/1930689774278570003 o3 regular drops in price by 80% - cheaper than GPT-4o https://x.com/edwinarbus/status/1932534578469654552 Open weights model taking a ‘little bit more time' https://x.com/sama/status/1932573231199707168 Meta Buys 49% of Scale AI + Alexandr Wang Comes In-House https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html Apple Underwhelms at WWDC Re AI https://www.cnbc.com/2025/06/09/apple-wwdc-underwhelms-on-ai-software-biggest-facelift-in-decade-.html BusinessWeek's Mark Gurman on WWDC https://x.com/markgurman/status/1932145561919991843 Joanna Stern Grills Apple https://youtu.be/NTLk53h7u_k?si=AvnxM9wefXl2Nyjn Midjourney Sued by Disney & Comcast https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/ 1X Robotics' Redwood https://x.com/1x_tech/status/1932474830840082498 https://www.1x.tech/discover/redwood-ai Redwood Mobility Video https://youtu.be/Dp6sqx9BGZs?si=UC09VxSx-PK77q-- Amazon Testing Humanoid Robots To Deliver Packages https://www.theinformation.com/articles/amazon-prepares-test-humanoid-robots-delivering-packages?rc=c3oojq&shared=736391f5cd5d0123 Autonomous Drone Beats Pilots For the First Time https://x.com/AISafetyMemes/status/1932465150151270644 Random GPT-4o Image Gen Pic
https://www.reddit.com/r/ChatGPT/comments/1l7nnnz/what_do_you_get/?share_id=yWRAFxq3IMm9qBYxf-ZqR&utm_content=4&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1 https://x.com/AIForHumansShow/status/1932441561843093513 Jon Finger's Shoes to Cars With Luma's Modify Video https://x.com/mrjonfinger/status/1932529584442069392
What's really stopping you from stepping into the life that's waiting for you? Many of us feel stuck between who we are now and who we know we could become. This episode explores the inner resistance we face when stepping into new versions of ourselves, and why waiting to "feel ready" might be holding us back. In this deeply personal reflection, you will discover: Why confidence is the result of action, not the prerequisite. How a single decision can shift your mindset and dissolve fear. The real power of saying "yes" before you feel ready, and what it unlocks within you. Listen now to gain the clarity, courage, and inspiration you need to take your next big step forward.
VALUABLE RESOURCES:
Click here to get in touch with Agi and discuss mentoring/coaching.
You can find the previous episodes of this series here: #489, #495, #501, #505
Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
(0:00) Intro (1:49) About the podcast sponsor: The American College of Governance Counsel(2:36) Introduction by Professor Anat Admati, Stanford Graduate School of Business. Read the event coverage from Stanford's CASI.(4:14) Start of Interview(4:45) What inspired Karen to write this book and how she got started with journalism.(8:00) OpenAI's Nonprofit Origin Story(8:45) Sam Altman and Elon Musk's Collaboration(10:39) The Shift to For-Profit(12:12) On the original split between Musk and Altman over control of OpenAI(14:36) The Concept of AI Empires(18:04) About concept of "benefit to humanity" and OpenAI's mission "to ensure that AGI benefits all of humanity"(20:30) On Sam Altman's Ouster and OpenAI's Boardroom Drama (Nov 2023) "Doomers vs Boomers"(26:05) Investor Dynamics Post-Ouster of Sam Altman(28:21) Prominent Departures from OpenAI (ie Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati, etc)(30:55) The Geopolitics of AI: U.S. vs. China(32:37) The "What about China" Card used by US companies to ward off regulation.(34:26) "Scaling at All Costs is not leading us in a good place"(36:46) Karen's preference on ethical AI development "I really want there to be more participatory AI development. And I think about the full supply chain of AI development when I say that."(39:53) Her biggest hope and fear for the future "the greatest threat of these AI empires is the erosion of democracy."(43:34) The case of Chilean Community Activism and Empowerment(47:20) Recreating human intelligence and the example of Joseph Weizenbaum, MIT (Computer Power and Human Reason, 1976)(51:15) OpenAI's current AI research capabilities: "I think it's asymptotic because they have started tapping out of their scaling paradigm"(53:26) The state (and importance of) open source development of AI. 
"We need things to be more open" (55:08) The Bill Gates demo on chatGPT acing the AP Biology test. (58:54) Funding academic AI research and the public policy question on the role of Government. (1:01:11) Recommendations for Startups and Universities. Karen Hao is the author of Empire of AI (Penguin Press, May 2025) and an award-winning journalist covering the intersections of AI & society. You can follow Evan on social media at: X: @evanepstein LinkedIn: https://www.linkedin.com/in/epsteinevan/ Substack: https://evanepstein.substack.com/ To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/ Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
[01:05:07:18 - 01:06:22:11] Libertarian concerns about government overreach: Libertarians like Rand Paul and Thomas Massie oppose the bill due to concerns over government spending, debt, and potential surveillance implications. [01:13:24:19 - 01:14:05:22] Threat to habeas corpus: The Trump administration considers suspending habeas corpus for migrants, raising fears of broader abuses against dissenters, like anti-Zionists. [01:29:24:20 - 01:31:22:11] Constitutional issues with Trump's tariffs: Trump's tariffs violate separation of powers, as the president lacks authority to impose them without congressional approval, citing legal precedents. [01:42:11:19 - 01:49:07:07] Australian mushroom murder trial: Erin Patterson is on trial for allegedly poisoning three in-laws with deathcap mushrooms, claiming it was an accidental attempt to enhance a bland lunch. [02:19:31:13 - 02:22:52:12] Rare earth shortage crisis: China's export restrictions on rare earth minerals threaten U.S. industries, potentially causing auto production shutdowns akin to the pandemic chip shortage, with experts warning of supply chain disruptions. [02:30:20:21 - 02:34:34:23] Ukraine's refusal to accept fallen soldiers: Ukraine declines to retrieve 6,000 identified fallen soldiers' bodies from Russia, likely to avoid paying promised family compensations, highlighting financial motives and Kursk incursion failures. [02:41:50:10 - 02:44:25:25] Ukraine's drone strikes and NATO tensions: Senator Tuberville criticizes Zelensky's drone attacks on Russian assets as attempts to draw NATO into a losing war, while U.S.
aid sustains Ukraine's resistance against territorial concessions. [03:03:23:16 - 03:05:25:03] Federal court restricts Naples Pride drag shows: The 11th Circuit Court rules Naples Pride Fest drag performances must be indoors and adults-only, citing public safety and rejecting First Amendment claims, aligning with Tennessee's regulatory approach. [03:13:20:29 - 03:15:33:09] Controversial cathedral performance in Germany: A performance featuring raw chickens in diapers at Paderborn Cathedral, part of Westphalia's 1250th anniversary, sparks outrage, prompting apologies for offending religious sentiments. [03:18:37:16 - 03:20:52:10] New theory challenges Big Bang: The "Black Hole Universe" theory suggests our universe exists inside a black hole's event horizon, formed after a collapse, aligning with relativity and quantum physics but shifting cosmological questions. [03:27:16:07 - 03:33:47:09] AI's threat to democracy and capitalism: AI creates a zero-sum game, likely undermining democracy first as corporatism consolidates power, replacing jobs with automation and eroding hope for upward mobility. [03:49:55:08 - 03:51:58:28] AI models lack AGI-level reasoning: Apple researchers find large language models mimic reasoning without generalizing, failing complex puzzles and showing inconsistent logic, far from achieving artificial general intelligence. Follow the show on Kick and watch live every weekday 9:00am EST - 12:00pm EST https://kick.com/davidknightshow Money should have intrinsic value AND transactional privacy: Go to https://davidknight.gold/ for great deals on physical gold/silver For 10% off Gerald Celente's prescient Trends Journal, go to https://trendsjournal.com/ and enter the code KNIGHT Find out more about the show and where you can watch it at TheDavidKnightShow.com If you would like to support the show and our family please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-show Or you can send a donation through Mail: David Knight, POB 994, Kodak, TN 37764 Zelle: @DavidKnightShow@protonmail.com Cash App at: $davidknightshow BTC to: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7 Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support.
Mark Zuckerberg, frustrated by Meta's lagging AI models, is hand-picking a secretive “superintelligence” dream team. He's courting researchers at his homes and betting billions to outbuild OpenAI in the race to AGI. And Snap announces its first ever consumer AR glasses will be available in 2026: sleek, AI-powered Specs that promise games, gesture control, and a digital life beyond your phone.Featured Guests: Jeff Becker, general partner, Antler | Jake O'Shea, principal, AntlerDOWNLOAD PUBLIC: Public.com/ventureInvest in everything—stocks, options, bonds, crypto. You can even earn some of the highest yields in the industry—like the 7% or higher yield you can lock in with a Bond Account. Public is a FINRA-registered, SIPC-insured platform that takes your investments as seriously as you do. Fund your account in five minutes or less at public.com/venture and get up to $10,000 when you transfer your old portfolio.All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Public Investing offers a High-Yield Cash Account where funds from this account are automatically deposited into partner banks where they earn interest and are eligible for FDIC insurance; Public Investing is not a bank. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1890144), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.A Bond Account is a self-directed brokerage account with Public Investing, member FINRA/SIPC. Deposits into this account are used to purchase 10 investment-grade and high-yield bonds. 
The 7%+ yield is the average, annualized yield to worst (YTW) across all ten bonds in the Bond Account, before fees, as of 5/15/2025. A bond's yield is a function of its market price, which can fluctuate; therefore, a bond's YTW is not “locked in” until the bond is purchased, and your yield at time of purchase may be different from the yield shown here. The “locked in” YTW is not guaranteed; you may receive less than the YTW of the bonds in the Bond Account if you sell any of the bonds before maturity or if the issuer defaults on the bond. Public Investing charges a markup on each bond trade. See our Fee Schedule. *Terms and Conditions apply.
This week, I'm speaking with Kevin Weil, Chief Product Officer at OpenAI, who is steering product development at what might be the world's most important company right now.We talk about:(00:00) Episode trailer(01:37) OpenAI's latest launches(03:43) What it's like being CPO of OpenAI(04:34) How AI will reshape our lives(07:23) How young people use AI differently(09:29) Addressing fears about AI(11:47) Kevin's "Oh sh!t" moment(14:11) Why have so many models within ChatGPT?(18:19) The unpredictability of AI product progress(24:47) Understanding model “evals”(27:21) How important is prompt engineering?(29:18) Defining “AI agent”(37:00) Why OpenAI views coding as a prime target use-case(41:24) The "next model test” for any AI startup(46:06) Jony Ive's role at OpenAI(47:50) OpenAI's hardware vision(50:41) Quickfire questions(52:43) When will we get AGI?Kevin's links:LinkedIn: https://www.linkedin.com/in/kevinweil/Twitter/X: @kevinweilAzeem's links:Substack: https://www.exponentialview.co/Website: https://www.azeemazhar.com/LinkedIn: https://www.linkedin.com/in/azharTwitter/X: https://x.com/azeemOur new show:This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET. You can tune in through Exponential View on Substack.Produced by supermix.io and EPIIPLUS1 Ltd.
Steven's years at OpenAI ... An insider's view of the AI boom ... The real meaning of “Feel the AGI” ... AI safety's rationalist kernel ... What's really driving the AI arms race(s)? ... Is stifling China's AI development smart? ... Dario Amodei's geopolitical naivete ... Heading to Overtime ...
Meta pushes deeper into the AGI race with a new superintelligence lab and high-profile hires, while Apple draws the curtain on Intel Macs and Microsoft ditches Seattle for its Build conference. Meanwhile, Google's AI-driven search is siphoning publisher traffic, Transsion is electrifying Africa with budget e-bikes, and Threads finally learns how to DM. Starring Sarah Lane, Tom Merritt, Amos, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!
If the GOP reform bill moves forward, lowering your adjusted gross income (AGI) won't just be helpful, it'll be more important than ever for student loan borrowers. We explore what could change, who's most at risk, and what steps you can take to protect your paycheck. From maxing out pre-tax retirement contributions and HSAs to leveraging self-employment and smarter tax planning, we'll walk you through practical ways to lower your AGI — and why it might be the biggest factor in your student loan repayment. Key moments: (04:18) Business owners may gain an edge with deductions if the GOP bill becomes law (07:53) Many haven't felt the full weight of student loan payments in a while — that may soon change (13:55) If this bill passes, 1099 or self-employed work might offer major financial advantages (16:05) Expect to lose 50–60% of your income to loans and taxes unless you lower your AGI Like the show? There are several ways you can help! Follow on Apple Podcasts, Spotify or Amazon Music Leave an honest review on Apple Podcasts Subscribe to the newsletter Feeling helpless when it comes to your student loans? Try our free student loan calculator Check out our refinancing bonuses we negotiated Book your custom student loan plan Get profession-specific financial planning Do you have a question about student loans? Leave us a voicemail here or email us at help@studentloanplanner.com and we might feature it in an upcoming show!
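The AGI-lowering levers described above can be sketched with some arithmetic. This is a rough illustration only, not tax advice: the poverty-line figure, protected-income multiplier, 10% repayment rate, and contribution amounts below are placeholder assumptions, and actual income-driven repayment rules vary by plan and year.

```python
# Hypothetical sketch: pre-tax contributions reduce adjusted gross income
# (AGI) dollar-for-dollar, and many income-driven repayment plans compute
# the payment as a share of income above a protected floor tied to AGI.
# All constants here are illustrative placeholders, not real plan terms.

def adjusted_gross_income(gross, pretax_401k=0.0, hsa=0.0):
    """AGI falls dollar-for-dollar with pre-tax retirement and HSA dollars."""
    return gross - pretax_401k - hsa

def idr_payment(agi, poverty_line=15_060, protected_mult=1.5, rate=0.10):
    """Simplified income-driven repayment: rate * (AGI - protected income).
    Poverty line, multiplier, and rate are assumptions for this sketch."""
    discretionary = max(0.0, agi - protected_mult * poverty_line)
    return rate * discretionary / 12  # monthly payment

# Same gross pay, two AGIs: one with no pre-tax contributions, one maxing
# hypothetical 401(k) and HSA amounts.
baseline = idr_payment(adjusted_gross_income(80_000))
reduced = idr_payment(adjusted_gross_income(80_000, pretax_401k=23_000, hsa=4_150))
print(f"monthly payment at full AGI:  ${baseline:,.2f}")
print(f"monthly payment, lower AGI:   ${reduced:,.2f}")
```

Under these made-up numbers, roughly $27K of pre-tax contributions cuts the monthly payment by a few hundred dollars, which is the mechanism the episode is pointing at.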
We're living through truly extraordinary times—not simply because things are changing, but because of how breathtakingly fast those changes are happening. Take artificial intelligence: it's no longer some futuristic buzzword from a sci-fi movie; it's already reshaping our lives, economies, and even how we relate to each other. But here's what's really mind-blowing: artificial general intelligence is just around the corner. This isn't the kind of gradual innovation we're used to—it's a complete overhaul. AGI promises to rewrite the rules of entire industries practically overnight, delivering changes more profound and rapid than anything humanity has ever experienced. Forget the Renaissance, the Industrial Revolution, or even the dawn of the internet—this transformation could eclipse them all, and do it faster than any of us can imagine. Parallel to the AI revolution, Bitcoin has had its own remarkable story. Just a little over a decade ago, it was an obscure digital experiment—dismissed by mainstream finance as a tech nerd's hobby, virtual Monopoly money with no real-world impact. Fast-forward to today, and Bitcoin has completely transformed. Countries like El Salvador now officially recognize Bitcoin as legal tender. Sovereign wealth funds—from Singapore to the Middle East—are quietly stacking it into their national reserves. Big corporations like MicroStrategy have turned conventional treasury management upside down, boldly choosing Bitcoin as their primary reserve asset. Bitcoin's journey from fringe curiosity to essential financial infrastructure underscores a major shift in how we store, exchange, and even define value worldwide. And it's not just technology and finance that are seeing these seismic shifts; geopolitics and economic strategies are also entering uncharted waters. With the Trump administration back in power, we're witnessing a total rewrite of the traditional economic playbook. 
Tariffs, once cautiously applied economic tools, are now wielded boldly, reshaping global alliances and challenging decades-old partnerships. Long-standing allies like Canada and Europe now find themselves in more transactional relationships, while surprising new economic partnerships emerge based purely on pragmatism. This rapidly evolving landscape is generating unprecedented uncertainty—but also enormous opportunity. So how do you make sure you end up on the winning side of this historic transformation? By actively educating yourself, staying ahead of the curve, and positioning yourself to prosper. I've always made it my mission to anticipate where things are headed—and more importantly, to share that vision with you. Back in 2017, I first introduced Bitcoin to you when it traded below $5K. Today, with Bitcoin over $100K, I'm more convinced than ever that we'll see it hit $1 million within the next five years. The conversations I'm having make it seem inevitable. It's those conversations you need to be a part of—either having them yourself or listening to them through podcasts like mine. A good place to start is this week's Wealth Formula Podcast, where I talk with Anthony Pompliano, better known as Pomp.
In episode 1876, Jack and Miles are joined by co-host of The Bechdel Cast, Caitlin Durante, to discuss… AMC Wants To Put More Ads Before Movies, “Rainbow Capitalism” Is Back To Just “Capitalism”, A.I. Is Already (Secretly) Making Hollywood Sh*ttier and more! AMC Wants To Put More Ads Before Movies Indian man awarded damages over length of commercials before movie screening Big brands are pulling back on Pride merchandise and events this year The Business End of Pride What Happened to All the Corporate Pride Logos? Target, Macy’s, and Walmart among retailers promoting Father’s Day over Pride Month These 14 corporations have stopped or scaled back sponsorship of LGBTQ+ Pride events 'Cowardcore:' Everyone Is Noticing The Same Thing About Target's Pride Merch Big brands distance themselves from Pride events amid DEI rollback Burger King's Pride Whoppers Come With Two Tops or Two Bottoms Everyone Is Already Using AI (And Hiding It) Natasha Lyonne to Direct Feature ‘Uncanny Valley’ Combining ‘Ethical’ AI and Traditional Filmmaking Techniques Natasha Lyonne Talks ‘Uncanny Valley’ Directorial Debut, Use Of “Copyright-Clean” AI & Danger Of AGI Natasha Lyonne reveals David Lynch was a supporter of AI This AI Animation Studio Believes It Can Convince All the Skeptics I’m Not Convinced Ethical Generative AI Currently Exists LISTEN: CPR by Wet LegSee omnystudio.com/listener for privacy information.
Sundar Pichai is CEO of Google and Alphabet. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep471-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/sundar-pichai-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Sundar's X: https://x.com/sundarpichai Sundar's Instagram: https://instagram.com/sundarpichai Sundar's Blog: https://blog.google/authors/sundar-pichai/ Google Gemini: https://gemini.google.com/ Google's YouTube Channel: https://www.youtube.com/@Google SPONSORS: To support this podcast, check out our sponsors & get discounts: Tax Network USA: Full-service tax firm. Go to https://tnusa.com/lex BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex Shopify: Sell stuff online. Go to https://shopify.com/lex AG1: All-in-one daily nutrition drink. 
Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (00:07) - Sponsors, Comments, and Reflections (07:55) - Growing up in India (14:04) - Advice for young people (15:46) - Styles of leadership (20:07) - Impact of AI in human history (32:17) - Veo 3 and future of video (40:01) - Scaling laws (43:46) - AGI and ASI (50:11) - P(doom) (57:02) - Toughest leadership decisions (1:08:09) - AI mode vs Google Search (1:21:00) - Google Chrome (1:36:30) - Programming (1:43:14) - Android (1:48:27) - Questions for AGI (1:53:42) - Future of humanity (1:57:04) - Demo: Google Beam (2:04:46) - Demo: Google XR Glasses (2:07:31) - Biggest invention in human history PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips