What would a world of self-driving cars look like? How would it change shopping, transportation, and life more broadly? A decade ago, many people were asking these questions, as it looked like a boom in autonomous vehicles was imminent. But in the last few years, other technologies—crypto, the metaverse, AI—have stolen the spotlight. Meanwhile, self-driving cars have quietly become a huge deal in the U.S. Waymo One, a commercial ride-hailing service from Google spin-off Waymo, has been rolled out in San Francisco, Phoenix, Los Angeles, and Austin. Every week, Waymo provides 150,000 autonomous rides. Tesla is also competing to build a robo-taxi service and to develop self-driving capabilities. There are two reasons why I've always been fascinated by self-driving cars. The first is safety. There are roughly 40,000 vehicular deaths and 6 million accidents in America every year. It's appropriate to be concerned about the safety of computer-driven vehicles. But what about the safety of human-driven vehicles? A technology with the potential to save thousands of lives and prevent millions of accidents is a huge deal. Second, the automobile was arguably the most important technology of the 20th century. The invention of the internal combustion engine transformed agriculture, personal transportation, and supply chains. It made the suburbs possible. It changed the spatial geometry of the city. It expanded demand for fossil fuels and created some of the most valuable companies in the world. The reinvention of last century's most important technology is a massive, massive story. And the truth is, I'm not sure that today's news media—a category in which I include myself—has done an adequate job representing just how game-changing self-driving technology at scale could be. Today's guest is Timothy Lee, author of the Substack publication Understanding AI.
Today I asked him to help me understand self-driving cars—their technology, their industry, their possibilities, and their implications. If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com.
Host: Derek Thompson
Guest: Timothy Lee
Producer: Devon Baroldi
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Is truth just a matter of perspective, or is there something more?
How can you build your life on this strong, unshakable foundation? It starts by knowing God through His Word. The more we study the scripture, the deeper our relationship with Him grows, helping us recognize His voice and trust His guidance. As we understand God's character and promises, our confidence in following His lead grows, and, as Proverbs 3:5-6 teaches, we will not lean on our own understanding but trust in God's perfect will for us. Jesus Christ is our chief cornerstone. In Him is a sure foundation, one that will give us stability and alignment with God's will. #SGChurch #ApostolicPentecostal #SGLife #SGFamilies
Life can feel a lot like building a house. But what's the foundation built on? Jesus talks about this in Matthew 7:24-27, where two builders face storms. The difference? One built on a solid foundation, strong and secure. We all face storms that test the strength of our foundations, so it's worth asking: what's keeping you steady? As we walk through this season of consecration, let's prioritise building our lives on something that lasts—God's unshakable Word, and living it out in obedience.
Have you ever found yourself feeling insecure? Insecurity leaves us feeling exposed, uncovering hidden fears and the emotions connected to them. We are left wanting, wondering: where is the abundant life that God promised? This need for security reflects a deeper spiritual truth: we were created to live in relationship with a secure and faithful God. The world is volatile, but we can find refuge in the One who holds all things. As we enter a season of consecration to the Lord, there is a promise of unshakable security in Christ. It's ours to have, and it starts with a choice to put Him first ❤️ #SGchurch #churchcommunity #SGlocal #SGFamilies #apostolicpentecostal
Where did all of the hype around self-driving cars go? Timothy Lee joins the podcast to discuss the state of the self-driving car industry, technological and regulatory obstacles to getting self-driving cars in your city, and where he sees the auto industry going in 10 years. To get bonus episodes, support us at patreon.com/newliberalpodcast or https://cnliberalism.org/become-a-member Got questions? Send us a note at mailbag@cnliberalism.org. Follow us at: https://twitter.com/CNLiberalism https://cnliberalism.org/ Join a local chapter at https://cnliberalism.org/become-a-member/
Is pain always harmful? Think of pruning—it may be a painful and uncomfortable process, yet it is essential for growth. God's divine pruning isn't meant to hurt us but to help us bear more fruit in Him. When we abide in God and stay connected to Him, we allow Him to shape us and change us into who we're meant to be. Trust the process and let God be your vinedresser.
A lie we often believe: As long as we don't hurt anyone, we can do what we want.
Is your “goodness” truly from God or just a reflection of human effort? Living for God on our own terms isn't just hard—it's impossible. The fruit of the Spirit is not produced on the strength of our own wills. To unlock the key to overcoming temptation and true inward transformation, start with surrender to the Holy Spirit. True change can only take place when we let God be in charge. Have we allowed Him to take over our hearts, motivations, and desires?
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #75: Math is Easier, published by Zvi on August 1, 2024 on LessWrong. Google DeepMind got a silver medal at the IMO, only one point short of the gold. That's really exciting. We continuously have people saying 'AI progress is stalling, it's all a bubble' and things like that, and I always find it remarkable how little curiosity or patience such people are willing to exhibit. Meanwhile GPT-4o-Mini seems excellent, OpenAI is launching proper search integration, by far the best open weights model got released, we got an improved MidJourney 6.1, and that's all in the last two weeks. Whether or not GPT-5-level models get here in 2024, and whether or not they arrive on a given schedule, make no mistake: it's happening. This week also had a lot of discourse and events around SB 1047 that I failed to avoid, resulting in not one but four sections devoted to it. Dan Hendrycks was baselessly attacked - by billionaires with massive conflicts of interest that they admit are driving their actions - as having a conflict of interest because he had advisor shares in an evals startup, rather than having earned the millions he could have easily earned building AI capabilities. So Dan gave up those advisor shares, for no compensation, to remove all doubt. Timothy Lee gave us what is clearly the best skeptical take on SB 1047 so far. And Anthropic sent a 'support if amended' letter on the bill, with some curious details. This was all while we are on the cusp of the final opportunity for the bill to be revised - so my guess is I will soon have a post going over whatever the final version turns out to be and presenting closing arguments. Meanwhile Sam Altman tried to reframe broken promises while writing a jingoistic op-ed in the Washington Post, but says he is going to do some good things too. And much more.
Oh, and also: AB 3211 unanimously passed the California Assembly and would, among other things, effectively ban all existing LLMs. I presume we're not crazy enough to let it pass, but I made a detailed analysis to help make sure of it.

Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. They're just not that into you.
4. Language Models Don't Offer Mundane Utility. Baba is you and deeply confused.
5. Math is Easier. Google DeepMind claims an IMO silver medal, mostly.
6. Llama Llama Any Good. The rankings are in, as are a few use cases.
7. Search for the GPT. Alpha tests begin of SearchGPT, which is what you think it is.
8. Tech Company Will Use Your Data to Train Its AIs. Unless you opt out. Again.
9. Fun With Image Generation. MidJourney 6.1 is available.
10. Deepfaketown and Botpocalypse Soon. Supply rises to match existing demand.
11. The Art of the Jailbreak. A YouTube video that (for now) jailbreaks GPT-4o-voice.
12. Janus on the 405. High weirdness continues behind the scenes.
13. They Took Our Jobs. If that is even possible.
14. Get Involved. Akrose has listings, OpenPhil has an RFP, US AISI is hiring.
15. Introducing. A friend in venture capital is a friend indeed.
16. In Other AI News. Projections of when it's incrementally happening.
17. Quiet Speculations. Reports of OpenAI's imminent demise, except, um, no.
18. The Quest for Sane Regulations. Nick Whitaker has some remarkably good ideas.
19. Death and or Taxes. A little window into insane American anti-innovation policy.
20. SB 1047 (1). The ultimate answer to the baseless attacks on Dan Hendrycks.
21. SB 1047 (2). Timothy Lee analyzes the current version of SB 1047, has concerns.
22. SB 1047 (3): Oh Anthropic. They wrote themselves an unexpected letter.
23. What Anthropic's Letter Actually Proposes. Number three may surprise you.
24. Open Weights Are Unsafe And Nothing Can Fix This. Who wants to ban what?
25. The Week in Audio. Vitalik Buterin, Kelsey Piper, Patrick McKenzie.
26. Rheto...
Discover strength and steadfastness in adversity as we explore the analogy of an unshakeable foundation in the face of life's storms. Jesus Christ, our cornerstone, offers direction and security as we build our lives upon Him. Join us in finding the only way to remain steadfast in our faith in life's storms.
This is the truth of Jesus' resurrection - despite His burial confirming His death, three days later, He rose from the grave, proving He was more than just a man - He was God in human form. Jesus' resurrection defeated death and offered hope to humanity. Find out how His sacrifice on the cross provides forgiveness for our sins and promises eternal life to all who believe. Join us as we explore the life-changing power of Jesus' resurrection and the victory it brings over sin and death.
There are followers and observers - which one are you? Observers watch from the sidelines. Followers pursue a real and living relationship with Jesus. On Palm Sunday, Jesus didn't offer rules; He offered a life-changing relationship. This message challenges us to move from observers to true followers so that we can be a channel of God's love and power to this world.
Mike Ferguson in the Morning 04-08-24 Timothy Lee, Senior Vice President of Legal and Public Affairs at the Center for Individual Freedom, talks about the Biden administration's push to expand social media censorship and issues surrounding net neutrality. Timothy's article here: https://thefederalist.com/2024/04/04/net-neutrality-could-expand-bidens-social-media-censorship-to-the-whole-internet/ (https://cfif.org/v/) MORNING NEWS DUMP: Eclipse Day 2024 has arrived! The Mayorkas impeachment is back in the news. Another murderer is scheduled to be executed in Missouri this week. Maryland Gov. Wes Moore updates the bridge collapse in Baltimore. The Battlehawks' home opener set a new UFL attendance record with 40,317 fans as they beat the Arlington Renegades 27-24. Cardinals lost to the Miami Marlins 10-3 but still won 2 of 3 from the Marlins. Up next: Philadelphia Phillies tonight at 6:45 at Busch Stadium. Blues beat the Ducks in Anaheim 6-5 in a shootout. Up next: hosting the Chicago Blackhawks at Enterprise Center on Wednesday night at 7pm. NewsTalkSTL website: https://newstalkstl.com/ Rumble: https://rumble.com/c/NewsTalkSTL Twitter/X: https://twitter.com/NewstalkSTL Livestream 24/7: http://bit.ly/newstalkstlstream See omnystudio.com/listener for privacy information.
Mike Ferguson in the Morning 04-05-24 Timothy Lee, Senior Vice President of Legal and Public Affairs at the Center for Individual Freedom, talks about the Biden administration's push to expand social media censorship and issues surrounding net neutrality. Timothy's article here: https://thefederalist.com/2024/04/04/net-neutrality-could-expand-bidens-social-media-censorship-to-the-whole-internet/ (https://cfif.org/v/) NewsTalkSTL website: https://newstalkstl.com/ Rumble: https://rumble.com/c/NewsTalkSTL Twitter/X: https://twitter.com/NewstalkSTL Livestream 24/7: http://bit.ly/newstalkstlstream See omnystudio.com/listener for privacy information.
The Marketing Mix: Thought-starters for B2B Business Leaders
Are we ready to take advantage of AI in Marketing yet? Sam Altman, CEO of OpenAI, is quoted as saying that AI could automate 95% of tasks currently performed by marketing agencies. I don't agree with the number, but it's a bold enough statement to pay attention to. And, for sure, AI is going to have a significant impact on how we “do” marketing over the next few years. So in this episode, I talk through the options that marketers should consider as they start to look into AI; share some examples and use cases of how to incorporate AI into your daily workflows; and consider where we might be with AI tools a year from now. Really, though, this episode is an encouragement to “Just Do It” (to use a turn of phrase!). If you're not yet using AI, it's time to roll up your sleeves and start to play around with AI tools - whether that's a standalone AI assistant like ChatGPT or Claude, or tapping into the AI features of the software you're already using. And we're giving you some pointers to get your journey started. Plus, since the subject of Large Language Models and AI in Marketing is constantly changing, I share some of the resources I'm using to stay ahead of the curve! See below:

Timestamps:
04:35 What to think about when you think about AI
07:05 Which AI Assistant should you use?
12:10 Three use cases to get you started
19:20 How is this going to change the way we work?

Articles and Newsletters mentioned in this episode:
Sam Altman's “95%” quote. Link
“Best Practices for Working with Large Language Models” - The New Stack column, by Jon Udell. Link
“Claude 3: ChatGPT finally has a serious rival” - Understanding AI newsletter by Timothy Lee. Link
“Your guide to Google Gemini and Claude 3.0, compared to ChatGPT” - AI for Good newsletter by TarenSK. Link
“Which AI Should I Use?” - One Useful Thing newsletter by Ethan Mollick. Link
“Microsoft promises Copilot will be a 'moneymaker' in the long term” - The Register. Link
The Gartner Hype Cycle. Link
Fuggedaboutit Cocktails “Bot” by Michael A McConachie - Link

The Mobster Mirage Cocktail Recipe:
[Verbatim, from the Fuggedaboutit Cocktails custom GPT]
1 oz Whisky: Pick a good one, something with character but not too overbearing.
1/2 oz Cointreau: For a hint of sweet citrus to brighten things up.
1/2 oz Dark Rum: Adds depth and a bit of mystery.
1/2 oz Fresh Lime Juice: To cut through the sweetness and add some zing.
Dash of Simple Syrup: Only if you like it a bit sweeter, see?
Garnish: Lime wheel or a twist, for that touch of class.

Chill Your Glass: Get a nice cocktail glass and fill it with ice or stick it in the freezer for a bit to get it nice and frosty.
Mix Ingredients: In a shaker, combine the whisky, Cointreau, dark rum, fresh lime juice, and simple syrup if you're using it. Fill that shaker with ice.
Shake It Up: Shake it like you mean it, but with respect.
Strain: Strain it into your chilled glass. You want it smooth, with no ice chunks crashin' the party.
Garnish: Add that lime wheel or twist. It's like the suit jacket on a well-dress
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #51: Altman's Ambition, published by Zvi on February 21, 2024 on LessWrong. [Editor's note: I forgot to post this to WordPress on Thursday. I'm posting it here now. Sorry about that.] Sam Altman is not playing around. He wants to build new chip factories in the decidedly unsafe and unfriendly UAE. He wants to build up the world's supply of energy so we can run those chips. What does he say these projects will cost? Oh, up to seven trillion dollars. Not a typo. Even scaling back the misunderstandings, this is what ambition looks like. It is not what safety looks like. It is not what OpenAI's non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI's strategy is invalidated by this move. I have spun off reactions to Gemini Ultra to their own post.

Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Can't go home? Declare victory.
Language Models Don't Offer Mundane Utility. Is AlphaGeometry even AI?
The Third Gemini. Its own post, link goes there. Reactions are mixed.
GPT-4 Real This Time. Do you remember when ChatGPT got memory?
Deepfaketown and Botpocalypse Soon. Bot versus bot, potential for AI hacking.
They Took Our Jobs. The question is, will they also take the replacement jobs?
Get Involved. A new database of surprising AI actions.
Introducing. Several new competitors.
Altman's Ambition. Does he actually seek seven trillion dollars?
Yoto. You only train once. Good luck! I don't know why. Perhaps you'll die.
In Other AI News. Andrej Karpathy leaves OpenAI, self-discover algorithm.
Quiet Speculations. Does every country need their own AI model?
The Quest for Sane Regulation. A standalone post on California's SB 1047.
Washington D.C. Still Does Not Get It. No, we are not confused about this.
Many People are Saying. New Yorkers do not care for AI, want regulations.
China Watch. Not going great over there, one might say.
Roon Watch. If you can.
How to Get Ahead in Advertising. Anthropic Super Bowl ad.
The Week in Audio. Sam Altman at the World Government Summit.
Rhetorical Innovation. Several excellent new posts, and a protest.
Please Speak Directly Into this Microphone. AI killer drones now?
Aligning a Smarter Than Human Intelligence is Difficult. Oh Goody.
Other People Are Not As Worried About AI Killing Everyone. Timothy Lee.
The Lighter Side. So, what you're saying is…

Language Models Offer Mundane Utility
The Washington D.C. government is exploring using AI for mundane utility. Deliver your Pakistani presidential election victory speech while you are in prison. Terence Tao suggests a possible application for AlphaGeometry. Help rescue your Factorio save from incompatible mods written in Lua. Shira Ovide says you should use it to summarize documents, find the exact right word, get a head start on writing something difficult, dull or unfamiliar, or make cool images you imagine, but not to use it to get info about an image, define words, identify synonyms, get personalized recommendations or to give you a final text. Her position is mostly that this second set of uses is unreliable. Which is true, and you do not want to exclusively or non-skeptically rely on the outputs, but so what? Still seems highly useful.

Language Models Don't Offer Mundane Utility
AlphaGeometry is not about AI? It seems that what AlphaGeometry is mostly doing is combining DD+AR, essentially labeling everything you can label and hoping the solution pops out. The linked post claims that doing this without AI is good enough in 21 of the 25 problems that it solved, although a commenter notes the paper seems to claim it was somewhat less than that.
If it was indeed 21, and to some extent even if it wasn't...
In Episode #16, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays: https://www.understandingai.org/p/why... https://www.understandingai.org/p/why... Tim was not prepared to discuss this work, which is when things started to get off the rails. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (We didn't even get halfway through, lol. YouTube won't let me put all of them, so I'm just putting the second-essay questions.)

OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23:
- You find chess a striking example of how AI will not take over the world. But I'd like to talk about AI safety researcher Steve Omohundro's take on chess.
- He says if you had an unaligned AGI that you asked to get better at chess, it would first break into other servers to steal computing power so it would be better at chess. Then when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
- Where is he wrong?
- You wrote: "Think about a hypothetical graduate student. Let's say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, but they don't contain very much knowledge she doesn't already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they 'run out of data,' but because they reached the frontiers of human knowledge."
- In this you seem to assume that any one human is capable of mastering all of the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
- This is ludicrous to me. You think humans are far too special.
- AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
- How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
- Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
- A human who has read 1,000 books in one area can compete with an AGI that has read millions of books in thousands of areas for knowledge? Really?
- You wrote: "AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It's locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world."
- Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
- Does this claim rest on the security protocols of the big AI companies?
- Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI's ability to spot and exploit vulnerabilities in human-written code is widely predicted.
- Let's see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?
In Episode #16 TRAILER, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays: https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent https://www.understandingai.org/p/why-im-not-worried-about-ai-taking Tim was not prepared to discuss this work, which is when things started to get off the rails. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
In Episode #15, AI Risk Superbowl I: Conner vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and accelerationist cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff's help, reveals the truth about the e/acc movement: it's anti-human at its core. This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

Resources:
Machine Learning Street Talk - YouTube
Full Debate: e/acc Leader Beff Jezos vs Doomer Connor Leahy
How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407

Next week's guest Timothy Lee's website and related writing:
https://www.understandingai.org/
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...
Preached at Deep Calleth Deep Conference 2023, 7 December Evening Service
Join us as we thank God in faith for all that He will do in our lives, families, churches, countries and the world this DCD! Behold His Glory podcast is a 40-episode devotional designed to help us slow down and experience God in the fullness of His glory. Join Pastor Timothy Lee as he talks about what it means to behold God, shares with us practical ways of beholding God in our daily lives and leads us into experiencing the transformative power of God's glory.
In today's episode, we pray for unity, partnerships and networks to be formed amongst our delegates.
Pray together with us as we take authority over our families, praying protection, deliverance and healing over them.
Today, we pray for unity in the family and for each unit to fulfil its call to marriage and to raising the next generation.
Join us as we continue to pray for God's divine destiny to be fulfilled in our children's lives.
In today's episode, we pray for our children to be awakened, equipped and transformed to reach their generation.
Today, we pray for faith to be released upon our gatherings, empowering us to believe God for supernatural possibilities.
Self-driving cars have long been one of the most exciting potential applications of advanced artificial intelligence. Contrary to popular belief, humans are actually very good drivers, but even so, well over a million people die on the roads each year. Globally, road accidents are the most common cause of death for people between 12 and 24 years old.

Google started its self-driving car project in January 2009 and spun out a separate company, Waymo, in 2016. Expectations were high. Many people hoped that within a few years, humans would no longer need to drive. Some of us also thought that the arrival of self-driving cars would signal to everyone else that AI was our most powerful technology, and would get people thinking about the technological singularity. They would, in other words, be the "canary in the coal mine".

The problem of self-driving turned out to be much harder than expected, and insofar as most people think about self-driving cars at all today, they probably think of them as a technology that was over-hyped and failed. It turned out that chatbots, and in particular GPT-4, would be the canary in the coal mine instead.

But as so often happens, the hype was not wrong; only the timing was. Waymo and Cruise (part of GM) now operate paid-for taxi services in San Francisco and Phoenix, and they are demonstrably safer than human drivers. Chinese companies are also pioneering the technology.

One man who knows much more about this than most is our guest today, Timothy Lee, a journalist who writes the newsletter "Understanding AI". He was previously a journalist at Ars Technica and the Washington Post, and he has a master's degree in computer science.
In recent weeks, Timothy has published some carefully researched and insightful articles about the state of the art in self-driving cars.

Selected follow-ups:
https://www.UnderstandingAI.org/

Topics addressed in this episode include:
*) The two main market segments for self-driving cars
*) Constraints adopted by Waymo and Cruise which allowed them to make progress
*) Options for upgrading the hardware in a self-driven vehicle
*) Some local opposition to self-driving cars in San Francisco
*) A safety policy: when uncertain, stop, and phone home for advice
*) Support from the State of California - and from other US States
*) Comparing accident statistics: human drivers versus self-driving
*) Why self-driving cars don't require AGI (Artificial General Intelligence)
*) Reasons why self-driving cars cannot be remotely tele-operated
*) Prospects for self-driven freight transport running on highways
*) The company Nuro that delivers pizza and other items by self-driven robots
*) Another self-driving robot company: Starship ("your local community helpers")
*) The Israeli company Mobileye - acquired by Intel in 2017
*) Friction faced by Chinese self-driving companies in the US and elsewhere
*) Different possibilities for the speed at which self-driving solutions will scale up
*) Potential social implications of wider adoption of self-driving solutions
*) Consequences of fatal accidents
*) Dangerous behaviour from safety drivers
*) The special case of Tesla FSD (assisted "Full Self-Driving") and Elon Musk
*) The future of recreational driving
*) An invitation to European technologists

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Today we have Timothy Lee back on the podcast to talk to us about unity. He is the pastor of Tabernacle of Joy in Singapore - an amazing church. He shared his story back on episode 52, if you want to check that out.

In this episode, he talks about unity, the difference between patience and tolerance, why we can't unify with anything and everything, the importance of being a peacemaker, and much more.

---------
Watch every episode - https://www.youtube.com/c/TheHackaPodcast
Follow us on social:
Instagram - https://www.instagram.com/thehackapod/
Facebook - https://www.facebook.com/hackaorg
TikTok - https://www.tiktok.com/@thehackapod