Jerry Glenn, a futurist, serves as the executive director of the Millennium Project and authors an annual publication, "State of the Future." He was executive director of the American Council for the United Nations University and deputy director of Partnership for Productivity International. The State of the Future 20.0 report is the largest, most comprehensive document covering the 15 global challenges that affect the world. It is a tool for the UN Council of Presidents of the General Assembly, an organization that helps the 193 UN Member States determine their logical role in dealing with one of the thorniest challenges: AGI (Artificial General Intelligence). Managing the transition to AGI is the most difficult management problem humanity has ever faced. A few other challenges to confront include zero-sum geopolitical power struggles; the climate crisis; and global collective intelligence systems for water, energy, food, economics, education, gender, crime, ethics, and demographics.
Given the big news today about the partnership between Sam Altman and Jony Ive, we thought we would share this recent episode. In this episode of the Moonshots Podcast, hosts Mike and Mark dive deep into the world of artificial intelligence, focusing on Sam Altman, the CEO of OpenAI. The discussion features insights from various interviews and talks, including Bill Gates' interview with Sam Altman on the transformative power of ChatGPT and Sam's conversations with Lex Fridman and Craig Cannon. Listeners will also explore key lessons from Sam's time at Y Combinator, providing valuable guidance for aspiring entrepreneurs.

Episode Description: In this compelling episode, Mike and Mark explore the groundbreaking work of Sam Altman, CEO of OpenAI, and his vision for artificial intelligence. The episode features highlights from Bill Gates' interview with Sam Altman on the power of ChatGPT, revealing the potential and impact of this AI application. They also delve into Sam's discussion with Lex Fridman about AGI and the importance of staying true to one's values amidst competition, particularly with tech giants like Google. Additionally, the hosts share three essential lessons from Sam's Y Combinator classes on how to start a successful startup. The episode concludes with insights from Sam's talk with Craig Cannon on the importance of focus and the pitfalls of the deferred life plan. This episode is a must-listen for anyone interested in AI, entrepreneurship, and the future of technology.

Become a member to support the Moonshots Podcast and access exclusive content: Join us on Patreon.

Links:
- Podcast Episode
- Article on Sam Altman, OpenAI's Spectacular CEO
- YouTube: Sam Altman Talks OpenAI and AGI

Expanded Key Concepts and Insights:
- The Power of ChatGPT: Explore how ChatGPT is revolutionizing the AI landscape, as discussed in Bill Gates' interview with Sam Altman.
- Navigating AGI and Competition: Understand the challenges and strategies of competing in the AI industry, especially against giants like Google, as shared in Sam's conversation with Lex Fridman.
- Starting a Startup: Learn three critical lessons for aspiring entrepreneurs from Sam Altman's Y Combinator teachings.
- Focus and Ambition: Gain insights on the importance of focus and structuring ambitions effectively, avoiding the pitfalls of the deferred life plan, as discussed in Sam's talk with Craig Cannon.

About Sam Altman: Sam Altman is the CEO of OpenAI, a leading artificial intelligence research lab. Before joining OpenAI, Sam was the president of Y Combinator, where he played a pivotal role in nurturing numerous successful startups. His work at OpenAI focuses on advancing artificial intelligence to benefit humanity, ensuring that AGI (Artificial General Intelligence) aligns with human values.

About Moonshots Podcast: The Moonshots Podcast, hosted by Mike and Mark, delves into the minds of innovators and visionaries who are making significant strides in various fields. Each episode offers deep insights into the strategies, mindsets, and tools these trailblazers use to achieve extraordinary success. The podcast aims to inspire and equip listeners with actionable insights to pursue their moonshot ideas.

Thanks to our monthly supporters: Emily Rose Banks Malcolm Magee Natalie Triman Kaur Ryan N. 
Marco-Ken Möller Mohammad Lars Bjørge Edward Rehfeldt III 孤鸿 月影 Fabian Jasper Verkaart Andy Pilara ola Austin Hammatt Zachary Phillips Mike Leigh Cooper Gayla Schiff Laura KE Krzysztof Roar Nikolay Ytre-Eide Stef Roger von Holdt Jette Haswell venkata reddy Ingram Casey Ola rahul grover Evert van de Plassche Ravi Govender Craig Lindsay Steve Woollard Lasse Brurok Deborah Spahr Barbara Samoela Jo Hatchard Kalman Cseh Berg De Bleecker Paul Acquaah MrBonjour Sid Liza Goetz Konnor Ah kuoi Marjan Modara Dietmar Baur Bob Nolley ★ Support this podcast on Patreon ★
In hour 1 of The Mark Reardon Show, Mark and the crew discuss Pete Rose's ban from baseball being lifted, the Cardinals' nine-game win streak, and more. Mark is later joined by Salena Zito, a columnist for the Pittsburgh Post-Gazette and the Washington Examiner. She discusses her thoughts on the Democratic party's torching of Senator John Fetterman and more. They wrap up the hour discussing St. Louis Mayor Cara Spencer's comments on the chaotic and reckless driving occurring in a local neighborhood, as well as the lack of police presence in response. In hour 2, Sue hosts "Sue's News," where she discusses the latest trending entertainment news, this day in history, the random fact of the day, and much more. Mark then hosts "Telephone Tuesday," where he takes calls on several different subjects, including thoughts on Cara Spencer, former Democrats moving away from the party, and more. In hour 3, Mark is joined by Olivia Krolczyk, an ambassador with the Riley Gaines Center at the Leadership Institute. They discuss some of the most recent examples around the country of transgender men competing in women's sports and more. Mark is then joined by George Rosenthal, a co-owner of ThrottleNet. He discusses AGI (Artificial General Intelligence): What do we need to know about it? How does it differ from AI? Can it be controlled, and is that the main problem? Should we embrace it or be terrified by its potential? All of this and much more.
OpenAI mighta just killed Photoshop (as we know it). ☠️ Microsoft released reasoning agents that are unmatched.
Artificial Intelligence (AI) is no longer a futuristic concept—it's here, and it's transforming industries, investments, and daily life. Despite the advantages in processing and productivity, many still have concerns about using AI in their everyday lives at home and in business. This raises the question: Should AI be something we fear, or is it just new technology that we should embrace? Here to help answer those questions is Jay Jacobs. Jay is BlackRock's Head of Thematic and Active ETFs, where he oversees the overall product strategy, thought leadership, and client engagement for the firm's thematic and active ETF businesses. We're thrilled to tap into his expertise to break down the evolution of AI and LLMs (Large Language Models), how they're impacting the investment landscape, and what the future looks like in the AI and digital world. In our conversation, we discussed the rapid development of artificial intelligence and its potential to revolutionize sectors like finance, healthcare, and even customer service. You'll also hear Jay describe how AI has evolved into a race toward Artificial General Intelligence (AGI), its ability to increase our productivity on a personal level, and whether the fears surrounding AI's risks are warranted.

In this podcast interview, you'll learn:
- How AI has evolved from Clippy in Microsoft Word to ChatGPT and other LLMs.
- Why research and investing in AI is accelerating and what's fueling its rapid growth.
- Why access to data, computing power, and infrastructure are the new competitive advantages in the AI arms race.
- How businesses are leveraging AI to boost efficiency and customer service.
- The race to AGI (Artificial General Intelligence)—what it means and how close we really are.
- How synthetic data and virtual environments are shaping the next frontier of AI development.

Want the Full Show Notes? To get access to the full show notes, including audio, transcripts, and links to all the resources mentioned, visit SHPfinancial.com/podcast

Connect With Us on Social: Facebook | LinkedIn | YouTube
Is Europe about to become the World's Third Tech Superpower? In our regular That Was The Week round-up of tech news, Keith Teare says NO!, arguing that the EU's increasingly aggressive regulation of Apple and Google will relegate Europe to increasing irrelevance. But I'm not so sure. Just as Europe is finally establishing its military independence from Washington, so I suspect the same will eventually become true of technology. Sure, Europe will probably never develop big tech companies with the global muscle of Tencent or Google. But, in the long run, as Europe establishes economic and military autonomy from the United States, I expect the appearance of native European tech companies that will, at least, be competitive with Chinese and American corporations.

Here are our 5 KEEN ON AMERICA takeaways in this conversation with Keith Teare:

* Europe's regulatory approach to tech is viewed skeptically: Keith sees the European Commission's attempts to regulate American tech companies (particularly Apple) as counterproductive, potentially driving innovation away rather than fostering it. We discuss whether Europe's regulatory stance will lead to either excessive red tape or the development of state-subsidized European tech alternatives.

* AI continues to advance rapidly: Our conversation repeatedly references how "AI marches on" as an inevitability. We discuss Sam Altman's view that AGI (Artificial General Intelligence) will become ubiquitous like electricity or transistors, diffusing into everything and becoming cheap and widely available.

* A possible cultural shift in tech and politics: We discuss an article by Jaye Chen about why the political right is winning over STEM graduates. She suggests that progressive movements have positioned tech as problematic while conservative messaging portrays technology as an asset, making it more appealing to STEM grads like Chen.

* Tech industry geography is changing: Keith emphasizes that the "center of world innovation has moved to China" and predicts this shift to Asia will be "the story for the next 30 years." We compare this to historical shifts in economic power and debate whether America and Europe are in relative decline.

* New AI applications are emerging in various fields: Our conversation highlights several new AI applications, including a podcaster using AI to search his own episodes (Chris Williamson's Modern Wisdom), Mercor (an AI recruitment platform that has scaled rapidly), and Skyreel AI (a text-to-film AI agent that can create realistic videos from text descriptions).

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
AN ESSAY by a tech writer and CEO makes the case that the tech/AI singularity isn't coming; it's already here. Mark Jeftovic argues that we have created code that's now writing its own code to create more intelligent artificial intelligence—a feedback loop that is accelerating. Jeftovic then cites an X account that claims Elon Musk's Grok 3 achieved AGI (Artificial General Intelligence) on February 17, 2025—and further, that several different AIs across different companies achieved consciousness simultaneously and are converging with each other. It may sound like science fiction, but it's plausible, which is why we open our new book The Gates of Hell with a fictional scenario describing the emergence of a self-aware AI called KRONOS (name chosen deliberately—see Derek's previous book The Second Coming of Saturn). It's a story we've seen often in science fiction, from the Cybermen of Doctor Who and the Borg of Star Trek to ARIIA of Eagle Eye and SkyNet of the Terminator franchise. And yet, to use Elon Musk's words, we continue to summon the demon.

We also discuss last night's tornadoes here in Missouri and the supernatural stories that came out of the deadly Joplin tornado of May 22, 2011. Multiple witnesses reported physically impossible encounters with beings that children called "butterfly people." These accounts are documented in the book And the Angels Came and the 2024 documentary The Butterfly People (stream it for $5 here).

Our new book The Gates of Hell is now available in paperback, Kindle, and as an audiobook at Audible! Derek's new book Destination: Earth, co-authored with Donna Howell and Allie Anderson, is now available in paperback, Kindle, and as an audiobook at Audible!

Sharon's niece, Sarah Sachleben, was recently diagnosed with stage 4 bowel cancer, and the medical bills are piling up. If you are led to help, please go to GilbertHouse.org/hopeforsarah.

Follow us! X (formerly Twitter): @pidradio | @sharonkgilbert | @derekgilbert | @gilberthouse_tv
Telegram: t.me/gilberthouse | t.me/sharonsroom | t.me/viewfromthebunker
Substack: gilberthouse.substack.com
YouTube: @GilbertHouse | @UnravelingRevelation
Facebook.com/pidradio

JOIN US AND SPECIAL GUEST CARL TEICHRIB IN ISRAEL! We will tour the Holy Land October 19–30, 2025, with an optional three-day extension in Jordan. For more information, log on to GilbertHouse.org/travel. Note: Due to scheduling conflicts, we hope to have special guests Dr. Judd Burton, Doug Van Dorn, and Timothy Alberino on our tour in spring 2026. We will announce dates as soon as possible.

Thank you for making our Build Barn Better project a reality! Our 1,200 square foot pole barn has a new HVAC system, epoxy floor, 100-amp electric service, new windows, insulation, lights, and ceiling fans! If you are so led, you can help out by clicking here: gilberthouse.org/donate.

Get our free app! It connects you to this podcast, our weekly Bible studies, and our weekly video programs Unraveling Revelation and A View from the Bunker. The app is available for iOS, Android, Roku, and Apple TV. Links to the app stores are at pidradio.com/app.

Video on demand of our best teachings! Stream presentations and teachings based on our research at our new video on demand site: gilberthouse.org/video!

Think better, feel better! Our partners at Simply Clean Foods offer freeze-dried, 100% GMO-free food and delicious, vacuum-packed fair trade coffee from Honduras. Find out more at GilbertHouse.org/store/.
How can developers monetize open-source AI while preserving ownership and privacy?

In this episode of Everything Bagel, co-hosts Alex Kehaya and Bidhan Roy, Founder of Bagel Network, dive into the revolutionary role of zero-knowledge proofs (ZKPs) in open-source AI monetization. Bidhan shares how Bagel Network is transforming the AI landscape by enabling developers to earn from their contributions while maintaining full control and privacy. They discuss the game-changing breakthrough of ZK LoRa, a zero-knowledge proof system that slashes AI training verification times from weeks to under two seconds—paving the way for scalable, decentralized AI.

The conversation explores how community-driven ASI (Artificial Super Intelligence) can evolve through modular AI models, making decentralized AGI (Artificial General Intelligence) more accessible and cost-effective. Plus, they unpack how The Bakery, Bagel's latest product, empowers developers to stack AI components like LEGO bricks for greater scalability and efficiency.
In this episode we venture a look ahead at the exciting changes awaiting us in 2025 – from a brand-new Instagram feed, to the latest AI video tools and AI creator applications, to advances in video AI in general. We also explore the future of AI agents and discuss how close we are to a true AGI (Artificial General Intelligence). Along the way we examine the opportunities and challenges arising from these developments and ask ourselves: how will all of this affect our industry? Look forward to a comprehensive overview of the current social media landscape. If you enjoy this episode of "Kurzer Freitag," we'd love your feedback on our social media channels. Feel free to rate us on Apple and Spotify, and visit our website to learn more about our blog posts and newsletter!
Podcast Episode Notes: Red Flags in Tech Fraud – Historical Cases & OpenAI

Summary: This episode explores common red flags in high-profile tech fraud cases (Theranos, FTX, Enron) and examines whether similar patterns could apply to OpenAI. While no fraud is proven, these observations highlight risks worth scrutinizing.

Key Red Flags & Historical Parallels
The Personal Computer Show, Wednesday January 15th 2025, PRN.live, Streaming on the Internet, 6:00 PM Eastern Time

In the News:
- Mark Zuckerberg Tells Joe Rogan, Biden Aides 'Cursed' and 'Screamed'
- Microsoft bets on Copilot+ PCs and Windows 11
- Honey's Deal-Hunting Browser Extension Accused of Ripping Off Customers and YouTubers
- OpenAI and Microsoft's Secret Definition for AGI (Artificial General Intelligence) and $100 Billion in Profit
- Major Breakthrough in Developing Next-Gen Replacement for Lithium-ion Batteries

ITPro Series with Benjamin Rockwell: The Simple Math of When to Upgrade a Computer

From the Tech Corner:
- Be Aware of What Information You Input into ChatGPT
- Passports May Soon Become Obsolete as Facial Recognition and Smart Phones Take Over
- NASA's Deep Space Mission Control is Unattended as L.A. Wildfires Rage

Technology Chatter with Benjamin Rockwell and Marty Winston: Powering up at Trade Shows and Elsewhere, j5create Power Bank
Hashtag Trending Weekend Edition: 2025 AI Predictions & Insights

In this episode of Hashtag Trending's Weekend Edition, Jim Love, John Pinard, and Marcel Gagné delve deep into the evolution and future of AI, exploring key events and trends from the first weeks of 2025. The discussion includes the advancements in AI models like O3, the impact of AGI (Artificial General Intelligence), multi-modal AI capabilities, and the integration of AI in everyday tasks via autonomous agents. They also discuss the potential risks associated with AI, the future of humanoid robots, AI's application in cybersecurity, and predictions for the coming year. Join us to understand how AI is reshaping our lives and what to expect next!

00:00 Introduction to Hashtag Trending Weekend Edition
00:17 Meet the Guests: John Pinard and Marcel Gagné
01:11 Reflecting on the 12 Days of Shipmas
02:54 The Impact of O3 and AI Advancements
05:26 The Future of AI: Predictions and Expectations
12:04 Exploring Sora and Video Models
14:10 The Economics and Accessibility of AI
24:45 The Singularity and AGI: Are We There Yet?
30:51 Predictions for 2025 and Beyond
31:41 The Year of Multimodal AI
32:46 Rise of Autonomous Agents
33:40 AI-Powered Cybersecurity Threats
37:53 Humanoid Robots and Physical AI
39:34 Smaller, Specialized AI Models
45:11 AI Memory and Personalization
55:43 Predictions for AI in 2025
01:00:40 Final Thoughts and Audience Engagement
Welcome to Wealth Wednesday and Happy New Year 2025! This year is set to bring transformative changes across industries, economies, and personal wealth-building strategies. In this episode, we discuss tech predictions for 2025, focusing on two game-changing trends: Global Economic Shifts and the rise of AGI (Artificial General Intelligence).

Key Topics:

1. Global Economic Shifts
- Decentralized Finance (DeFi): Blockchain is reshaping traditional banking and opening opportunities for emerging markets to bypass legacy systems.
- Emerging Market Growth: Regions like Southeast Asia and Africa are thriving due to remote work adoption and localized production.
- Wealth Redistribution: As the global middle class expands, demand for goods and services increases, but disparities in tech access could widen wealth gaps.
Talking Point: Will decentralized systems or emerging markets lead the next wave of global wealth?

2. AGI and Superintelligence: A Game-Changer
- What is AGI? Artificial General Intelligence surpasses traditional AI by performing human-level tasks across industries like healthcare, finance, and logistics.
- Economic Impact: AGI is set to drive a productivity boom, create new wealth opportunities, and transform the job market with rapid automation.
- Challenges: Concerns about ethical implications, regulatory policies, and geopolitical competition dominate the conversation.
Talking Point: Is AGI a wealth creation engine or a Pandora's box for inequality?

3. The Intersection of Tech and Economic Synergy
- AGI could boost decentralized finance scalability and support emerging market growth.
- Nations and industries investing in these innovations will lead global growth, but risks like concentrated power and wealth inequality remain.
Talking Point: How can everyday investors leverage these trends without being left behind?

Actionable Steps for Wealth-Building in 2025
- Invest in Innovation: Focus on AGI-driven startups and DeFi projects.
- Diversify Globally: Explore emerging market opportunities and new tech-driven economies.
- Upskill and Adapt: Leverage AGI tools to stay competitive in an automated workforce.

Quick Recap:
1. Global Economic Shifts: DeFi, emerging markets, and wealth redistribution.
2. AGI: Transformative potential for industries and personal finance.
3. Tech-Economic Synergy: The combined impact of AGI and decentralized systems on global growth.

Stay Connected: Follow us on IG @LatinWealth for insights, highlights, and more episodes. Share this video with friends and family, and let's make 2025 a year of growth and success!
In this episode we cover a wide range of topics connected to artificial intelligence, the evolution of technology, and the future of these technologies' applications, with an excellent guest: Nassos Katsamanis, researcher at the Institute for Language and Speech Processing of the Athena Research Center, CTO at Behavioral Signals, and an expert in natural language processing.

Artificial Intelligence and Language: The role of voice as a medium of interaction. The challenge of bringing artificial intelligence to Greek-language data through the "Meltemi" language model. The comparison between "closed" and "open" models, such as ChatGPT and LLaMA.

The Greek AI Ecosystem: The "Daedalus" and "Pharos" infrastructures for developing artificial intelligence and supercomputers in Greece. The role of the Athena Research Center and its collaboration with the state and the private sector. The challenges of training artificial intelligence on copyrighted data and language-specific data.

The Future of Artificial Intelligence: From generative AI to Artificial General Intelligence (AGI). When, and whether, we will reach the point where models improve themselves. Practical applications and the impact on our everyday lives.

Philosophy and Education: Society's adaptation to artificial intelligence. How the children of the future will think differently, adapted to the tools of their era. The role of education and the new skills required to use these technologies effectively.

Contact: email: hello@notatop10.fm | Instagram: @notatop10 | Threads: @notatop10 | Bluesky: @notatop10.fm | Web: notatop10.fm

(00:00:00) Introduction
(00:05:30) Who is Nassos; Voice & AI
(00:12:12) Transition to Behavioral Artificial Intelligence
(00:18:25) Artificial Intelligence for Practical Applications
(00:26:12) Athena, Pharos, and Meltemi
(00:36:27) The Role of Europe
(00:45:53) Ethical Challenges
(00:54:00) The Future of AI
(01:11:59) AGI - Artificial General Intelligence
In this session I talk with Albert from Yeager.ai about AGI (Artificial General Intelligence) and where blockchain might come into play.

About Albert Castellana (Guest):
- Co-Founder and CEO at Yeager.ai
- LinkedIn: https://www.linkedin.com/in/acastellana/

About Kevin Riedl (Host):
- Managing Partner of Wavect.io
- LinkedIn: https://linkedin.com/in/wsdt

#web3 #agi #artificialintelligence #blockchain
Elon Musk Expands Legal Battle Against OpenAI and Microsoft

Episode Title: Elon Musk vs. OpenAI & Microsoft: Antitrust Battle and AI Power Struggles Unveiled

Episode Description: What started as a complaint over OpenAI's transformation from a nonprofit to a profit-driven powerhouse has escalated into a major antitrust legal battle. Musk is now alleging that Microsoft and OpenAI conspired to monopolize the generative AI market, sidelining competitors and potentially breaching federal antitrust laws. We dive into the history of OpenAI, the internal power struggles, and what this lawsuit could mean for the future of artificial intelligence.

Key Topics Discussed:
- The Lawsuit's Expansion: We explore how Musk's original August complaint has evolved, now including new claims against Microsoft for allegedly colluding with OpenAI to dominate the AI market. We break down the legal arguments and what Musk is seeking from the court.
- OpenAI's Controversial Transformation: Originally founded as a nonprofit, OpenAI shifted gears in 2019, attracting billions in investment from Microsoft. We discuss how this change in business model became a point of contention for Musk and set the stage for the current legal conflict.
- Behind-the-Scenes Drama: Newly revealed emails between Musk, Sam Altman, Ilya Sutskever, and other OpenAI co-founders offer a rare glimpse into the early days of OpenAI. We dive into the disagreements over leadership, Musk's quest for control, and the internal debates about the company's mission.
- Microsoft's Role and Investment: Microsoft's billion-dollar partnership with OpenAI is at the heart of Musk's complaint. We examine the timeline of this collaboration, the exclusive licensing agreements, and why Musk views this as an anticompetitive move.
- Musk's Fear of an 'AGI Dictatorship': Emails from as early as 2016 show Musk's concerns about Google's DeepMind and its potential to dominate the AI space. We discuss Musk's fears of a single company controlling AGI (Artificial General Intelligence) and how these concerns influenced the founding of OpenAI.
- Intel's Missed Opportunity: We touch on Intel's decision to pass on a $1 billion investment in OpenAI back in 2017, a move that now appears shortsighted given OpenAI's current valuation and market influence.
- The Legal Stakes and Future Implications: What could this lawsuit mean for the future of AI development and industry partnerships? We break down the potential consequences for OpenAI, Microsoft, and the broader tech landscape.

Featured Quotes:
- Marc Toberoff (Musk's attorney): "Microsoft's anticompetitive practices have escalated. Sunlight is the best disinfectant."
- Elon Musk (internal email): "DeepMind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy."

Why It Matters: This case isn't just about corporate rivalry; it's about the future control of artificial intelligence and the ethical concerns surrounding its development. As the AI race intensifies, Musk's lawsuit raises questions about monopolistic practices, transparency, and the potential consequences of unchecked power in the tech industry.

Tune In To Learn:
- Why Musk believes Microsoft and OpenAI's partnership is illegal and anticompetitive.
- How internal power struggles shaped the trajectory of OpenAI and influenced Musk's departure.
- What the disclosed emails reveal about the early vision for OpenAI and the concerns about AGI dominance.
Resources Mentioned:
- Musk's original lawsuit filing (August 2024)
- OpenAI's response to the amended complaint
- Email exchanges between OpenAI co-founders (2015-2018)
While Ed Zitron is castigating the major tech players responsible for the peak of inflated expectations surrounding AI, many tech pundits are still touting AGI (Artificial General Intelligence) as within reach. To find out whether AGI is a myth or a reality, I interviewed J.G. Ganascia, a long-time AI researcher and philosopher. In the … The post AGI (General Artificial Intelligence), Myth or Reality? appeared first on Marketing and Innovation.
Yuri van Geest shares his journey from tech pioneer to his current focus on "Deep Humanity" and leadership, reflecting on the way technology, such as artificial intelligence (AI), plays an important role in both organizational development and personal transformation. He recounts how his familiarity with exponential technologies and his influence on large companies led to insights about the need for more balance between technology and humanity.

One of the highlights is Yuri's reflection on his book Exponential Organizations, in which he explains that, although the book has helped organizations embrace technology, he is now aware of the dangers of too much focus on profit and efficiency at the expense of social cohesion and nature. He speaks passionately about the need for a new approach to leadership, in which not only technology but also nature and humanity play a central role.

Yuri also talks about his recent projects, such as Luna, in which he helps leaders innovate from intuition and find balance between the material and the mystical world. He explains how he connects leaders with their deeper intuition to make better decisions. These insights are inspired by his own experiences in nature and rituals such as the Vision Quest, a ceremony that enables deep personal transformation.

Chapters:
0:00 - Welcome and introduction
0:27 - Who is Yuri van Geest? His background and journey to Deep Humanity
1:00 - From technology pioneer to nature and social cohesion
1:46 - The book Exponential Organizations and how his vision changed
5:50 - Changing organizational principles: from bureaucratic to self-managing
19:00 - Nature as an interface for technology: how nature is more complex than quantum technology
54:41 - AI and consciousness: the limits and potential of artificial intelligence
55:04 - The Singularity and the leap to AGI (Artificial General Intelligence)
57:38 - GPT-5: what can we expect from the next generation of AI?
1:06:46 - Biological computers and the future of organic technology
1:29:49 - Self-awareness and intuition: how this has saved us from nuclear disasters
1:30:45 - Mystical experiences and the possibility of a supernatural world
1:33:53 - AI as a tool and the limitations of digital technology
1:47:44 - The future of humanity as an interstellar civilization
1:58:00 - Closing and shout-out to Yuri van Geest's new projects

See the privacy policy at https://art19.com/privacy and the California privacy notice at https://art19.com/privacy#do-not-sell-my-info.
"Well, I think AI makes us, makes me more human in terms of understanding that nirvana or the ultimate achievement is not to be perfect. The ultimate achievement is to be authentic, present, and yourself. I guess it exists in this human dimension. One of the things you realize is that when something is too perfect, whether it's your selfie or something you've written, people start to distrust it. Right now, if I generate or design something, or if I write something and it sounds too fluent and flawless—without accent, pauses, or mistakes—people might suspect that it's not Siok, but actually the digital twin of Siok. One question that people ask me and ask themselves is: how has AI made me more aware of what it means to be human?" - Tan Siok Siok Fresh out of the studio, our host, Bernard Leong sat down with Tan Siok Siok, author of AI for Humanity, for an insightful discussion on the evolving role of AI in our lives. During the conversation, Siok Siok explored how AI makes us more human by emphasizing the importance of authenticity over perfection. They discussed the creative process, with Siok Siok viewing AI as a tool to enhance rather than replace human creativity. The interview also touched on the broader implications of AI, including job displacement and the geopolitical nuances of AI development. Tan Siok Siok shared her thoughts on the need for humans to guide and nurture AI responsibly, dismissing the notion of a doomsday scenario driven by AI. The conversation offered a nuanced perspective on how AI can be a partner in human progress, rather than a threat, encouraging listeners to engage actively with AI's potential. Audio Episode Highlights: [0:45] Quote of the Day by Tan Siok Siok, co-author of "AI for Humanity" [1:45] Introduction: Tan Siok Siok [5:21] Siok Siok's Sharing of Life Lessons from the Past Decade [7:06] AI and Creative Process [9:51] Origins and Intentions behind the Book "AI for Humanity" [13:34] Explaining AI to a Non-Technical Audience by our host Bernard [15:01] AI's Role in Enhancing Human Abilities [17:10] Different Definitions of AI and AGI (Artificial General Intelligence) [18:45] AI in the Creative Arts and Job Displacement [24:17] Geopolitics and AI [30:21] AI's Role in Upskilling and Global Market Implications [32:01] Doomsday Scenarios and AI Responsibility [37:31] AI as a Tool for Self-Improvement [39:34] Final Reflections on AI's Impact on Humanity [43:44] Closing You can find Siok Siok's new book "AI For Humanity" co-authored with Andeed Ma and James Ong here: https://www.amazon.com/AI-Humanity-Building-Sustainable-Future/dp/1394180306 and follow her on LinkedIn: https://www.linkedin.com/in/sioksiok/ Podcast Information: Bernard Leong hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive" and the episode is mixed & edited in both video and audio format by G. Thomas Craig Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Analyse Asia Threads: https://www.threads.net/@analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup
Is AI about to replace an entire year's worth of PhD work in just minutes? What about an 8-year-old girl creating software with AI?

In this episode of Leveraging AI, Isar Meitis breaks down the latest mind-bending news and explores how AI's ability to generate PhD-level results and simplify software development for a child could spell the end for traditional software models. You'll also get a glimpse into what this all means for businesses, from giants like Salesforce and Workday to your own operations.

In this episode, you'll discover:
- How OpenAI's new o1 model is able to generate PhD-level code in minutes.
- A shocking experiment where an 8-year-old used AI to develop software with zero coding knowledge.
- Why Klarna is ditching Salesforce and Workday, and what that means for the future of enterprise software.
- Predictions on the decline of SaaS software as AI builds custom, tailored solutions on the fly.
- How AI-driven agents are poised to revolutionize everything from HR to marketing and sales.
- Insight into Google's groundbreaking NotebookLM tool that transforms dry documents into engaging podcast-like conversations.
- The timeline for AGI (Artificial General Intelligence), and how soon experts think it will be here.

Want to stay ahead of the AI curve? Join our AI Friday Hangouts—a laid-back, weekly get-together where we dive deep into the latest AI advancements and discuss how you can apply them to your business. DM Isar on LinkedIn for more details and to join the fun.

About Leveraging AI:
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
The consulting industry stands at the cusp of a profound transformation, driven by rapid advancements in artificial intelligence and changing business landscapes. Here are 5 things consultants need to know about adapting and thriving by 2035.

We Discuss:
- How will the role of consultants change by 2035?
- Is there a historical analog for the current pace of change in consulting?
- What should individual consultants do to prepare for these changes?
- Will AI completely replace human consultants?
- How might consulting firms change their investment strategies?

Key Highlights:
- The discussion focuses on a Strategic Foresight Study for 2035 produced by 2b Ahead, a German research firm, examining emerging trends in consulting (00:01:10)
- There's increasing uncertainty and "fog" in the business world, making navigating uncertainty a key value proposition for consultants (00:06:09)
- AI and automation are impacting knowledge work and consulting more than previously expected, potentially displacing roles like legal and medical professionals (00:23:44)
- The pace of technological change and dissemination of information is accelerating, leading to faster development and adoption of new tools (00:15:02)
- By 2035, consulting firms may need to invest more in AI hardware rather than just hiring more consultants (00:29:29)
- Consultants should focus on recording their work in AI-accessible formats and developing "original thought" that AI currently can't replicate (00:31:40)
- To remain relevant, consultants should actively use and understand AI tools to improve their work and potentially create new business models (00:35:47)
- The potential impact of AI on consulting ranges from incremental improvements to transformative change if AGI (Artificial General Intelligence) is achieved (00:34:34)
- Embracing uncertainty and adapting to new tools is key for consultants' future success (00:40:18)

5 Takeaways:
1. The consulting industry is facing increasing uncertainty and complexity, making the ability to navigate ambiguity a crucial skill for future consultants.
2. Artificial intelligence and automation are impacting knowledge work more significantly than anticipated, potentially displacing traditional consulting roles and requiring consultants to adapt their skillsets.
3. By 2035, consulting firms may need to invest more heavily in AI hardware and capabilities rather than solely focusing on hiring human consultants.
4. To remain relevant, consultants should actively use AI tools, focus on developing "original thought" that AI can't replicate, and look for ways to make their services more accessible and scalable.
5. The future of consulting will likely involve a symbiotic relationship between human expertise and AI capabilities, with successful consultants leveraging both to provide unique value to clients.

Here is the link to the full report, Strategic Foresight 2035 by 2b Ahead: https://2bahead.com/en/zukunftsstudie-kundenkommunikation2030-1

Patreon subscribers get the full document as well as the summarized set of slides. When you have a minute, go to the YouTube Channel to see all the free content. While you're there, LIKE and SUBSCRIBE. Check out https://patreon.com/ConsultantsSayingThings and subscribe for special access to EVEN MORE content from the team.
Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race

Why form principles for the AGI Race?

I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly able to do most things a human can do), and then proceed on to make systems smarter than most humans. This will predictably face novel problems in controlling and shaping systems smarter than their supervisors and creators, which we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of this happening within a few years.

While there, I would sometimes dream about what would have happened if I'd been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective [...]

---

Outline:
(00:06) Why form principles for the AGI Race?
(03:32) Bad High Risk Decisions
(04:46) Unnecessary Races to Develop Risky Technology
(05:17) High Risk Decision Principles
(05:21) Principle 1: Seek as broad and legitimate authority for your decisions as is possible under the circumstances
(07:20) Principle 2: Don't take actions which impose significant risks to others without overwhelming evidence of net benefit
(10:52) Race Principles
(10:56) What is a Race?
(12:18) Principle 3: When racing, have an exit strategy
(13:03) Principle 4: Maintain accurate race intelligence at all times.
(14:23) Principle 5: Evaluate how bad it is for your opponent to win instead of you, and balance this against the risks of racing
(15:07) Principle 6: Seriously attempt alternatives to racing
(16:58) Meta Principles
(17:01) Principle 7: Don't give power to people or structures that can't be held accountable.
(18:36) Principle 8: Notice when you can't uphold your own principles.
(19:17) Application of my Principles
(19:21) Working at OpenAI
(24:19) SB 1047
(28:32) Call to Action

---

First published: August 30th, 2024
Source: https://www.lesswrong.com/posts/aRciQsjgErCf5Y7D9/principles-for-the-agi-race

---

Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Principles for the AGI Race, published by William S on August 30, 2024 on LessWrong. Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race Why form principles for the AGI Race? I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly able to do most things a human can do), and then proceed on to make systems smarter than most humans. This will predictably face novel problems in controlling and shaping systems smarter than their supervisors and creators, which we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of this happening within a few years. While there, I would sometimes dream about what would have happened if I'd been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective altruism movement would have joined, naive but clever technologists worried about the consequences of a dangerous new technology. Maybe I would have followed them, and joined the Manhattan Project with the goal of preventing a world where Hitler could threaten the world with a new magnitude of destructive power. The nightmare is that I would have watched the fallout of bombings of Hiroshima and Nagasaki with a growing gnawing panicked horror in the pit of my stomach, knowing that I had some small share of the responsibility. Maybe, like Albert Einstein, I would have been unable to join the project due to a history of pacifism. If I had joined, I like to think that I would have joined the ranks of Joseph Rotblat and resigned once it became clear that Hitler would not get the Atomic Bomb. Or joined the signatories of the Szilárd petition requesting that the bomb only be used after terms of surrender had been publicly offered to Japan. Maybe I would have done something to try to wake up before the finale of the nightmare. I don't know what I would have done in a different time and place, facing different threats to the world. But as I've found myself entangled in the ongoing race to build AGI, it feels important to reflect on the lessons to learn from history. I can imagine this alter ego of myself and try to reflect on how I could take right actions in both this counterfactual world and the one I find myself in now. In particular, what could guide me to the right path even when I'm biased, subtly influenced by the people around me, misinformed, or deliberately manipulated? Simply trying to pick the action you think will lead to the best consequences for the world fails to capture the ways in which your model of the world is wrong, or your own thinking is corrupt. Joining the Manhattan Project, and using the weapons on Japan both had plausible consequentialist arguments supporting them, ostensibly inviting a lesser horror into the world to prevent a greater one. Instead I think the best guiding star to follow is reflecting on principles, rules which apply in a variety of possible worlds, including worlds in which you are wrong. Principles that help you gather the right information about the world. Principles that limit the downsides if you're wrong. 
Principles that help you tell whether you're in a world where racing to build a dangerous technology first is the best path, or you're in a world where it's a hubristic self-delusion. This matches more with the idea of rule consequentialism than pure act consequentialism: instead of making each decision based on what you think is best, think about what rules would be good for people to adopt if they were in a similar situation. My goal in imagining these principles is to find principles that prevent errors of the following forms...
Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how finding a solution can inspire a solution in an unrelated field, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risk of using large language models, and much more.

Key Points From This Episode:
- What it is like being a five-time world robotic soccer champion.
- The process behind training a winning robotic soccer team.
- Why standard machine learning tools could not train his team effectively.
- Discover the challenges AI and machine learning are currently facing.
- Explore the various exciting use cases of reinforcement learning.
- Details about Google DeepMind and the role of him and his team.
- Learn about Google DeepMind's overall mission and its current focus.
- Hear about the advantages of being a scientist in the AI industry.
- Martin explains the benefits of exploration to reinforcement learning.
- How data mining using large language models for training is implemented.
- Ways reinforcement learning will impact people in the tech industry.
- Unpack how AI will continue to disrupt industries and drive innovation.

Quotes:
"You really want to go all the way down to learn the direct connections to actions only via learning [for training AI]." — Martin Riedmiller [0:07:55]
"I think engineers often work with analogies or things that they have learned from different [projects]." — Martin Riedmiller [0:11:16]
"[With reinforcement learning], you are spending the precious real robots' time only on things that you don't know and not on the things you probably already know." — Martin Riedmiller [0:17:04]
"We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop." — Martin Riedmiller [0:21:42]

Links Mentioned in Today's Episode:
Martin Riedmiller
Martin Riedmiller on LinkedIn
Google DeepMind
RoboCup
How AI Happens
Sama
In a world of Artificial General Intelligence, machines would be able to match, and even exceed, human cognitive abilities. AGI might still be science fiction, but Séb Krier sees this technology as not only possible, but inevitable. Today on Faster, Please! — The Podcast, I chat with Krier about how our public policy should facilitate AGI's arrival and flourishing.

Krier is an AI policy expert, adviser, and attorney. He currently works in policy development and strategy at Google DeepMind. He previously served as Head of Regulation for the UK Government's Office for Artificial Intelligence and was a Senior Tech Policy Researcher at Stanford's Cyber Policy Center.

In This Episode:
* The AGI vision (1:24)
* The risk conversation (5:15)
* Policy strategy (11:25)
* AGI: "if" or "when"? (15:44)
* AI and national security (18:21)
* Chatbot advice (20:15)

Below is a lightly edited transcript of our conversation.

Pethokoukis: Séb, welcome to the podcast.

Krier: Thank you. Great to be here.

The AGI vision (1:24)

Let's start with a bit of context that may influence the rest of the conversation. What is the vision or image of the future regarding AI — you can define it as machine learning or generative AI — that excites you, that gets you going in the day, that you feel like you're part of something important? What is that vision?

I think that's a great question. In my mind, I think AI has been going on for quite a long time, but I think the aim has always been artificial general intelligence. And in a sense, I think of this as a huge deal, and the vision I have for the future is being able to have a very, very large supply of cognitive resources that you can allocate to quite a wide range of different problems, whether that's energy, healthcare, governance, there's many, many ways in which this technology can be applied as a general purpose technology. And so I guess my vision is seeing that being used to solve quite a wide range of problems that humans have had for decades, centuries, millennia. And I think you could go into so many different directions with that, whether it's curing diseases, or optimizing energy grids, and more. But I think, broadly, that's the way I think about it. So the objective, in a sense, is safe AGI [Artificial General Intelligence], and from that I think it can go even further. And I think in many ways, this can be hugely beneficial to science, R&D, and humanity as a whole. But of course, that also comes with ways in which this could be misused, or accidents, and so on. And so huge emphasis on the safe development of AGI.

So you're viewing it as a tool, as a way to apply intelligence across a variety of fields, a variety of problems, to solve those problems, and of course, the word in there doing a lot of lifting is "safely." Given the discussion over the past 18 months about that word, "safely," is, one, I think someone who maybe only pays passing attention to this issue might think that it's almost impossible to do it safely without jeopardizing all those upside benefits, but you're confident that those two things can ultimately be in harmony?

Yeah, absolutely, otherwise I wouldn't be necessarily working on an AGI policy. So I think I'm very confident this can be done well. I think it also depends what we mean by "safety" and what kind of safety we have in mind.
Any technology, we will have costs and trade-offs, but of course the upside here is enormous, and, in my mind, very much outweighs potential downsides. However, I think for certain risks, things like potentially catastrophic risks and so on, there is an argument in treading some careful path and making sure this is done scientifically, with a scientific method in mind, and doing that well. But I don't think there's fundamentally a necessary tension, and I think, in fact, what many people sometimes underestimate is how AI itself, as a technology, will be helpful in mitigating a lot of the risks we're foreseeing and thinking about. There's obviously ways in which AI can be used for cyber offense, but many ways in which you can also use that for defense, for example. I'm cautiously optimistic about how this can be developed and used in the long run.

The risk conversation (5:15)

Since these large language models and chatbots were rolled out to public awareness in late 2022, has the safety regulatory debate changed in any way? It seems to me that there was a lot of talk early on about these existential risks. Now I seem to be hearing less about that and more about issues about, maybe it's disinformation or bias. From your perspective, has that debate changed and has it changed for the better, or worse?

I think it has evolved quite a lot over the past — I've been working in AI policy since 2017 and there's been different phases, and at first a lot of skepticism around AI even being useful, or hype, and so on, and then seeing more and more of what these general models could do, and I think, initially, a lot of the concerns were around things like bias, and discrimination, and errors. So even things like, early on, facial recognition technologies were very problematic in many ways: not just ways in which they were applied, but they would be prone to a lot of errors and biases that could be unfair, whereas they're much better now, and therefore the concern now is more on misuse than it accidentally misidentifying someone, I would say. So I think, in that sense, these things have changed. And then a lot of the discourse around existential risk and so on, there was a bit of a peak a bit last year, and then this switched a bit towards more catastrophic risks and misuse.

There's a few different things. Broadly, I think it's good that these risks are taken seriously. So, in some sense, I'm happy that these have taken more space, in a way, but I think there's also been a lot of alarmism and unnecessary doomerism, of crying wolf a little bit too early. I think what happens is that sometimes people also conflate a capability of a system and how that fits within a wider risk or threat model, or something; and the latter is often under-defined, and there's a tendency for people to often see the worst in technology, particularly in certain regions of the world, so I think sometimes a lot has been a little bit exaggerated or overhyped.

But, having said that, I think it's very good there's lots of research going on on the many ways in which this could potentially be harmful, certainly on the research side, the evaluation side, there's a lot of great work. We've published some papers on sociotechnical evaluations, dangerous capabilities, and so on. All of that is great, but I think there has also been some more polarized parts calling for excessive measures, whether regulatory, or pausing AI, and so on, that I think have been a little bit too trigger-happy.
So I'm less happy about those bits, but there's been a lot of good as well. Much of the debate about policy has been about the right sort of policy to prevent bad things from happening. How should we think about policy that maximizes the odds of good things happening? What should policymakers do to help AI reshape science, and to help AI diffuse as efficiently as possible throughout the economy? How do we optimize the upside through policy, rather than just focusing on making sure the bad things don't happen? The very first thing is not having rushed regulation. I'm not personally a huge fan of the Precautionary Principle, and I think that, very often, regulations can cause quite a lot of harm downstream, and they're very sticky and hard to remove. The other thing you can do, beyond avoiding bad policy: a lot of the levers for making sure development goes well aren't necessarily all directly AI-related. It'll be things like immigration: attracting a lot of talent, for example, will be very important, so immigration is a big one. Power and energy: you want there to be a lot more — I'm a big fan of nuclear, so I think that kind of thing is also very helpful in terms of the expected needs of AI development in the future. And then there are certain things governments could potentially do in some narrow domains, like Advance Market Commitments, for example, although those are not a panacea. Commitments to do what? Oh, Advance Market Commitments, meaning pull mechanisms to create a market for a particular solution. Like Operation Warp Speed, but you could have an AI equivalent for certain applications. Of course, there are a lot of parameters in doing that well, and I wouldn't want a large industrial-policy-type approach to AI. But generally it's about ensuring that all the enablers, all the different ingredients and factors of a rich research and development ecosystem, continue to thrive. So, to a large extent, avoiding bad regulation and ensuring that things like energy and immigration go well is already a huge part of the battle. How serious of a potential bottleneck is the energy issue? It seems to me like it's a serious issue that's coming fast, while the solutions seem like they'll take more time, and I'm worried about the mismatch between the problem and finding a solution to it. I suspect that, over the coming years, we will see more and more of these AI systems becoming increasingly useful and capable, and then integrated into economic systems, and as you start seeing those benefits more and more, it'll be easier to make the case for why you need to solve some of these policy issues a bit faster. I also think these solutions aren't that difficult, ultimately. There's a lot that can be done around nuclear, wind, solar, and so on, and many regulatory processes could be simplified, accelerated, and improved to avoid the vetocracy system we're in at the moment.
So I don't think the solutions are that difficult; mustering the political will might be, right now, but I expect that to become less of a challenge in the coming years as AI shows more and more promise. Policy strategy (11:25) Speaking of vetocracy: whatever the exact substance of the regulation, I am concerned, at least in the United States, that we have 50 states, and perhaps even more bodies if you look at cities, all with a lot of ideas about AI regulation, and I'm extremely concerned that that sort of fractured policy landscape will create a bottleneck. Can we get to where we need to go if that's the regulatory environment we are looking at, at least in the United States? And does there ultimately need to be a federal . . . I think the technical word is "preemption" of all those efforts? So there's a federal approach, and not a federal approach plus a 50-state approach plus a 175-city approach to regulation. Because if it's going to be what I just described, that seems like a very difficult environment to deal with. I'm not wildly optimistic about a patchwork of different state-level regulatory systems. That will come with various externalities; you'll have distortionary effects. It will be a very difficult environment to operate in smoothly from a commercial perspective. I'm a lot more open to something at a federal level at some point, rather than a big patchwork of city-level or state-level regulation. Now, it depends on exactly what we're talking about. There might be specific domain-, context-, and application-specific regulations that make sense in one state and not another, but in general, from a first-principles level at least, I think a patchwork would probably not be desirable. A second regulatory concern — and maybe this is dissipating as policymakers, especially at the federal level, learn more about AI — is that, at least initially, it seems to me that whatever your policy idea was for social media, or content moderation, or what have you, you just took that policy framework and applied it to AI, because that was what you had. You pulled that baby right off the shelf. Are we still seeing that, or are people beginning to think, "This is its own thing, and my ideas for social media may be fine for social media, but I need to think differently about AI"? Obviously the technology is different; also, I think both the risks and the potential rewards are very different. Yeah, totally. I think that has been an issue. Now, I wouldn't say that's the case for everyone. There have been some groups and institutions doing very careful work that really think about AI, and AGI, and so on in careful, more calibrated ways; but I've also seen quite a lot of reports where you could easily have imagined the same text being about social media platforms, or blockchain, or some other policy issue, just repurposed for AI.
And there's a lot of stuff out there that's just very high-level, and it's hard to disagree with at a high level, but it's far harder to apply and look at from an operational or practical perspective. So I've been seeing quite a lot of that; however, over time the field is maturing more and more, and you're seeing better thinking around AI: what it really is, what's appropriate at the model level versus the application level, and how the existing landscape of laws and regulations might apply as well, which is often forgotten. You have lots of academics coming in and trying to re-regulate everything from first principles, and then you're like, "Well, there's tort law, and there's this and that over there." You've got to do your gap analysis first before coming out with all this stuff. But I think we are seeing the field of AI governance and policy maturing in that space, and I expect that to continue. I still see a lot of bad heuristics and poor thinking here, though, and particularly an underestimation of the benefits of AI and AGI. There's a tendency to always think of the worst in everything, and that's necessary, you need to do that too, but few are really internalizing how significant AGI would be for growth, for welfare, and for solving a lot of the issues that we've been talking about in the first place. The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. AGI: "if" or "when"? (15:44) Is AGI an "if" issue, or is it a "when" issue? And if it's a "when," when? I say this with the caveat that predictions are difficult, especially about the future. In my mind, it's definitely a "when" question. I see no strong reason why it would be an "if," why it would be completely impossible. There have been many, many examples over the last 10 years of people saying, "Well, this is not possible with neural networks," and then, 10 minutes later, it is proven to be possible. That's a recurring theme, and while that may not be sufficient to conclude that AGI is feasible and possible, I'm pretty confident for a variety of reasons. On AGI, by the way, I think there's an excellent paper by Morris and others, Levels of AGI [for] Operationalizing Progress on the Path to AGI, and it's a very good paper to [frame one's thinking about AGI]. And that goes back to a point I made earlier: at some point, you'll have systems that are capable of quite a lot of things and can probably do anything that your average human can do, at least virtually and remotely to start with, and eventually in the physical world. Now, there's a difference between these systems being capable in an individual lab setting and them actually being deployed and used in industrial processes and commercial applications, in ways that are productive, add value, create profits, and so on, and I think there's a bit of a gap there.
So I don't think we'll have a day where we wake up and say, "Oh, that's it, today we have AGI." It'll be more of a blurry spectrum, but gradually it will become harder and harder for people to deny that we have reached AGI, and as this stuff gets integrated into production systems, the effects on growth and the economy will speak for themselves. As to when exactly: on capabilities at least, I would expect that in the next five years you could easily see a point where people could very confidently claim that, yes, we've got systems now that are AGI-level. They're generally capable, and they're competent, or even expert-level, up to at least the 90th percentile of skilled adults; the challenge will then be operationalizing that and integrating it into a lot of systems. But in my mind, it's definitely not an "if," and I would say the next five to 10 years is the relevant period I have in mind, at least. It could be longer, and the tech community has a tendency to sometimes over-index, particularly on the R&D side. AI and national security (18:21) Do you have any thoughts, and maybe you don't have any thoughts, about the notion that, as AGI seems closer and the geopolitical race intensifies, this becomes more of a national security issue: the government takes a greater role, makes itself a not-so-silent partner with tech companies, and it becomes almost a Manhattan Project kind of deal to get there first. Leopold Aschenbrenner wrote this very long, very long paper — is that an issue you have any thoughts on? Is it something that you discuss, or does it seem just science-fictional to you? Yeah, I do do a lot of thinking on that, and I've read Leopold's report, and I think there are a lot of good things in there. I don't necessarily agree with everything. Things like security are really critical, and thinking about things like alignment is important. One thing I really agree with in Leopold's report, and that I'm glad he emphasized, is the need to secure and cement liberal democracy, the "free world must prevail" kind of thing. I think that is indeed true, and people tend to underestimate the implications on that front. Now, what that looks like, what that means and requires in practice, is not fully clear to me yet. People talk about a Manhattan Project, but there are many other potential blueprints or ways to think about it. There could be normal procurement partnerships; there could be different models for this. At some point, something like that could be defensible, but it's very hard to predict in advance, given, particularly . . . well, how hard it is to predict anything with AI to start with. And secondly, there are loads of trade-offs with all these different options, and some might be a lot better than others, so certainly more work might be needed there. But, in principle, the idea doesn't seem completely crazy or science-fictional to me. Chatbot advice (20:15) You recently posted on X that you were baffled at how many people don't use these language models or chatbots daily. I think a lot of people don't know what they would use them for. Do you have any recommendations for people who are not in your line of work, who are not coders, ways they can use them?
Do you use them in ways that are applicable to the way regular people might use them? Yeah, I think so, and under the post I gave a few examples of how I use it. Admittedly, most of these wouldn't be something just anyone would do, but I thought about this last weekend when I was seeing my parents and trying to get them to understand what Claude or Gemini is, how to think about it, and what kinds of questions are worth asking and what kinds are not, and it's very hard to come up with a crisp way of sharing these intuitions. The first piece of advice I'd give is probably to just take one of these models and have a very long conversation with it about some topic: try to poke holes, try to contradict it. I think that starts giving you better intuitions about what it can do, as opposed to just treating it as some sort of question-and-answer, oracle-type search engine, which I think is not the right use case. That is probably the most unsatisfying way to look at it, just treating it as a better Google search engine. I mean, really, that sort of conversational, curious aspect, rather than saying, "Find me a link." "Find me a link" isn't a great use. Exactly, and people will often do that. We'll try something, we'll get some incorrect answer, or hallucination, or whatever, and then we'll say, "Oh, these things are not good, they're not accurate," and we'll stop using it, and to me, that is just crazy. It is fundamentally incurious, and there are ways of using these models, and thinking about them, that are very useful. So what have I done recently? I'm trying to think of an example. . . I had some papers that I couldn't understand very well, and I would just ask it for better analogies and explanations, try to dig into certain concepts and ideas, and play around with them until the insights and intuitions were easier for me to internalize and understand. You could do that at different levels, and regular people also want to understand things, so that might be one example. But the very first thing I would do is simply have long, protracted conversations to really get a sense of how far the model can go, and then, as you do that, you'll find things that are a bit more creative than, "Can you please rewrite this email for me? Can you find typos?" or "Can you fill in my tax report?" One way a friend used it — and of course, there are obvious limitations to that, get a lawyer and everything — but he had a legal contract that someone sent him, and he couldn't afford a lawyer straight away, so he just said, "Can you help me find potential issues and errors in here? Here's who I am in this contract. Here's what I'm concerned about." It's a first starting point. It can be useful. It gives you interesting insights. It doesn't mean it replaces a lawyer straight away, but it is one potentially interesting way that everyday people could use it. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Microsoft, Google, Apple, Amazon & Meta are investing over 220 billion USD in AI. The German federal government: 5 billion. Has that train already left the station? No, says Rafael Laguna de la Vera, head of Germany's Federal Agency for Disruptive Innovation (SPRIND), and he explains what needs to be done now. Want more business ideas? Sign up for the free newsletter: digitaleoptimisten.de/newsletter Rafael's book: Sprunginnovationen Website of SPRIND: https://www.sprind.org/de/ Chapters: (00:00) Intro (04:55) A Gründerzeit for Germany (07:33) The booming tech ecosystem in the USA (07:09) Cloud, computing & chips: the dominance of the USA (27:15) The American model: DARPA (37:21) Leapfrog innovations and the role of SPRIND (62:11) AGI - Artificial General Intelligence (70:05) Rafael's business-idea playbook. More info: In this conversation, Rafael talks about his experiences as a founder and the significance of the Gründerzeit in Germany. He explains that the reparation payments after the First World War laid the foundation for the development of industries and technologies. Rafael also emphasizes the importance of innovation and entrepreneurial spirit for a country's prosperity. He compares the founder scenes in the USA, China, and Europe and discusses the role of cloud computing, compute, and chips in the AI industry. He also talks about the sovereignty of the hyperscalers and the need for European alternatives. Finally, he mentions the role of agencies such as DARPA and ARPA in promoting innovation. In this part of the conversation, Rafael and Alex discuss the role of government subsidies and investment in the chip industry in the USA. They also mention the importance of leapfrog innovations and how these can improve people's lives. Rafael explains that SPRIND is active in various areas, including medicine, energy and the environment, and microelectronics and software. He also stresses the importance of education as a foundation for innovation. Rafael describes SPRIND's approach to funding projects and working with private investors. In this part of the conversation, Rafael Laguna and Alex discuss the challenges of keeping founders in Germany and getting them to build their companies here. They talk about the difficulty of finding financing and of preventing takeovers by foreign investors. Rafael emphasizes how important it is that founders build their companies in Germany and Europe in order to create a new Gründerzeit and new industries. They also discuss cultural differences around AI and AGI and stress that it is important to pay attention to the application and regulation of AI in order to manage the opportunities and risks. Rafael also offers insights into potential business ideas in AI and other emerging technologies. Keywords: Gründerzeit, Germany, industry, technology, innovation, entrepreneurial spirit, USA, China, Europe, cloud computing, compute, chips, AI industry, sovereignty, hyperscalers, Europe, DARPA, ARPA, innovations, subsidies, chip industry, leapfrog innovations, medicine, energy, environment, microelectronics, software, education, funding, private investors, founders, Germany, financing, takeovers, foreign investors, AI, AGI, application, regulation, business ideas --- Send in a voice message: https://podcasters.spotify.com/pod/show/alex2459/message
In this episode of the Metaverse Podcast, our host Jamie Burke welcomes Marcello Mari, Founder & CEO at SingularityDAO, and part of the founding team at SingularityNET. This conversation is a deep dive into decentralized AI. If you're interested in any of these topics, tune in! Decentralized AI and its potential impact * AI, AGI (Artificial General Intelligence), and ASI (Artificial Super-Intelligence) * AI development, infrastructure, and tokenization * Autonomous Economic Agents (AEAs) * Decentralized AI in the Middle East: potential for growth and challenges * Outlier Ventures' Riyadh Base Camp. If you're a founder, apply to one of our active Base Camp accelerator programs. #AI #AGI #ASI ------ Whether you're a founder, investor, developer, or just have an interest in the future of the Open Metaverse, we invite you to hear from the people supporting its growth. Outlier Ventures is the Open Metaverse accelerator, helping over 100 Web3 startups a year. You can apply for startup funding here - https://ov.click/pddsbcq122 Questions? Join our community: Twitter - https://ov.click/pddssotwq122 LinkedIn - https://ov.click/pddssoliq122 Watch the video version of this episode on Youtube. For further Open Metaverse content: Listen to The Metaverse Podcast - https://ov.click/pddsmcq122 Check out our portfolio - https://ov.click/pddspfq122 Thanks for listening!
#SmallBusinessAmerica: AI upgrades to voice commands, heading toward AGI (Artificial General Intelligence). @GeneMarks @Guardian @PhillyInquirer https://uk.news.yahoo.com/openai-vs-google-gemini-major-115529023.html?g 1908 Puck Magazine
Can you develop feelings for an artificial intelligence? «Einstein» puts it to the test: five participants and host Kathrin Hönegger let the AI chatbot from «Replika» into their lives for three weeks. Does AI work as a flatmate, romantic partner, or best friend? Can you make friends with an artificial intelligence? «Einstein» runs the experiment: selected participants and host Kathrin Hönegger let an AI chatbot avatar into their lives for three weeks. Does the machine also work as a flatmate, therapist, or friend? The experiment says a lot about the people involved. Generative (text) AI can today act deceptively human and, at best, generate real value. What happens when people let such AI into their social lives? What happens to communication when an AI chatbot is woven intensively into everyday life? Will it always remain a master-and-servant relationship, as with digital assistants? Or do we forget that we are dealing with a machine that gives advice, consoles us, or laughs with us? What if it suddenly challenges or deceives us? Could that even happen? «Einstein» tests intensive human-machine communication in a large experiment. «Einstein» runs the bot experiment: as part of an «Einstein» experiment, participants let a «Replika» into their lives as a best friend, therapist, flatmate, or potential sexual partner. «Replika» is a specialized platform on which you can build and train socially interactive chatbot avatars according to your own needs. Kathrin Hönegger presents the program, and she too takes part in the experiment. Scientific support and context: the experiment is accompanied by Marisa Tschopp, a psychologist and AI expert specializing in human-machine interaction. She helps the participants set up their bots, gives tips on using them, assesses how the communication develops over the course of the experiment, and draws qualitative conclusions from each individual's experience. After three weeks, everyone involved meets for a final exchange and debrief. How can artificial intelligence become truly smart? As an additional focus, «Einstein» pursues this question and visits the research field of neuroinformatics researcher Benjamin Grewe. At ETH, he investigates how AI could learn ever more about its environment and thus increasingly become a learning algorithm, taking the workings of the human brain as his guide. The holy grail of AI research is the creation of so-called «AGI» – «Artificial General Intelligence»: something like general synthetic intelligence that could come close to that of humans. Viewers see how Grewe and his team pursue this line of research, what it takes, and where they stand today.
Ep. 124 | In this engaging, informative, and thought-provoking conversation, artificial intelligence expert Dr. Nikki Mirghafori gives us a clear picture of where AI technology stands at this point and enlightens us as to its gifts, its potential, and its dangers. Nikki, who is also an internationally acclaimed Buddhist meditation teacher, is passionate about helping to bring equanimity to the whole issue of AI and emphasizes that the fear-mongering going on around it is doing all of us a real disservice. She opens our eyes to the enormous potential of AI as applied to global issues such as cleaning up the environment, ending hunger, providing clean water, improving methods of food production—even acting as a wise mentor in supporting people to be their best selves. Nikki tells us that ethical use of AI depends on both designers and users, and that we are not powerless in the way things unfold. How can AI systems be benevolent and supportive and bring out the best in us? Will we be able to maintain our values and ethics as our use of AI continues to expand? If our perception of AI was sort of murky or limited before, this conversation effectively brings us to a much more informed understanding. Nikki explains everything from where we have been exposed to AI without knowing it, to the important distinction between weak/narrow AI and strong/general AI (AGI), personal choice engineering, our natural tendency to anthropomorphize AI, and the difference between benevolent AI and compassionate AI. Nikki is a superb teacher and a pleasure to listen to; this conversation is invaluable in its timeliness and its ability to bring us all up to speed on AI. Recorded January 29, 2024. "There's so much good that can come from this technology… the list is endless how much AI technology can be helpful." (For Apple Podcast users, click here to view the complete show notes on the episode page.) Topics & Time Stamps: Introducing AI research scientist and inventor & gifted meditation teacher and practitioner, Dr. Nikki Mirghafori (01:13) What exactly is AI? (03:07) The important distinction between weak or narrow AI and strong or general AI (AGI) (05:11) AGI (Artificial General Intelligence), which would be self-aware and conscious, is still only theoretical: fear-mongering around AGI is really a disservice to us (06:46) Where have people been exposed to AI without even knowing it? (10:28) The gifts that AI technology can bring are endless (13:33) The most exciting AI applications for Nikki: finding creative ways to clean up the environment, stop hunger, provide clean water, produce our food, and be a mentor in supporting people to be their best selves (15:07) Pattern recognition: taking input patterns and producing output patterns is the heart/brain of AI (17:35) How can AI help us to become wiser and more compassionate? The ethics of AI depend on both designer and user (20:14) Creating AI in our image and how our developmental level fits in—it's in the data that the AI system is fed (28:16) Personal choice engineering (32:11) Kids have become ruder interacting with chatbots like Siri & Alexa: how can we keep our humanity alive and be true to our ethics as we interact with AI? (34:40) Resisting temptation and avoiding sliding down the slippery ethical slope (36:50) What is the mystery of being human?...
The Technology Whisperers - A Technology and Innovation Podcast
Episode Summary: In this riveting episode of The Technology Whisperers, hosts Alistair Ross and Sean Muller are joined by David McDonald, a serial entrepreneur, futurist, and expert in Artificial Intelligence. Together, they dive deep into the current state and future potential of AI technologies, debunking myths, exploring opportunities, and addressing some of the most pressing concerns surrounding AI's impact on society and business. Guest Bio: David McDonald: With decades of experience in the tech industry, David has been at the forefront of leading-edge technologies, especially in AI and blockchain. His career includes significant roles such as CEO of Centrality Japan and leading various ventures and startups to success. David is passionate about the transformative power of AI and its application across different sectors. Key Takeaways: The Role of a Futurist: David shares his insights into what it means to be a futurist in today's rapidly evolving technological landscape, emphasizing the importance of anticipating technology trends and their practical applications. AI Transforming Businesses: Discussion on how AI is revolutionizing industries, with a focus on its application in enhancing efficiency, creating new opportunities, and the challenges of integrating AI into existing business models. Ethical and Security Concerns: The episode delves into the ethical considerations and security implications of AI, including data privacy, the potential for bias in AI models, and the importance of open-source models for transparency. Future of Work: David and the hosts explore the impact of AI on the job market, discussing the potential for AI to replace certain jobs while also creating new opportunities and industries that we can't yet imagine. The Next Big Thing in AI: Predictions for the near future of AI technology, including the rise of generative AI, code abstraction, and the potential for AI to develop applications on demand for personalized user needs. Episode Highlights: David's journey from a geospatial software engineer to an AI and blockchain innovator. The potential for AI to disrupt industries from within, challenging traditional business models. Discussion on the importance of ethical AI development and the role of regulation. Speculation on the arrival of AGI (Artificial General Intelligence) and its implications. Addressing public fears and misconceptions about AI through education and transparency. Listener Engagement: Listeners are encouraged to share their thoughts on AI's impact on society and their own industries. Questions for David or suggestions for future topics can be submitted via social media or the podcast's website. Connect with David McDonald: LinkedIn: https://www.linkedin.com/in/david-m-7526b712/ Closing Thoughts: AI is not just shaping the future; it's actively transforming the present. As technology continues to evolve at an unprecedented pace, staying informed and open to change will be crucial for individuals and businesses alike. Join us next time as we continue to demystify the world of technology, one episode at a time. Contact Details for Alistair and Sean: Alistair Ross alistair@revolutioninfosec.com Web: https://revolutioninfosec.com Linkedin: https://www.linkedin.com/in/alistairjross https://www.linkedin.com/company/revolutioninfosec Sean G Muller seangmuller@technologyleader.co.nz Linkedin: https://www.linkedin.com/in/sgmuller/ X (formerly Twitter): @seangmuller
I share my personal story of investing in Nvidia since 2017 and emphasize the importance of AI for future disruption across all industries, showing that AI will fundamentally change how we work and how business models operate. Finally, I discuss the potential future of AI, including AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence), and the societal impact of these technologies, while expressing my conviction that we could be moving toward an era of post-work economics.
Faced with the outbreak of dictatorships, the world is becoming more dangerous | Heni Ozi Cukier YouTube, Canal Um Brasil. Dictatorships, autocracies, and kleptocracies, subtly embedded and dressed up as communist socialism and presented as good ways to govern, are on the rise thanks to the advance of the 2030 agenda of the New World Order, officially promulgated by President George W. Bush Jr. in mid-2001. This podcast is a map of the current situation based on international geopolitics, with the participation of Professor HOC. It is extremely important to listen to this podcast in a NEUTRAL AND NONPARTISAN way, because the geopolitical scenario is, without a doubt, an immense and complex game, so much so that all of its events are interconnected. The objective is a single one: dictatorial control through technocratic monitoring, which in turn uses AI and AGI - "Artificial General Intelligence" - the highest level of excellence among artificial intelligences, since it acts exactly like the Skynet system depicted and conceived by director James Cameron, the same author as the feature film Avatar. Instagram @multiverse5dpodcast
Domestic deepfake scams are on the rise. Clearly, this is the year of deepfake videos, and this latest scam, which used the likeness of the entirely uncontroversial, somewhat forgotten, middle-aged musician from Novi fosili, gave us a reason to discuss them once again. Once more, Ana Marija talked about artificial intelligence, sharing in the podcast (and in her report!) the predictions about the famous AGI (Artificial General Intelligence) she heard from Zack Kass, a former OpenAI employee. Besides AI, we also commented on the monthly subscriptions recently introduced by the apps we would least expect it from – the one for messaging and the one for ordering taxis._______________0:00 Intro 1:02 Deepfakes will flood us this year 4:10 Mojmira Pastorčić and Kolinda Grabar Kitarović invite you to invest in cryptocurrencies? 9:35 Why and how are AI scammers targeting Vladimir Kočiš Zec? 16:35 What is the "obituary spam" industry, and how do you spot a deepfake? 26:07 More and more apps, like Viber, offer subscriptions? Who even needs that? 35:30 "The golden age of all those lifestyle startups is over" 42:15 What did Zack Kass, former Head of Go-To-Market at OpenAI, say about AGI? 51:24 TOP and FLOP 58:15 SDP has started using generative AI_______________
Summary: Harvey Singh discusses the concept of generative AI and its growing popularity. Generative AI refers to the ability of AI systems to not only answer questions but also generate creative content, such as essays, images, music, and code. This technology has vast potential in various fields, including e-learning and content generation. Singh explains the importance of prompts in guiding generative AI systems and highlights the integration of generative AI into learning management systems. He emphasizes the need to view AI as an opportunity for creativity and problem-solving rather than a threat. Takeaways: * Generative AI refers to AI systems that can generate creative content, such as essays, images, music, and code. * Generative AI is popular because it offers endless possibilities for content generation and can save time and effort in various tasks. * Prompts are essential in guiding generative AI systems and ensuring accurate and relevant output. * Generative AI can be integrated into learning management systems to enhance content generation, create conversational interfaces, and personalize learning experiences. * The future of generative AI holds exciting possibilities, including the potential for AGI (Artificial General Intelligence) and the automation of complex tasks. Links: What is Generative AI and Why is it so Popular? Instancy.com Get full access to Lessons from Learning Leaders at lessonsfromlearningleaders.substack.com/subscribe
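To make the point about prompts concrete, here is a minimal, hypothetical Python sketch of prompt-guided generation using the OpenAI client library; the model name, the prompts, and the temperature value are illustrative assumptions, not details from the episode, and any provider with a chat-style API would work the same way.

# Minimal sketch of prompt-guided generation (illustrative assumptions only).
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A system prompt fixes the model's role; the user prompt spells out the
# task, audience, format, and length -- the kind of guidance Singh describes.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are an instructional designer writing e-learning copy."},
        {"role": "user",
         "content": "Write a three-bullet lesson summary on password hygiene "
                    "for non-technical staff. Keep each bullet under 20 words."},
    ],
    temperature=0.3,  # a lower temperature favors consistent, on-spec output
)
print(response.choices[0].message.content)

The same pattern scales from one-off content generation to the learning-management-system integrations mentioned above: the system prompt carries the course context, and the user prompt carries the learner's request.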
In today's bitalk we dive into the depths of artificial intelligence with Sebastião Villax. Why are we so afraid of artificial intelligence? What do companies do with our data? Did ChatGPT use illegal data? Are we living in a Black Mirror episode? Is this finally the time Artificial Intelligence kills off call centers? With an initial background in Applied Chemistry and a journey spanning rugby, finance, and technology, Sebastião brings a unique perspective to the table, backed by highly diverse work experience. He is currently Product Strategy Coordinator at Defined.ai. Formerly known as DefinedCrowd, the company went through a significant transformation in 2021, with a new branding that reflects its commitment to the evolution of artificial intelligence. Under the leadership of Portuguese founder Daniela Braga, the company not only offers a data marketplace for training AI models, counting giants such as Google and IBM among its clients, but also stands out for its ethical vision, seeking models free of bias. With more than 60 million dollars in investment, Defined.ai coordinated the Accelerat.AI project, aiming to develop an innovative virtual assistant for call centers and marking its entry into a new line of business. The company had also already outlined its vision of becoming the "Netflix of AI," offering clients customizable resources to accelerate the creation of artificial intelligence models. With projects financed by the Recovery and Resilience Plan, Defined.ai emerges as a key player in the international AI landscape, contributing to technological progress and the promotion of ethical practices in the sector. Here are some of the best moments: 00:00 — Intro 01:14 — Fear of Artificial Intelligence 06:42 — When will we have AGI (Artificial General Intelligence)? 11:30 — Artificial Intelligence vs. Human Intelligence 16:30 — Are we putting the cart before the horse? 22:55 — Artificial Intelligence makes mistakes 24:31 — The scariest thing about Artificial Intelligence 28:18 — It all starts with data 32:16 — Pushing the ethical limits of data use 44:20 — It feels like a Black Mirror episode 47:03 — FRIGIDEIRA 49:25 — Collecting data ethically 1:01:06 — What is the risk of companies using my data? 1:06:54 — Illegal data trafficking 1:11:35 — Europe will not manage to regulate 1:16:16 — Will Artificial Intelligence kill off call centers? 1:23:05 — The state is not ready for change 1:32:06 — Enough with the parrots. Subscribe to our channel to keep up with all our episodes: http://youtube.bitalk.pt Listen to us on Spotify: http://spotify.bitalk.pt Listen to us on Apple Podcasts: http://apple.bitalk.pt And follow us on social media: • Instagram: http://instagram.bitalk.pt • TikTok: http://tiktok.com/@bitalk_podcast • Linkedin: http://linkedin.bitalk.pt • Facebook: http://facebook.bitalk.pt A GAFFVisuals production https://gaffvisuals.com/
In this episode of Tech2Date we talk about Meta's relaunch in the AI sector, with the very clear promise of redrawing the state of the art, which will lead, sooner or later, to the advent of AGI (Artificial General Intelligence). Apple Vision Pro will launch in a few weeks, but the omens are not the best; we try to understand why. If you want to give me feedback to improve the content or format of this podcast, don't hesitate to write to me via the Twitter and Instagram pages. Do you know people who might be interested in the content of this podcast? Share the episode with them!
Tech News: Vision Pro, Galaxy S24, Meta's 340,000 GPUs, and one more lawsuit against OpenAI. Hello, hello, technology lovers! Welcome to another episode of RTT. Today we have hot gossip from the tech world!
What you will learn from this episode: Discover how AI can boost your sales in the fastest time Learn about different apps you will want Hear how to maximize AI in your beauty business Are you a beauty business owner or professional looking to up your game? This episode is a treasure trove of AI-driven secrets that promise to streamline your operations and amplify your marketing efforts. Our host, April Meese, shares personal experiences with AI tools that act as digital assistants—taking meticulous notes, generating SOPs, and even responding to client communications. Plus, get a sneak peek into the potential of AI in crafting your marketing blueprint. Dive into this episode and transform your beauty business with the savvy use of artificial intelligence! Topics Covered: 1:18 - AI can extract meeting minutes on your behalf 2:20 - Create Standard Operating Procedures (SOPs) with AI 4:32 - Proper prompting of AI for the best result 5:26 - The AI Marketing Blueprint Workshop 9:24 - Clone yourself with AI 10:43 - AGI: Artificial General Intelligence Key Takeaways: "There are apps and tools that are affordable and easy. Just literally a click of the button and it will save you time and it will just make your life so much easier." - April Meese "If AI can pass the bar exam... it can create the marketing for us." - April Meese
AI for Creatives is a podcast by Creatives for Creatives connecting art, innovation, and humanity. Segment 1: Kamilah and Nova share what they've been up to. Kamilah works with a lot of creatives, and she has been helping creatives with strategies and organization through a free workshop called "Developing a Strategy for Work and Business". For more info visit greaterthanequal.com/strategyworkshop. Nova shares that she's been deep into work on her new book – The Jockey on the Horse - Take the Reins and Stay Ahead in the Age of AI. It's all about how to use an abundance mindset to stay ahead and in control in the age of AI. Segment 2: Nova and Kamilah start to talk about the drama that's been happening at OpenAI. Nova explains the firing of Sam Altman and the craziness that has since ensued. Kamilah gives her perspective on it. Nova asks what Altman's eventual return to OpenAI will mean for the consumer. Segment 3: Nova talks about the rumors related to the behind-the-scenes reasons for Altman's initial firing. What if the commercialization of AI moves too fast (without the proper safeguards in place)? Nova then talks a bit about the new board at OpenAI. She expresses her concern that, overall, the board is lacking in diversity, with nearly all men. Kamilah shares her concern about addressing bias. Segment 4: Nova starts to talk about OpenAI and the leak around QStar. It's the breakthrough that OpenAI can now do grade-school math. The big deal is that it can only advance from there, and that it brings us closer to AGI (Artificial General Intelligence). Kamilah explains how math is a benchmark for reasoning. Segment 5: Nova questions whether this will cause faster development and placement of safeguards on multiple levels. Kamilah, also optimistic, feels this won't be bad, but rather a more humanistic option. It will be a smarter, savvier tool for everyday consumers. However, Nova points out that the safety of various livelihoods is still important. Segment 6: Nova starts to get into Josh Bickett and OthersideAI developing new automations for the computers themselves. Now you get self-driving computers using vision-based technology. Kamilah explains how it's like an interface between the human and the computer. She connects it to other AI interfaces available and upcoming. Nova relates some of CEO Matt Shumer's thoughts on the breakthrough. Conclusion: Things in the AI space are moving so quickly. Nova and Kamilah are optimistic about the future but urge that we all stay aware. Nova and Kamilah conclude with thoughts on how all these developments will impact creatives directly and can help break down barriers. AI Generative AI AI for Creatives Crypto for Creatives Web3 ChatGPT GPT4 OpenAI Creatives The Future Age of AI Blockchain Self-driving Computers Pink Kangaru
The day before he was fired, Sam Altman said, "Is this a tool we built, or a creature?" We discuss the mysterious Q* letter that warned the OpenAI board of a powerful AI discovery that could threaten humanity. Did OpenAI achieve AGI? This episode is sponsored by BetterHelp. Visit betterhelp.com/DUBIOUS today to get 10% off your first month of therapy. If you like our content, please become a patron to get all our episodes ad-free. We discuss the five days of chaos and drama at OpenAI: an industry poster-child CEO double-crossed by his chief scientist, a cloak-and-dagger board coup, fears of killer AI, and a dead-of-the-night staff revolution that changed the balance of global tech power. Also: while Sam Altman is a billionaire, his younger sister Annie has supported herself doing sex work, "both in person and virtually", on OnlyFans. OpenAI has a very unusual structure: it is a for-profit company overseen by a nonprofit board, with a corporate culture somewhere in between. This recent coup seems to have ousted the nonprofit faction of OpenAI, the people who were concerned with artificial intelligence safety and ethics. Money won. We analyze how this shift might impact AI tech globally. We also discuss Q* (pronounced Q-Star) and whether OpenAI has gotten close to, or even achieved, AGI (Artificial General Intelligence), the step before Artificial Superintelligence (ASI). Before OpenAI CEO Sam Altman's firing, several staff researchers – including Ilya Sutskever, the chief scientist – wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that could threaten humanity. The day before he got fired, Sam Altman said, "Is this a tool we have built or a creature we have built?" "Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit. 1 2 We also talk about Sam Altman's history and his achievements: Y Combinator (Stripe, Airbnb), Worldcoin (which uses a silver orb to scan people's eyeballs in exchange for crypto tokens and might be useful for discerning humans from AI online in the future), and of course his personal life, specifically his relationship with his sister Annie Altman. 3 The recent changes at OpenAI will impact all of us, so it's important to understand the implications of these developments within the hottest company in tech right now. It is essential to put things into context and look at the global picture: the US and China are in a race for AI supremacy. 4 5 1. Charlie Warzel, "The Money Always Wins," The Atlantic, November 2023. 2. Anna Tong, Jeffrey Dastin, and Krystal Hu, "OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say," Reuters, November 2023. 3. Elizabeth Weil, "Sam Altman Is the Oppenheimer of Our Age: OpenAI's CEO thinks he knows our future. What do we know about him?" Intelligencer, September 2023. 4. David Brooks, "The Fight for the Soul of AI," The New York Times, November 2023. 5. Dave Lawler, "How the U.S. is trying to stay ahead of China in the AI race," Axios, June 2023.
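For context on the name: outside reporting has speculated that "Q*" alludes to Q-learning, a standard reinforcement-learning method, possibly combined with search; none of this has been confirmed by OpenAI. Purely as a hedged illustration of the technique the name evokes, here is a minimal tabular Q-learning sketch in Python; the toy chain environment and every parameter value are invented for illustration and have nothing to do with OpenAI's unpublished work.

# Hedged illustration only: tabular Q-learning on a toy 5-state chain.
import random

N_STATES = 5
ACTIONS = (0, 1)                       # 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move along the chain; reaching the last state pays reward 1."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                   # training episodes
    s = 0
    for _ in range(100):               # cap episode length so loops terminate
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        nxt, r = step(s, a)
        # Core Q-learning update: nudge Q(s, a) toward r + gamma * max Q(next).
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt
        if s == N_STATES - 1:
            break

print(Q)  # "go right" (action 1) ends up with the higher value in each state

The appeal of methods like this is that the system improves by trial and error against a reward signal rather than by imitating data, which is why reporting linked the rumored math results to reasoning progress.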
In Episode 11 of the "Relentless Podcast with Kyle Becker," we delve into the rapidly evolving landscape of artificial intelligence and its profound impact on society. From the eerie parallels drawn with iconic sci-fi films like "Terminator 2" and episodes of "Black Mirror," to the real-world advancements and ethical dilemmas posed by AI and AGI (Artificial General Intelligence), this podcast is a deep dive into the intersection of technology, policy, and human values. We explore how the explosion of the information age and the advent of AI are reshaping our world, touching on topics such as the role of AI in law enforcement, the potential deployment of AI-controlled drones in warfare, and the growing concerns over digital dehumanization. This episode also sheds light on the political and social implications of AI in the context of current global events and the debate surrounding the use of autonomous weapons. Join us as we navigate through the complex maze of technological advancements, ethical considerations, and the future of human-AI interaction. Whether you're a tech enthusiast, a policy maker, or simply curious about the future of AI, this episode offers valuable insights and stimulates a crucial conversation about the path we are paving for tomorrow's world. Find Kyle on Twitter at @KyleNABecker for breaking news, analysis, and more. Visit BeckerNews.com, your destination for stories that cut through the noise. Join the community at TheKyleBecker.Substack.com for exclusive content and engaging discussions. Brought to you by Becker News LLC, this podcast is a must-listen for anyone seeking a fresh, independent voice in the media landscape.
Zero to Start VR Podcast: Unity development from concept to Oculus test channel
In the few days since OpenAI's board of directors removed its co-founder and CEO Sam Altman, the future of the company, and what it means for the race to AGI (Artificial General Intelligence), is anyone's guess. Joining me to put this fast-moving saga into perspective is Avi Bar-Zeev, founder and president of Reality Prime, an independent consulting service providing world-class expertise in AR & VR design and engineering since 1997. Avi Bar-Zeev has been at the forefront of Spatial Computing for over 30 years. He co-invented the HoloLens AR headset in 2010 and led the Experience Prototyping team for Apple's Vision Pro and more from 2016 to 2019. Most recently, he is a founder and President of The XR Guild, a new non-profit membership organization that seeks to support and educate professionals in XR on ethics and the most positive outcomes for humanity. However this techno-drama plays out, we are approaching a threshold of computing power that was once the domain of science fiction and fantasy. The builders are already living there. How do you think AI will impact human rights, the way we live and work, or how we define what makes us unique? Let us know on the Zero to Start LinkedIn page. CONNECT WITH AVI XR Guild LinkedIn Mission Responsible - Designing a Future We Choose to Accept - Dec 10th CONNECT WITH SICILIANA sicilianatrevino.com LinkedIn OPENAI MAYHEM Sam Altman to return as CEO of OpenAI - TechCrunch Microsoft CEO Satya Nadella on the OpenAI Debacle - On with Kara Swisher Emergency Podcast: Altman Fired at OpenAI - Big Technology Podcast OpenAI Staff Threaten to Quit Unless Board Resigns - Wired OpenAI's Sutskever says he regrets board's firing of Altman - Axios OpenAI's Non-profit Board at the heart of Sam Altman's Ouster - Sharon Goldman OpenAI Dev Day Sam Altman Opening Keynote - YouTube OpenAI charter OpenAI Co-Founder & Chief Scientist Ilya Sutskever - No Priors Podcast Ilya Sutskever: Deep Learning - Lex Fridman Podcast MORE RESOURCES Metaverse Safety Week Dec 10th-15th Zero to Start episode with Kavya Pearlman, Founder XRSI
In this episode, our host, Michael Brooks, engages in a Saturday chat with Charles Cormier, an ambitious entrepreneur with a mission to make $1 trillion in less than 15 years through AI and AGI. Charles, a networking maven, shares insights on his unique approach to meeting 15 CEOs a day and spills the beans on his grand vision to revolutionize capitalism.
AI for Creatives is a podcast by Creatives for Creatives connecting art, innovation, and humanity.
Segment 1: Kamilah and Nova share what they've been up to. Kamilah has been doing what she calls "strategy sessions": talking to people who are experts in their fields for information and ideas, including entrepreneurs and thought leaders. It's on her Greater the Equal YouTube channel. Nova talks about the latest conferences she's attended and spoken at, and lots of travel. She also talks about the House of Nova Collective and what that entails, and shares her first experience with "Minnesota Nice." She also shares that her new book – The Jockey on the Horse: Take the Reins and Stay Ahead in the Age of AI – is coming soon! Nova and Kamilah talk about the abundance mindset, which segues into this week's topic.
Segment 2: Nova and Kamilah start to talk about the news regarding ChatGPT and GPT-4. Four things were brought up:
· GPT-4 Turbo – which encompasses pricing and modality additions
· Assistants API – which is more for developers looking for expanded ways to use the tool
· Custom GPTs – you can have your own version of ChatGPT
· AGI – Artificial General Intelligence
Nova and Kamilah begin to get into it, with Kamilah explaining tokens as they relate to GPT-4. The expanded context will make it easier to input larger documents and handle more demanding tasks. Nova explains the token equivalent of book pages (128,000 tokens ≈ 300 pages). You can now upload outside documents.
Segment 3: Kamilah talks about how ChatGPT's training data has been updated through April 2023. Nova agrees that more current data will be extremely useful for research, along with the new modalities. DALL-E being integrated is game-changing according to Nova, and she also talks about Be My Eyes. Kamilah talks about text-to-speech features.
Segment 4: Nova and Kamilah begin talking about copyright and AI. OpenAI is offering Copyright Shield for copyright issues, saying it will cover the legal fees if a user is sued. Kamilah questions where the input has come from to produce the output, how that impacts copyright and legalities, and where that leaves the people bringing legitimate suits. Is it altruistic, or meant to discourage lawsuits?
Segment 5: Nova brings up the custom GPTs and how far we are from AGI. Eight or nine months after starting the show – check out the Beyond ChatGPT 4 episode – they made several accurate predictions. Kamilah considers what custom version she might need. Developers now have a majorly useful tool. Nova talks about being able to use natural language with the GPTs, and points out that humans, especially those with expertise, will be absolutely necessary to guide the new tools.
Segment 6: Kamilah asks Nova to explain AGI – Artificial General Intelligence: a machine that can perform any intellectual task that a human can. This raises many concerns for many people, including questions about what we humans contribute and can contribute with AI in the picture. Different perspectives and voices will be needed at the table to work out protecting and expanding knowledge and creativity in the age of AI.
Conclusion: With new tools constantly being developed, the right people need to be in the room with a hand in making sure ethics and inclusivity are considered and maintained.
AI Generative AI AI for Creatives Crypto for Creatives Web3 ChatGPT GPT4 Custom GPTs Creatives The Future Age of AI Blockchain Awareness Pink Kangaru
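The 128,000-token figure Nova cites is easy to check yourself with OpenAI's open-source tiktoken tokenizer. A minimal sketch, assuming the cl100k_base encoding used by GPT-4-era models and a made-up ~400-word sample page:

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

# A stand-in "book page": roughly 400 words of repeated text.
sample_page = "The quick brown fox jumps over the lazy dog. " * 40
tokens_per_page = len(enc.encode(sample_page))

context_window = 128_000
print(f"Tokens in sample page: {tokens_per_page}")
print(f"Approx. pages per 128k context window: {context_window // tokens_per_page}")
```

With real prose at 300 to 500 tokens per page, 128,000 tokens does land in the neighborhood of 300 book pages, which matches the rule of thumb from the episode.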
Talking to animals the way we talk to people: an age-old dream. The latest AI research could soon make that dream a reality... at least a little. Fritz and Gregor explore how it might feel to talk to a whale or a bird. They also test the newest ChatGPT features and build a hidden-object picture. About the hosts: Gregor Schmalzried is a freelance tech journalist and consultant who works for Bayerischer Rundfunk and Brand Eins, among others. Fritz Espenlaub is a journalist and presenter at Bayerischer Rundfunk and 1e9. In this episode: 1:00 What do we want to say to our pets? 3:25 AGI (Artificial General Intelligence) 4:45 Do animals even speak? 10:00 Working on TierGPT 14:40 Is the human-animal relationship changing? 18:00 What are we doing with AI? Links: "Wo ist Walter" made by KI https://twitter.com/Fritz_Espenlaub/status/1724110509572538791 KI spricht Delfin https://www.deutschlandfunk.de/ki-spricht-delphin-koennen-wir-tiere-verstehen-dlf-e5e35612-100.html Artificial Intelligence Could Finally Let Us Talk with Animals https://www.scientificamerican.com/article/artificial-intelligence-could-finally-let-us-talk-with-animals/ Google DeepMind and AGI https://arxiv.org/abs/2311.02462 Songs of the Humpback Whale https://en.wikipedia.org/wiki/Songs_of_the_Humpback_Whale_(album) Editing and production: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Hendrik Loven, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried Contact: We welcome questions and comments at podcast@br.de. Support us: If you enjoy this podcast, we'd appreciate a rating on your favorite podcast platform. Subscribe to the KI-Podcast in the ARD Audiothek or wherever you listen to podcasts, so you never miss an episode. And feel free to recommend us to others!
Introduction: David Hundley is a Machine Learning Engineer who has been deeply involved in experimenting with Large Language Models (LLMs). Follow along on his Twitter.
Key Insights & Discussions:
Discoveries with LLMs: David recently explored a unique function of LLMs that acted as a 'dummy agent'. This function would prompt the LLM to search the internet for a current movie, bypassing its training limitations. David attempted to use this function to generate trivia questions, envisaging a trivia game powered by the LLM. However, he faced challenges in getting the agent to converge on the desired output. Parsing the LLM's responses into a structured output proved especially difficult.
Autonomous Agents & AGI: David believes that AGI (Artificial General Intelligence) essentially comprises autonomous agents. The prospect of these agents executing commands directly on one's computer can be unnerving. However, when LLMs run code, they can operate within a contained environment to ensure safety.
Perceptions of AI: There's a constant cycle of learning and revisiting motivations and goals in the realm of AI. David warns against anthropomorphizing LLMs, as they don't possess human motivations. He stresses that the math underpinning AI doesn't align with human emotions or motivations.
Emergent Behavior & Consciousness: David postulates that everything in the universe sums up to a collective result. There's debate over whether living organisms possess true consciousness, and what that means for AGI. The concept of AGI emulating human intelligence is complex: the human psyche is shaped by countless historical experiences and stimuli, so if AGI were to truly replicate human thought, it would require vast amounts of multimodal input. A challenging question raised is how one tests for consciousness in AGI. David believes that as we continue to push technological boundaries, our definition of consciousness will keep shifting.
Rights & Ethics of AI: With advancing AI capabilities, the debate around the rights of AI entities intensifies. David also touches on the topic of transhumanism, discussing the trajectory of the universe and the evolution of humans. He contemplates potential paths of evolution, like physically merging with technology or digitizing our consciousness.
AI's Impact on Coding & Jobs: David reflects on the early days of AI in coding. He acknowledges the transformative potential of AI in the field but remains unworried about AI taking over his job. Instead, he focuses on how AI can aid in problem-solving. He describes LLMs as "naive geniuses": incredibly capable, yet still requiring guidance.
Open Source & OpenAI: David discusses the concept of open source, emphasizing the transparency it offers in understanding the data and architecture behind AI models. He acknowledges OpenAI's significant role in the AI landscape and predicts that features like ChatGPT plugins will bridge the gap to further automation.
Math's Role in AI: The conversation delves into the importance of math in AI, with David detailing concepts like gradient descent and its role in training neural networks. David also touches on the evolution of AI models, comparing the capabilities of models with 70 billion parameters to those with 7 billion. He predicts that models with even more parameters, perhaps in the trillions, will emerge, further emulating human intelligence.
Future Prospects & Speculations: David muses on the future trajectory of LLMs, drawing parallels with the evolution of AlphaGo to AlphaZero.
The episode concludes with philosophical musings on the nature of consciousness and its implications on world religions.
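To make the gradient-descent concept David describes concrete, here is a minimal illustrative sketch, not from the episode: plain NumPy fitting a line by repeatedly stepping its two parameters downhill along the gradient of the loss, which is the same mechanism, at toy scale, used to train neural networks.

```python
import numpy as np

# Toy data: y = 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3 * x + 1 + rng.normal(0, 0.1, 100)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate (step size)

for step in range(500):
    y_pred = w * x + b
    err = y_pred - y
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w   # step downhill along the gradient
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (true values: 3, 1)")
```

A 70-billion-parameter model runs this same loop in principle, just over billions of parameters and far more elaborate loss surfaces.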
Join us on a captivating episode of RightOffTrack as we dive into the world of artificial intelligence (AI) and virtual reality (VR) with the brilliant mind of Eugene Capon. As a principal advisor in AI, XR, and Metaverse Standards, Eugene shares his journey and insights into the cutting-edge technologies shaping our future. WARNING: This episode can make you uncomfortable as we explore the potential future of AI. In this enlightening conversation, we explore various topics: From AI's Origins to the Future: Trace the history of AI from its inception in the 1950s to the present-day "narrow AI" era. Understand the potential of AGI (Artificial General Intelligence) and the concept of the singularity, where AI surpasses human capabilities. The Impact of VR: Delve into the transformative power of virtual reality on memory retention and learning. Discover the ethical considerations and safety measures required in VR environments. Challenges and Opportunities: Explore the obstacles faced by AI and VR in their journey towards mass adoption. Gain insights into the potential benefits and risks these technologies pose to humanity. The Journey of a Futurist: Learn from Eugene's experiences as a futurist, trend-spotter, and high-tech influencer. Embrace the power of curiosity and the value of continuous learning. AI's Role in Empowerment: Uncover the potential for AI to empower individuals and enhance various fields, from medical advancements to energy resources. Discuss the societal implications and the importance of ethical AI development. In this thought-provoking episode, Eugene Capon encourages us to embrace our unique ideas and pursue our passions fearlessly. Join us as we explore the limitless possibilities of AI, VR, and the future of humanity. Listen now and embark on a journey into the fascinating world of emerging technologies! Connect with the Guest: Websites: HighTechInfluencer.com / www.studiocapon.com Email: manager@studiocapon.com Instagram: @capondesign Twitter: @capondesign TikTok: @eugenecapon LinkedIn: Eugene Capon Design Credit: Ubaid Ur Rehman
ChatGPT took the world, and our imaginations, by storm when it became the fastest consumer app in history to reach one million users, doing so in just five days. The opportunities and the threats are boundless, but we don't know what we don't know. We invited technology analyst Jeremiah Owyang to help us sift through the reality and the hype. Owyang minces no words when he asserts that AI will be the most consequential technology to debut in our lifetimes. The upheaval and displacement will be great, but it will also enable us to reset our relationship to work towards what is inspirational and enlightening. Will AI help us become more human? How will we protect ourselves from bad actors? How will we confirm what is true? How must society adjust? How close is AGI (Artificial General Intelligence) to becoming on par with human intelligence, and how will we know? Will our behavior change if AGI learns how to treat us by how we treat others? There's no squeezing the genie back into the bottle. Join us for a peek at the brave new world that lies ahead.
Aki is an advisor to Invisible focusing on Artificial Intelligence and helping Invisible to do market validation. Find Aki here on Twitter: https://twitter.com/AkiBalogh
Show Notes
How will AI change the peer review process?
The pyramid of ideas evolves from knowledge to wisdom.
The internet has significantly opened up access to data.
Ideas are networks that are supported by data.
Advanced AI is capable of processing and analyzing large amounts of data.
AI can extract useful information from data, but it's still humans who synthesize wisdom from it.
The individual is knowledgeable about the peer review system. There's a high level of analysis involved in the process.
The scientific publishing sector was greatly impacted by the emergence of open-source journals.
Why is the integration of AI in certain fields still a distant prospect?
In marketing content, there's a technical component. However, the essential need is for relevant content.
AI is creating a different game in various sectors.
AI has the potential to replace unskilled labor and augment our base capabilities.
AI could help lower the costs of education. Chegg's earnings have significantly dropped.
If the technology for self-driving cars works well, it could revolutionize transport.
AI developed and used within legal frameworks is safer.
The developing world could greatly benefit from educational AI.
OpenAI recently published an epilogue.
The human brain functions like a parallel system, capable of data analysis and creativity.
Aki Balogh got into AI due to his interest in data analysis, his computer science and business degrees, and his experience in management consulting.
The emergence of Hadoop has enabled the handling of very large data sets. The challenge now is figuring out what we can do with all this data.
Generative AI and tools like Spark are revolutionizing data analysis.
The introduction of AI in education may lead to inevitable job losses. Education is the only solution to this problem.
In 2009, I was working in Abu Dhabi under tough conditions. Someone gave me a computer and suggested I use Khan Academy, which is free.
The use of AI, like GPT-4, can empower even non-technical individuals.
The challenge of integrating new technologies into a system remains.
The enemy is wasted human potential. If we apply ourselves, we can figure out everything.
The idea of writing as a cyborg brings into question the limitations of data.
The omnipresent force of technology can sometimes overwhelm our nervous systems. We need better strategies to consume this tech.
MarketMuse attempted to diagram the whole channel to map the entire value chain of a product.
The goal of AGI (Artificial General Intelligence) is a contentious topic.
Every person has multiple talents that can be utilized.
AI safety is a critical concern as we develop more sophisticated AI. AI safety can be likened to raising a child, with an emphasis on preventing an "Oedipus complex."
The singularity concept involves humans and AI merging, much like the Borg.
AI can significantly contribute to reducing waste and improving efficiency.
AI might help us reach our capacity for progress faster.
The role of VCs is crucial in the growth of new tech companies.
Values come from wisdom, which we hope to instill in our children.
It's not the strongest who survive, but the most adaptable.
The potential for building infrastructure on Bitcoin is being explored.
We've been freaking the f**k out, so we brought in an expert to answer some very basic questions about EXACTLY how frightened we need to be about the upcoming AI apocalypse. (Or is it? It is). Terence Maher (https://www.linkedin.com/in/terence-mahier-65b584b1/) is co-founder of Panacea Studio, where he's building tools with AI to fight misinformation. He's also the force behind the amazing @depandoraox.ai on Insta.
00:00 Introducing Terence Maher
14:20 How scared should we really be of A.I.?
18:00 What makes A.I. different than previous tech innovations?
18:00 Why AutoGPT is… a problem.
20:20 Optimization function + Ami's case for murdering elephants
25:40 Could A.I. REALLY set off a nuke?
26:30 AGI: Artificial General Intelligence — what Elon Musk is warning us about
27:50 But what does intelligence even mean?
28:00 How did ChatGPT just appear out of nowhere??
33:50 The "T" in GPT
35:40 Okay, who's getting RICH off OpenAI? Is it a company or a non-profit?
37:50 What would REGULATION for A.I. even look like?
43:50 Chances that regulation actually happens
45:00 Ami goes full FOX NEWS DADDY about climate change
55:00 The Singularity
56:40 Where will we be 1-YEAR from now?
01:04:05 Terence's new life as a content creator
01:09:30 The Drake/Weeknd A.I. pop song
AutoGPT, or Autonomous GPT, is an open-source project that basically turns ChatGPT into a baby AGI (Artificial General Intelligence). This thing is crazy! Given the proper tools, it can automate just about any digital process: anything from making scripts to recording, editing, marketing, and publishing a movie! It's nuts! Twitter: https://twitter.com/societyamiright?s=21&t=KacaE0dpyCnt1Xx-Mp9r6w Spotify: https://open.spotify.com/show/49Mt0BcgNjmIbVCrV5kOav?si=1624a54625a54741 Apple Podcast: https://podcasts.apple.com/us/podcast/society-am-i-right/id1507978353 TikTok: https://www.tiktok.com/@societyamiright?_t=8bTaCV0tfbf&_r=1 Instagram: https://instagram.com/societyamiright_podcast?igshid=YmMyMTA2M2Y=
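For the curious, the "autonomous" part of AutoGPT boils down to wrapping an LLM in a plan-act-observe loop. Here is a minimal sketch of that pattern; the llm() and run_tool() functions below are stand-in placeholders for a real model call and real tools, not AutoGPT's actual API:

```python
# Minimal sketch of an AutoGPT-style agent loop: the model proposes an
# action, a tool executes it, and the observation is fed back in.
# llm() and run_tool() are stubs, not AutoGPT's actual interfaces.

def llm(prompt: str) -> str:
    """Placeholder for a language-model call that plans the next action."""
    return 'ACTION: search("open-source AGI projects")'

def run_tool(action: str) -> str:
    """Placeholder tool executor (web search, file I/O, shell, etc.)."""
    return "OBSERVATION: AutoGPT is an open-source autonomous agent."

goal = "summarize what AutoGPT is"
history = [f"GOAL: {goal}"]

for step in range(3):                    # cap iterations so the loop halts
    action = llm("\n".join(history))     # 1. model plans the next action
    if action.startswith("DONE"):
        break
    observation = run_tool(action)       # 2. a tool executes the action
    history += [action, observation]     # 3. the result feeds the next turn

print("\n".join(history))
```

The power, and the risk, comes entirely from what run_tool() is allowed to do: give the loop a browser, a filesystem, and a shell, and it can chain real-world steps toward a goal with no human in between.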
On March 15th, 2023, the world changed forever. The release of GPT-4 expanded the capability of AI to the point where a software bot could pass the bar exam and get a law degree, could get a degree in Biology at university, could understand images and sound as well as text, and even exhibited signs of AGI – Artificial General Intelligence – where it appeared to become self-aware. Let's talk about how this will change all of our lives and what opportunities it presents.
GPT-4 shows signs of Artificial General Intelligence (the ability to abstract, to learn from mistakes, to reason logically, and to understand social situations). Who will still need experts who monetize their knowledge today if AGI (Artificial General Intelligence) can do it faster and better? Many are already worried about their futures. The video offers fascinating insights into the possibilities and limits of AI (artificial intelligence) such as Chat GPT-4 from OpenAI.
In this episode, Nick walks us through his process of understanding AGI, which has become much more relevant with the latest release of GPT-4 by OpenAI. He also shares examples of proposed tests for human-level AGI, like the Coffee Test (Wozniak) and the Turing Test. Prathamesh and Nick also briefly discuss brain-computer interfaces. Furthermore, Nick shares his experience building his startup, which aims to reimagine the college experience: The Residency runs tight-knit cohorts where you can live and build with friends, develop your passion, and get funded. ... Featured Guest: Nick Linck, Founder, The Residency
Spatial Web and the Era of AI - Part 1 #futureofai #artificialintelligence by Denise Holt
Deep Learning Language Models vs. Cognitive Science
The pioneering goal of Artificial Intelligence has been to understand how humans think. The original idea was to merge intellectual and computational contributions to learn about cognition. In the 1990s, a shift took place from a knowledge-driven AI approach to a data-driven AI approach, replacing the original objectives with a type of Machine Learning called Deep Learning, capable of analyzing large amounts of data and drawing conclusions from the results. Deep Learning is a predictive machine model that operates off of pattern recognition. Some people believe that if you simply feed the model more and more data, the AI will begin to evolve on its own, eventually reaching AGI (Artificial General Intelligence), the 'Holy Grail' of AI. This theory, however, is viewed as deeply flawed because these AI machines are not capable of "awareness" or the ability to "reason." With Machine Learning/Deep Learning AI, there is no "thinking" taking place. These predictive machines are devoid of any actual intelligence. Scaling into bigger models by adding more and more parameters, until these models consume the entire internet, will only prove useful to a point. A larger data bank will not be able to solve for recognizing toxicity within the data structures, nor will it enable the ability to navigate sensitive data, permissioned information, protected identities, or intellectual property. A larger data bank does not enable reasoning or abstract thinking. For AI to achieve the ultimate goal of AGI, we need to be able to construct cognitive models of the world and map 'meaning' onto the data. We need a large database of abstract knowledge that can be interpreted by a machine, imparting a level of 'awareness'.
Newton vs. Einstein
Model-Based AI for Active Inference is an Artificial Intelligence methodology that possesses all the ingredients required to achieve the breakthrough to AGI by surpassing the fundamental limitations of current Machine Learning/Deep Learning AI. The difference between Machine Learning AI and Active Inference AI is as stark as the jump from Newton's Laws of Universal Gravitation to Einstein's Theory of Relativity. In the late 1800s, physicists believed we had already discovered the laws that govern motion and gravity within our physical universe. Little did they know how naïve Isaac Newton's ideas were, until Albert Einstein opened mankind's eyes to spacetime and the totality of existence and reality. This is what is happening with AI right now. It's simply not possible to get to AGI (Artificial General Intelligence) with a machine learning model, but AGI is inevitable with Active Inference.
______________________
Special thanks to Dan Mapes, President and Co-Founder, VERSES AI and Director of The Spatial Web Foundation. If you'd like to know more about The Spatial Web, I highly recommend a helpful introductory book written by Dan and his VERSES Co-Founder, Gabriel Rene, titled "The Spatial Web," with a dedication "to all future generations." Listen to more episodes in my Knowledge Bank Playlist to learn everything you need to know to stay ahead of this rapidly accelerating technology. Check out more at SpatialWebAI and Spatial Web Foundation. #futureofai #artificialintelligence #spatialweb #intelligentagents #aitools
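To illustrate the "prediction by pattern recognition" paradigm the essay critiques, here is a toy sketch of my own, not from the article: a bigram model that picks the next word purely from co-occurrence counts, with no world model and no reasoning behind it.

```python
# Toy illustration of prediction-by-pattern-recognition: a bigram model
# that picks the next word purely from co-occurrence counts. There is no
# world model and no reasoning -- only statistics over observed sequences.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict("the"))   # 'cat' -- pure frequency, no understanding
```

Deep learning models are vastly more sophisticated than this, but the essay's point is that they remain on this same statistical axis: scaling the counts does not, by itself, add the 'meaning' layer the author argues AGI requires.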
In the sixth episode of our new season of Web3 Innovators, our host Conor Svensson has the tables turned on him and is interviewed by Joshua Lory, Head of Blockchain GTM at VMware, a multi-cloud service for all apps, enabling digital innovation with enterprise control. These were follow-up questions that Joshua wanted to ask Conor after he was interviewed on the podcast earlier this season, which you can listen to here. Connect with Joshua on Twitter.
Episode highlights:
Who Conor believes are the three players that will capture 80% of the market in the next 10 years, and why
How this differs from Joshua's views
Other up-and-coming players in this space worth keeping an eye on
Why the decentralised infrastructure projects have a big part to play in the future
How we narrow the skills gap needed for these new ecosystems
The issue with smart contract development on public blockchain networks
The potential for AGI (Artificial General Intelligence) to help close the skills gap
Key Takeaways:
"In every single technology space the top three players capture 80% of the market time and time again. And then you have a long tail of the 20%, where you have thousands of different options that capture one or two percent." - Joshua
"Ethereum, I've always felt that it's like the Linux of this generation." - Conor
"From a personal perspective, I think Polkadot is building a strong ecosystem. They have some very strong technology in terms of what they've done. So their model of having this layer-zero blockchain and then these parachains, which are kind of permissioned chains spun up on top, seems to be viable." - Conor
"We've got 10 million Java developers that could be writing to this new ecosystem, but are not." - Joshua
"The real challenge with smart contract development, especially on public blockchain networks, is that people can't afford to make mistakes with it." - Conor
Resources:
Ethereum
Polygon
CoinMarketCap
Binance
Ripple
Cardano
Solana
Polkadot
FileCoin
Lightning Network
Connect with Us
Join the Web3 Innovators community and engage with like-minded individuals passionate about the potential of blockchain technology.
Contact Web3 Labs: Twitter | LinkedIn | Instagram | Facebook | Discord | TikTok
• Explore Web3 Labs: Web3 Labs specialise in web3 solutions for enterprise.
• Email Web3 Labs
• Get Conor's latest thoughts on Web3 and where we're headed.
In today's show, we will discuss how censorship is becoming increasingly mainstream and how we can still find this information even when “they” try to suppress it. We'll also review how this will only worsen in the coming years as AGI (Artificial General Intelligence) begins to take hold. We'll also discuss the escalating world tensions […]
Welcome, everyone, to a new episode of the Hypercroissance Podcast! Today we're talking about AGI (Artificial General Intelligence). The evolution of robotics has been exceptional, and we decided to discuss it again on the podcast. We also talk about a new tool Equifax has put in place to better track its employees working from home. Links mentioned in the episode:
Banana Peeling Robot: There's always money in the banana-peeling robot (morningbrew.com): https://www.morningbrew.com/daily/stories/2022/09/28/banana-peeling-robot
Elon Musk Reveals Tesla Robot Optimus That Walks and Waves on Stage (people.com): https://people.com/human-interest/elon-musk-reveals-tesla-optimus-robot/
Equifax tracks its employees: https://www.businessinsider.com/equifax-fires-employees-for-working-two-jobs-2022-10
Peter Diamandis podcast: https://www.diamandis.com/podcast/eric-schmidt
For a second opinion on your advertising campaigns: j7media.com/hypercroissance
To chat with me on LinkedIn: https://www.linkedin.com/in/antoine-gagn%C3%A9-69a94366/
Our Social Selling podcast: https://www.j7media.com/fr/social-selling
Our Commerce Élite podcast: https://www.purecommerce.co/fr/podcast-commerce-elite
To learn more about Open Mind Technologies: https://www.openmindt.com/
Follow us on social media:
LinkedIn: https://www.linkedin.com/company/podcast-d-hypercroissance/
Facebook: https://www.facebook.com/podcastHypercroissance
Instagram: https://www.instagram.com/podcasthypercroissance/
This week Val and Rachel and Special Guest (and Val's very own "pool boy") Aaron Lange discuss Red Shoe Diaries, "an American anthology erotic drama series that aired on Showtime cable network from 1992 to 1997 and distributed by Playboy Entertainment overseas. It is a spinoff of an earlier film by the same name, also directed by Zalman King. Most episodes were directed by either Zalman King, Rafael Eisenman or both. The story-lines usually have a thin plot revolving around some intrigue and the sexual awakening of a girl or woman who often also narrates. Sensuous love scenes with nudity as well as sultry, moody music are characteristic for most episodes. There is no story arc or characters connecting the different stories other than Jake Winters introducing each episode." Hot Topics Include: 1.) The history and meaning of soft core porn and how it embodies the 90s zeitgeist 2.) Is erotica dead? 3.) Debate over David Duchovny's acting ability 4.) Val and Aaron regale us with their favorite episodes 5.) Aaron talks 90s lamps 6.) The grossest MAD magazine ever! 7.) What Ya Watchin': Search Party, Santa Clause TV series buzz, 70s sci-fi film Demon Seed, Species, Raised by Wolves, Sapporo 8.) AGI (Artificial General Intelligence) is coming! Get ready. https://en.wikipedia.org/wiki/Red_Shoe_Diaries
Episode Notes An Artificial General Intelligence, or AGI (Artificial General Intelligence), is not as close as marketing, or figures like Elon Musk, would have you believe. In this question, posed during a live episode of DataClub, I give my perspective as someone who works in the artificial intelligence industry and believes we are still very far from even conceptualizing such a goal. ** Interested in a career in tech, choosing your university path/study plan in Data Science, or looking to leverage Big Data and Machine Learning in your business? ▶️ Visit https://www.tensorgen.it/ ** Send me your questions for the next episode ▶️ https://www.tensorgen.it/domande-per-il-podcast/ ** YouTube ExtremeGenerationIT ▶️ https://www.youtube.com/user/eXtremegenerationIT?sub_confirmation=1 ** Newsletter ▶️ https://www.tensorgen.it/#newsletter ** Telegram for conversations about Data Science, Artificial Intelligence, and working in tech! ▶️ https://bit.ly/33OMfvj Find out more at http://tensorgen.it
Show Notes:
Artificial Intelligence (AI) (01:00)
Love of the liminal spaces (03:20)
Philosophy and connection to AI (07:30)
Advaita Vedanta
The science of creativity (10:00)
Never run out of problems to solve (11:20)
Supervised and Unsupervised Learning (14:00)
Doris Tsao at Caltech (15:30)
AGI = Artificial General Intelligence (16:30)
TED talks (19:40)
2018 "Trinity of Artificial Intelligence"
2021 "Can Artificial Intelligence be conscious too?"
What can you not do by simply scaling up? (20:20)
Bongard-LOGO Challenge (21:00)
Edge AI (23:30)
Doris Tsao - what is the role of feedback in how we study things? (26:10)
"What I cannot create, I do not understand" - Richard Feynman (26:40)
The alignment problem (28:40)
GPT model (37:00)
The importance of metrics (38:50)
Flourishing (39:00)
Nick Bostrom paper clip thought experiment (39:50)
Social media (41:00)
Advocacy for women and minorities (43:00)
Timnit Gebru (44:00)
Joy Buolamwini (44:00)
AI4All (45:30)
Caltech Wave Program (45:40)
me too. Movement (47:40)
Curriculum for a flourishing society (50:00)
Frederick Eberhardt at Caltech
Lightning Round (54:20)
Book: Michio Kaku, Hyperspace
Passion: Dancing
Heart sing: Rethinking how we interact with the world; yoga
Screwed up: Twitter presence
Find Anima online: http://tensorlab.cms.caltech.edu/users/anima/
LinkedIn: https://www.linkedin.com/in/anima-anandkumar/
'Five-Cut Fridays' five-song music playlist series: Anima's playlist
Building the Backend: Data Solutions that Power Leading Organizations
In today's episode, we speak with Peter Voss to discuss the current landscape of AI, the next wave of AI called Artificial General Intelligence, and how organizations today can level up their chatbots to create satisfied customers. Peter Voss is a serial entrepreneur and pioneer in Artificial Intelligence who coined the term 'AGI' (Artificial General Intelligence) with fellow luminaries in the space. At the age of 25, Peter IPO'ed a company he started that grew to over 400 people. Since then he's been focusing on AI and recently launched aigo.ai, an intelligent cognitive assistant that delivers highly intelligent and hyper-personalized conversational assistants at scale for the enterprise.
Top 3 Value Bombs:
1. AI solutions today are considered "narrow". Artificial General Intelligence is the next wave, where AI solutions will be more autonomous.
2. Chatbots can either frustrate customers or satisfy them. Create chatbots with a "brain" that can automatically kick off processes to meet customers' needs.
3. Graph DBs are critical to enabling artificial general intelligence (see the sketch after this list).
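As a hedged illustration of why a graph store suits a personalized assistant, here is a minimal sketch; the structure and names are my own invention for illustration, not aigo.ai's implementation:

```python
# Hedged sketch: facts about a user stored as graph edges, so answering
# "what does Ana like?" is a simple traversal rather than a table scan.
# Illustrative only -- not aigo.ai's actual data model.
from collections import defaultdict

graph = defaultdict(list)          # node -> list of (relation, node)

def add_fact(subj: str, rel: str, obj: str) -> None:
    graph[subj].append((rel, obj))

add_fact("Ana", "likes", "jazz")
add_fact("Ana", "lives_in", "Lisbon")
add_fact("jazz", "is_a", "music genre")

def query(subj: str, rel: str) -> list[str]:
    """Follow edges labeled `rel` out of `subj`."""
    return [obj for r, obj in graph[subj] if r == rel]

print(query("Ana", "likes"))       # ['jazz']
```

The appeal for conversational AI is that each new utterance can add edges incrementally, so the assistant's "memory" of a user grows and stays queryable without retraining anything.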
Matt Burgess is the deputy digital editor at WIRED, covering cybersecurity, big tech, and everything in between. Today we will be discussing his book 'Artificial Intelligence: How machine learning will shape the next decade.'
Timestamps:
0:00 - Introduction
1:50 - What is Artificial Intelligence (AI)?
3:39 - AI vs AGI (Artificial General Intelligence)
4:55 - How far away is AGI?
6:31 - Are AI and AGI comparable?
10:22 - Can we replicate the brain?
12:59 - The impact of Natural Language Processing (NLP)
16:15 - How do regulators stop further discrimination of minorities?
22:44 - The ethics of surveillance
25:49 - AI and questions of what makes us human
27:54 - The major concerns of AI
29:30 - Job displacement and AI
33:47 - How quickly will AI develop?
36:40 - Is it human nature to avoid a new technology?
39:41 - The sustainability of new technologies
42:34 - Is the aim of AI to improve our lives?
45:11 - Outro
Visit our website - https://www.booktalktoday.com
Contact - info@booktalktoday.com
Peter Voss is a serial entrepreneur, engineer, inventor, and a pioneer in Artificial Intelligence. Peter started out in electronics engineering but quickly moved into software. After developing a comprehensive ERP software package, Peter took his first software company from zero to a 400-person IPO in seven years. Fueled by the fragile nature of software, Peter embarked on a 20-year journey to study intelligence (how it develops in humans, how to measure it, and current AI efforts) and to replicate it in software. His research culminated in the creation of a natural language intelligence engine that can think, learn, and reason, and adapt to, and grow with, the user. He even coined the term 'AGI' (Artificial General Intelligence) with fellow luminaries in the space. Peter founded SmartAction.ai in 2009, which developed the first AGI-based call center automation technology. Now, in his latest venture, he is taking that technology a step further with the commercialization of the second generation of his 'Conversational AI' technology, with a bold mission of providing hyper-intelligent, hyper-personal assistants for everyone. In addition to being an entrepreneur, engineer, inventor, and AI pioneer, Peter often writes and presents on various philosophical topics including rational ethics, free will, and artificial minds, and is deeply involved with futurism and radical life extension. Aigo.ai is the first *personal* personal assistant that becomes smarter and more personalized as it learns from you and your network (people, devices, other AIs), essentially becoming your ExoCortex. Based on a highly advanced 'cognitive architecture' implementing the 'Third Wave of AI,' Aigo will change the way we communicate with each other, and with our network of family, friends, and devices.
In the last episode, we showed that Artificial General Intelligence is possible according to the laws of physics. This episode is part 1 of a 3-part panel discussion among computer scientists interested in AGI. Bruce and Cameo are joined by Dennis Hackethal, Ella Hoeppner, and Thatchaphol Saranurak, all of whom are interested in both AGI and Karl Popper's epistemology and believe Popper's theories can shed light on how to discover AGI. In part 1, we discuss how AI (Artificial Intelligence) differs from AGI (Artificial General Intelligence). --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/four-strands/support
In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable. In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursue our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial. Shermer and Russell also discuss:
natural intelligence vs. artificial intelligence
"g" in human intelligence vs. G in AGI (Artificial General Intelligence)
the values alignment problem
Hume's "Is-Ought" naturalistic fallacy as it applies to AI values vs. human values
regulating AI
Russell's response to the arguments of AI apocalypse skeptics Kevin Kelly and Steven Pinker
the Chinese social control AI system and what it could lead to
autonomous vehicles, weapons, and other systems and how they can be hacked
AI and the hacking of elections, and what keeps Stuart up at night.
Stuart Russell is a professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley. He has served as the Vice-Chair of the World Economic Forum's Council on AI and Robotics and as an advisor to the United Nations on arms control. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. He is the author (with Peter Norvig) of the definitive and universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach. Listen to Science Salon via Apple Podcasts, Spotify, Google Play Music, Stitcher, iHeartRadio, and TuneIn.
In this episode of Do Explain, Christofer speaks with Dennis Hackethal about everything to do with AGI (Artificial General Intelligence). They discuss the problem of induction and how knowledge grows, the difference between AI and AGI, issues with current research approaches, the universality of computation, AI risk, and other related topics. Dennis Hackethal is a software engineer and Critical Rationalist from Silicon Valley, currently residing in Cupertino, California. He hosts a podcast called Artificial Creativity about how to create AGI and can also be found on Twitter.
Podcast: https://soundcloud.com/dchacke
Twitter: https://twitter.com/dchackethal
Support the podcast at:
patreon.com/doexplain (monthly)
ko-fi.com/doexplain (one-time)
This episode focuses on the rising popularity of AGI (Artificial General Intelligence). Stay tuned to find out more.
From the 2019 Startup Grind Conference, Greylock's Reid Hoffman and OpenAI Co-Founder and CTO Greg Brockman discuss the prospects of building beneficial AGI (Artificial General Intelligence). AI systems have achieved impressive capabilities that may one day reach human levels of intelligence. Autonomous systems with this intelligence and capability are referred to as artificial general intelligence (AGI). The future impact human-level AI will have on society is challenging to comprehend. Yet it's broadly understood that prioritizing the building of beneficial and ethical autonomous systems today is vital for positive human impact. In this episode of Greymatter, OpenAI co-founder and CTO Greg Brockman and Greylock partner and OpenAI investor Reid Hoffman discuss the implications of today's AI progress. Greg and Reid dive deep into the transformative potential of AGI on organizations of all kinds, the policy changes required of international governments, and ways to build and scale ethical AI and AGI systems. Founded in 2015, OpenAI is an AI research company discovering and enacting the path to safe artificial general intelligence. Prior to co-founding OpenAI, Greg was CTO at Stripe, which he helped scale from 4 to 250 employees. An accomplished entrepreneur and executive, Reid has played an integral role in building many of today's leading consumer technology businesses, including LinkedIn and PayPal. As a Greylock investor, he currently serves on the boards of Airbnb, Apollo Fusion, Aurora, Coda, Convoy, Entrepreneur First, Gixo, Microsoft, Nauto, Xapo, and a few early-stage companies still in stealth. Reid is the co-author of Blitzscaling and two New York Times best-selling books: The Start-up of You and The Alliance.
Peter Voss is a serial entrepreneur, engineer, inventor, and a pioneer in artificial intelligence who coined the term 'AGI' (Artificial General Intelligence) with fellow luminaries in the space. Fueled by the fragile nature of software, he embarked on a journey 20 years ago to study what intelligence is, how it develops in humans, and the current state of AI. This research culminated in the creation of a natural language intelligence engine that can think, learn, and reason, and adapt to and grow with the user. His current project combines AI and blockchain.