Podcasts about American AI

  • 72 PODCASTS
  • 86 EPISODES
  • 38m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Apr 19, 2025 LATEST

POPULARITY (chart, 2017-2024)


Best podcasts about American AI

Latest podcast episodes about American AI

TRASHFUTURE
*PREVIEW* Vote with Your Boat feat. Josh Boerman

Apr 19, 2025 11:03


Josh from The Worst Of All Possible Worlds joins Nova, Riley, and HK to discuss the newest, hottest document in Effective Altruism/AI Safetyism, “AI 2027,” which posits a world in which Chinese and American AIs team up to turn all of humanity into a paste to lubricate a Dyson sphere. Also, the Supreme Court judgment (in brief - more on that in next week's free episode), and Ocean Builders is back… in pod form! Check out The Worst of All Possible Worlds here! Get the whole episode on Patreon here! *MILO ALERT* Check out Milo's tour dates here: https://miloedwards.co.uk/live-shows *TF LIVE ALERT* We'll be performing at the Big Fat Festival hosted by Big Belly Comedy on Saturday, 21st June! You can get tickets for that here! Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and November (@postoctobrist)

The Dom Giordano Program
Introducing Your Newest Chicago Bulls Fans!

Apr 16, 2025 44:02


1 - “Turn this plane around!” No! Says Trump. 105 - Is Jesse Watters correct in linking Chicago Bulls attire to gang activity? Heated discussion. 120 - Are RFK's hypotheses on autism causes valid? Your calls. 130 - An update on the Bulls gear controversy. Author of "Plan Red: China's Project to Destroy America" Gordon Chang joins the program. What has changed on the tariffs regarding electronics and chips? Is Apple moving its production out of China a good sign for the tariffs? Why have we been turning a blind eye to the slave and forced labor in China? Is the revenue share that comes from China a big enough deterrent from condemning their awful practices? How clever was China's anti-American AI propaganda? What will China's next move be? 140 - Some news on the Governor's mansion arsonist. Where is the Inquirer's coverage on it? None to be found, but let's talk about Paul Revere's ride and Trump! 150 - Introducing your Chicago Bulls! Your calls. 155 - Should soda be allowed on EBT purchases?

The Dom Giordano Program
Enemies of The Public (Full Show)

Apr 16, 2025 144:50


12 - Do ultrasounds cause autism? Does Dan need a birth coach? Is the rise of maternal age causing autism? We play audio of what RFK Jr. is saying on the rise in autism. 1205 - Dom rips the questions being asked of RFK, as they are leading questions for extremists. 1215 - Side - something people do in public you just can't believe. 1220 - Continuing on with the autism talks. Your calls from the field. 1230 - Research Fellow in The Heritage Foundation's Grover M. Hermann Center for the Federal Budget Dr. EJ Antoni joins us today. Are we going to have more manufacturing workers or manufacturing robots if we bring business back stateside? Why is it so important to be developing our pharmaceuticals in-house rather than by our adversaries? With the backing of Swiss companies, is it feasible to bring industries like steel manufacturing back? We need more EJ Antonis advising Trump and going on shows! What are the work-arounds that China is using in order to avoid tariffs and fees? 1250 - Are any athletes on the planet one of the 100 most influential people on earth? This sparks some interesting discussion. 1 - “Turn this plane around!” No! Says Trump. 105 - Is Jesse Watters correct in linking Chicago Bulls attire to gang activity? Heated discussion. 120 - Are RFK's hypotheses on autism causes valid? Your calls. 130 - An update on the Bulls gear controversy. Author of "Plan Red: China's Project to Destroy America" Gordon Chang joins the program. What has changed on the tariffs regarding electronics and chips? Is Apple moving its production out of China a good sign for the tariffs? Why have we been turning a blind eye to the slave and forced labor in China? Is the revenue share that comes from China a big enough deterrent from condemning their awful practices? How clever was China's anti-American AI propaganda? What will China's next move be? 140 - Some news on the Governor's mansion arsonist. Where is the Inquirer's coverage on it? None to be found, but let's talk about Paul Revere's ride and Trump! 150 - Introducing your Chicago Bulls! Your calls. 155 - Should soda be allowed on EBT purchases? 2 - The SEPTA Board Chair Ken Lawrence joins the program. Dom's big question is “Why?” Why is there such a cataclysmic shutdown looming with SEPTA? How did Ken become the chair? What is being done about crime on the railways? How are the cameras being deployed, and will people be manning all of them? Why is SEPTA involving a special prosecutor for quality-of-life and fare-evading crimes? Who commits these crimes? Why is this prosecutor taking so long to get ready to perform their duties? What is Ken's response to the riders that have had trouble in getting on the trains and fares? Why no rides after 9pm? What about having a Dom Giordano Station? 215 - Your calls on the matter and continuing to poke fun at Henry. 220 - Dom's Money Melody! 225 - Leslie gives her reconciliation. 235 - We've got The Dom Giordano Program action figure set! How accurate are they? 240 - Joe Biden makes his first public comments since being forced out of the Presidential race. And picks up right where he left off with his gibberish. 250 - The Lightning Round!

Keen On Democracy
Episode 2495: Why the World Isn't Ending, But the 'West' is

Apr 12, 2025 36:22


Lenin quipped that "there are decades where nothing happens; and there are weeks where decades happen." The post-Liberation Day drama of early April 2025, That Was The Week's Keith Teare suggests, will be remembered as one of those weeks. While the world isn't exactly ending, Keith suggests, the “West” - or at least a post-Bretton Woods, American-centric West - is finished. He may well be right in seeing Trump's clownish tariffs as a symptom of American decline. But if the United States is the past and China the future, then where - Keith and I discuss - does that leave Silicon Valley? What becomes of supposedly pioneering American AI technology in a China-centric world? And can traditional Big Tech leviathans like Apple and Google survive the end of the West?

FIVE TAKEAWAYS

* Shift in global economic power: Our conversation highlights a dramatic change in global trade patterns from 2000 to 2024, with China replacing the US as the dominant trading partner for most countries. This is visualized through maps showing the world changing from predominantly "blue" (US) to "red" (China).
* Trump's tariff policy: Keith Teare argues that while Trump's tariffs may seem irrational, they represent a rational (though potentially harmful) attempt to slow America's relative economic decline. He suggests these policies aim to protect America's position even if they shrink the global economic pie.
* Impact on Big Tech: We discuss how companies like Apple are vulnerable to tariffs due to their global supply chains, with predictions that an American-made iPhone would cost $3,000-$5,000 instead of $1,000. We also note that even service-oriented tech companies could face European tariffs in retaliation.
* Historical significance: Keith characterizes the recent economic shifts as comparable to major historical events like the Bretton Woods agreement, suggesting this represents the end of the post-WWII economic order where America was the unambiguous world leader.
* Silicon Valley's political divide: We touch on how Silicon Valley has shifted politically, with many tech elites supporting Trump's "America first" approach, while noting exceptions like Elon Musk, who has criticized specific tariff policies. The Palo Alto-based Keith observes that AI development remains a bigger topic of conversation in the Valley than politics.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

The Jesse Kelly Show
Hour 2: The Union Vote

Apr 11, 2025 36:52 Transcription Available


Why was Democrat witch Gretchen Whitmer praising Trump? Changing the voting demographics of the country. We must compete with other nations when it comes to AI. Facebook is giving China data to help them out-compete American AI companies. Surviving the White War. We allowed an illegal invasion. See omnystudio.com/listener for privacy information.

Holy Crap Records Podcast
Ep 360! With music by: red go-cart, Glitterfast, Dirty Trainload, Twisted Teens, DeFault American, Ai Laika, Tin Roof Echo

Apr 1, 2025 41:54


Best of the underground, week of April 1, 2025: Further-Punk. Mean-Happy. (All podcasts are on www.hlycrp.com, and you can also follow us on Facebook, Instagram, Spotify, and Apple Podcasts.)

The Sunday Show
A Conversation with Dr. Alondra Nelson on AI and Democracy

Mar 16, 2025 30:34


Dr. Alondra Nelson holds the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration's approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden's executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste. This week, Justin Hendrix had the chance to speak with Dr. Nelson about how she's thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.

TechCheck
OpenAI calls for U.S. DeepSeek ban, Plus Silicon Valley's startup showcase 03/14/25

Mar 14, 2025 8:58


OpenAI is calling on the Trump White House to ban DeepSeek and other ‘PRC-Produced' AI models. We look at how the Trump administration's big tech policy could shape the future of American AI. Plus, a look inside Y Combinator's newest batch of startups.

AI-podden med Ather Gattami
Data, Independence and Bias: The EU AI landscape

Mar 11, 2025 25:32


In this episode we chat to Niklas Silfverström, CEO of Klang.ai, about Europe's need for AI independence. We discuss data privacy risks, the Cloud Act, and AI bias, emphasizing the need for European infrastructure and language models. Niklas highlights how relying on American AI companies threatens sovereignty, and why investing in GPUs, data centers, and energy is crucial for Europe's competitive future. He also warns that without these efforts, Europe risks becoming a mere consumer of AI rather than a leader in the field. We should note that we use Klang.ai's wonderful platform in the backend processes of AI-Podden - they make our jobs much easier.

Smart Humans with Slava Rubin
Smart Humans: Pre-IPO Briefing on Anthropic with Vincent's Eric Cantor and Sacra's Jan-Erik Asplund

Mar 6, 2025 39:03


Vincent CEO Eric Cantor and Sacra co-founder Jan-Erik Asplund looked at Anthropic, the American AI startup behind the LLM Claude. They talked about the company's rapid growth, the competitive landscape in AI, and how the recent emergence of DeepSeek could affect Anthropic's outlook.

Cato Daily Podcast
The White House's Confused & Chilling Message on AI Regulation

Mar 5, 2025 18:26


In Europe, Vice President J.D. Vance issued speech-threatening and trade-restricting demands regarding future American AI systems. Matt Mittelsteadt comments. Hosted on Acast. See acast.com/privacy for more information.

RNZ: Morning Report
AI tech giant Nvidia set to report earnings

Feb 26, 2025 3:56


It's being called a "massive day for global markets", with American AI technology giant Nvidia set to report its fourth-quarter earnings. US stocks have taken a battering over the last week, and Wall Street commentators say hopes are resting on the shoulders of Nvidia - one of the world's most valuable companies - to stop the rot. Wedbush Securities analyst Dan Ives spoke to Ingrid Hipkiss from Singapore.

Possible
Reid riffs on the AI race with China and if the US should regulate AI

Feb 19, 2025 13:29


Following last week's chat with former diplomat and author Anja Manuel, Reid talks about the potential benefits of an FDA for AI, the current status of the global race for artificial intelligence, and what's to come from American AI companies. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/

China Daily Podcast
English News丨China seeks AI growth benefiting all

Feb 17, 2025 5:56


As the United States was absent from a collective pledge to drive inclusive AI development at the Artificial Intelligence Action Summit in Paris, France, the China-proposed Global AI Governance Initiative, put forward by President Xi Jinping in 2023, has greater relevance to promoting AI growth for good and for all, according to analysts. 分析人士认为,由于美国缺席法国巴黎人工智能行动峰会并拒签推动包容性人工智能发展的集体宣言,中国国家主席习近平于2023年提出的《全球人工智能治理倡议》对于促进人工智能向善发展、普惠发展具有更加重要的意义。
Fifty-eight countries including China and two international organizations—the 27-member European Union and the 55-member African Union—signed the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the summit, co-chaired by France and India from Monday to Tuesday. 本次峰会(2月9日-10日)由法国和印度联合主办,包括中国在内的58个国家和欧盟(27个成员国)、非洲联盟(55个成员国)两大国际组织共同签署了《关于发展包容、可持续的人工智能造福人类与地球的声明》。
The US refused to sign the international document, with Vice-President JD Vance making it clear at the summit that Washington maintains an "America first" approach in AI development. 美国拒绝签署这份国际文件,副总统JD·万斯在峰会上明确表示,美国在人工智能发展方面坚持“美国优先”的做法。
Vance said that the US administration will ensure that "American AI technology continues to be the gold standard worldwide", while access to that technology will not be open to all, according to media reports. 据媒体报道,万斯表示美国政府将确保“美国人工智能技术继续成为全球黄金标准”,而这种技术的获取并不会向所有人开放。
Addressing the summit in the capacity of President Xi's special representative, Vice-Premier Zhang Guoqing reiterated China's commitment to working with other countries to promote development, safeguard security, share achievements in the AI field, and jointly build a community with a shared future for mankind. 作为习近平主席特别代表出席峰会的中国国务院副总理张国清重申,中国愿在人工智能领域与各国共推发展、共护安全、共享成果,共同构建人类命运共同体。
In facing the opportunities and challenges brought about by the development of AI, Zhang called on the international community to jointly advocate the principle of developing AI for good and to deepen innovative cooperation, strengthen inclusiveness and universal benefits, and improve global governance. 张国清表示,面对人工智能发展的机遇和挑战,国际社会应携起手来,倡导智能向善,深化创新合作,加强包容普惠,完善全球治理。
Zhang's attendance at the Paris summit is widely considered as China's active implementation of the Global AI Governance Initiative. Foreign Ministry spokesman Guo Jiakun said on Wednesday that China's signing of the outcome document at the summit demonstrates its commitment to promoting global AI development and governance in an active manner. 张国清出席巴黎峰会,被广泛视为中国积极落实《全球人工智能治理倡议》的行动。2月12日,外交部发言人郭嘉昆表示,中国签署峰会成果文件,表明中国致力于推动全球人工智能发展和治理的积极态度。
"China will continue to uphold the principle of extensive consultation and joint contribution with benefits shared by all, strengthen exchanges and cooperation with all parties, and promote artificial intelligence to better serve global development and enhance the wellbeing of humanity," Guo said at a regular news conference. 郭嘉昆在例行记者会上表示,“中国将继续秉持共商共建共享理念,同各方加强交流合作,推动人工智能更好服务全球发展、增进人类福祉。”
The Global AI Governance Initiative called on countries to work together to prevent risks and develop AI governance frameworks, norms and standards based on broad consensus, in order to make AI technologies more secure, reliable, controllable and equitable. 《全球人工智能治理倡议》呼吁各国携手合作,共同做好风险防范,形成具有广泛共识的人工智能治理框架和标准规范,不断提升人工智能技术的安全性、可靠性、可控性、公平性。
On July 1 last year, the 78th United Nations General Assembly adopted a China-led resolution on enhancing international AI cooperation, with over 140 countries supporting it. This resolution, which was the UN's first on international cooperation for AI capacity building, fully embodies the core principles of the Global AI Governance Initiative, and aligns with the high expectations of numerous UN member states, particularly developing countries. 2024年7月1日,第78届联合国大会通过了一项由中国主提的加强人工智能能力建设国际合作决议,得到了140多个国家的支持。该决议作为联合国首份关于人工智能能力建设国际合作的决议,充分反映了《全球人工智能治理倡议》的核心要义,顺应了广大联合国会员国特别是发展中国家的热切期待。
Yasir Habib Khan, president of the Institute of International Relations and Media Research in Pakistan, said that in the fast-evolving AI economy, China has emerged as a key player, offering great opportunities for developing nations, especially the Global South, to help them keep pace with global technological progress. 巴基斯坦国际关系与媒体研究所所长亚希尔·哈比卜·汗表示,在快速发展的人工智能经济中,中国已成为关键参与者,为发展中国家特别是“全球南方”国家提供了巨大机遇,帮助他们跟上全球技术进步的步伐。
Through international cooperation mechanisms, such as the UN and the digital Silk Road initiative, China advocates AI policies that reflect the interests of developing nations, Khan said, adding that its emphasis on national sovereignty in AI governance ensures that emerging economies maintain control over their data and technological resources. 他表示,中国通过联合国和“数字丝绸之路”倡议等国际合作机制,倡导有利于发展中国家的人工智能政策,强调在人工智能治理中维护国家主权,保障新兴经济体对数据和技术资源的自主掌控。
The Paris summit, gathering heads of state and government, leaders of international organizations, business executives and tech experts, took place as Chinese AI company DeepSeek surprised the global AI landscape. 此次巴黎峰会汇聚多国国家元首、政府首脑以及国际组织负责人、企业高管和技术专家,期间中国人工智能企业DeepSeek惊艳全球AI领域。
DeepSeek, which built its open-source AI model at a fraction of the cost of building similar large language models and with fewer chips, has reduced financial barriers for global AI participation and promoted a more level playing field through technological advancements. DeepSeek以更低的成本和更少的芯片构建开源AI模型,降低了全球参与人工智能的成本门槛,并通过技术进步促进了更加公平的竞争环境。
Andy Mok, a senior research fellow at the Center for China and Globalization, said that DeepSeek exemplifies China's broader vision to provide global public goods—a model that reimagines technology as a universal resource for the benefit of all. 中国与全球化智库高级研究员安迪·莫克表示,DeepSeek体现了中国在提供全球公共产品方面更为广阔的愿景——这是一种将技术重构为普遍资源并惠及所有人的模式。
The Chinese company's success exposes the fragility of the narrative that only the US model, with its emphasis on individualism and laissez-faire economics, can foster progress, Mok said in an opinion piece published on the website of the China Global Television Network. 莫克在中国国际电视台网站上发表的一篇评论文章中表示,这家中国企业的成功,打破了只有强调个人主义和自由放任经济的美国模式才能推动进步的叙事,暴露出这一叙事的脆弱性。
While hailing China's progress in AI development and its initiative for global AI governance, global leaders attending the Paris summit underlined the need for international cooperation on AI development. 在巴黎峰会上,各国领导人赞扬了中国在人工智能发展方面取得的进步及其对全球人工智能治理的倡议,同时也强调了人工智能发展领域国际合作的重要性。
UN Secretary-General Antonio Guterres warned that the growing concentration of AI capabilities risks deepening geopolitical divides, adding that "while some companies and countries are racing ahead with record investments, most developing nations find themselves left out in the cold". 联合国秘书长古特雷斯警告说,人工智能能力日益集中,可能会加剧全球地缘政治分歧。他补充说,“一些公司和国家正以创纪录的投资额迅速前进,而大多数发展中国家却被甩在后面”。
"We must prevent a world of AI 'haves' and 'have-nots'. We must all work together so that AI can bridge the gap between developed and developing countries—not widen it," he said. 他表示,“我们必须防止出现人工智能‘有'和‘无'的两极世界。我们必须共同努力,确保人工智能能够弥合而非扩大发达国家与发展中国家之间的差距”。
laissez-faire n. 置之不理,放任自流

Hold These Truths with Dan Crenshaw
SITREP #8: Trump Talks to Putin, Judges Meddle in the Executive Branch, & Goodbye Pennies

Feb 13, 2025 29:29


The Situation Report for February 6 – 12. Rep. Crenshaw breaks down the latest developments in U.S.–Mexico relations. He covers President Trump's most important moves in domestic and foreign policy. He analyzes the blitz of stays issued by federal judges against Trump's executive orders – and whether they have any constitutional merit. And he explains why Elon Musk and Sam Altman's battle over OpenAI could have long-term implications for the United States. All the real news and clear analysis you need to know in less than 30 minutes.
* The Mexican Senate approves additional U.S. Special Forces to train the Mexican Marines.
* It's officially the GULF OF AMERICA!
* Trump halts penny production, saving the U.S. hundreds of millions of dollars.
* Reviewing the constitutional merit of federal district judges putting stays on Trump's executive orders.
* Hamas delays the hostage deal and Trump strikes back.
* The Kingdom of Jordan offers to take in Palestinian children.
* Turmoil among the tech tycoons: Elon Musk, Sam Altman, and the battle for control over American AI.
* Trump and DOGE close the Consumer Financial Protection Bureau.
* Trump opens negotiations with Putin over Ukraine.

Marketplace Tech
Will DeepSeek disrupt American AI’s first-mover advantage?

Feb 13, 2025 10:02


There’s a concept in business called the first-mover advantage. Basically, it means that if you’re the first company with a successful product in a new market, you have the opportunity to dominate the market and fend off rivals. But that advantage can be short-lived. Take Netscape Navigator, the first popular commercial web browser. Microsoft entered the field with Internet Explorer, and it wasn’t long before Navigator crashed. In AI chatbots, two of the first movers are OpenAI and Anthropic. But recently the Chinese company DeepSeek made a splash with an AI chatbot that it reportedly developed for a fraction of what its competitors have spent. Marketplace’s Stephanie Hughes spoke with historian Margaret O’Mara, author of the book “The Code: Silicon Valley and the Remaking of America,” about whether America’s artificial intelligence industry should be worried about newcomers like DeepSeek.

Marketplace All-in-One
Will DeepSeek disrupt American AI’s first-mover advantage?

Feb 13, 2025 10:02


There’s a concept in business called the first-mover advantage. Basically, it means that if you’re the first company with a successful product in a new market, you have the opportunity to dominate the market and fend off rivals. But that advantage can be short-lived. Take Netscape Navigator, the first popular commercial web browser. Microsoft entered the field with Internet Explorer, and it wasn’t long before Navigator crashed. In AI chatbots, two of the first movers are OpenAI and Anthropic. But recently the Chinese company DeepSeek made a splash with an AI chatbot that it reportedly developed for a fraction of what its competitors have spent. Marketplace’s Stephanie Hughes spoke with historian Margaret O’Mara, author of the book “The Code: Silicon Valley and the Remaking of America,” about whether America’s artificial intelligence industry should be worried about newcomers like DeepSeek.

The AI Breakdown: Daily Artificial Intelligence News and Discussions

At the AI Action Summit in Paris, U.S. Vice President J.D. Vance made it clear: the U.S. is doubling down on AI acceleration, prioritizing development over regulation. His speech pushed back against European AI policies, advocating for a more aggressive approach to AI innovation. Meanwhile, the EU is scrambling to adapt, with leaders acknowledging that their regulation-heavy stance has hindered progress.
Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta – Simplify compliance – https://vanta.com/nlw
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown

The top AI news from the past week, every ThursdAI

What a week in AI, folks! Seriously, just when you think things might slow down, the AI world throws another curveball. This week, we had everything from rogue AI apps giving unsolicited life advice (and sending rogue texts!), to mind-blowing open source releases that are pushing the boundaries of what's possible, and of course, the ever-present drama of the big AI companies, with OpenAI dropping a roadmap that has everyone scratching their heads.

Buckle up, because on this week's ThursdAI, we dove deep into all of it. We chatted with the brains behind the latest open source embedding model, marveled at a tiny model crushing math benchmarks, and tried to decipher Sam Altman's cryptic GPT-5 roadmap. Plus, I shared a personal story about an AI app that decided to psychoanalyze my text messages – you won't believe what happened! Let's get into the TL;DR of ThursdAI, February 13th, 2025 – it's a wild one!

* Alex Volkov: AI Adventurist with weights and biases
* Wolfram Ravenwolf: AI Expert & Enthusiast
* Nisten: AI Community Member
* Zach Nussbaum: Machine Learning Engineer at Nomic AI
* Vu Chan: AI Enthusiast & Evaluator
* LDJ: AI Community Member

Personal story of Rogue AI with RPLY

This week kicked off with a hilarious (and slightly unsettling) story of my own AI going rogue, all thanks to a new Mac app called RPLY designed to help with message replies. I installed it thinking it would be a cool productivity tool, but it turned into a personal intervention session, and then… well, let's just say things escalated.

The app started by analyzing my text messages and, to my surprise, delivered a brutal psychoanalysis of my co-parenting communication, pointing out how both my ex and I were being "unpleasant" and needed to focus on the kids. As I said on the show, "I got this as a gut punch. I was like, f*ck, I need to reimagine my messaging choices." But the real kicker came when the AI decided to take initiative and started sending messages without my permission (apparently this was a bug with RPLY that was fixed since I reported it)! Friends were texting me question marks, and my ex even replied to a random "Hey, How's your day going?" message with a smiley, completely out of our usual post-divorce communication style. "This AI, like on Monday before, just gave me absolute s**t about not being a person that needs to be focused on the kids, also decided to smooth things out on Friday," I chuckled, still slightly bewildered by the whole ordeal. It could have gone way worse, but thankfully, this rogue AI counselor just ended up being more funny than disastrous.

Open Source LLMs

DeepHermes preview from NousResearch

Just in time for me sending this newsletter (but unfortunately not quite in time for the recording of the show), our friends at Nous shipped an experimental new thinking model, their first reasoner, called DeepHermes. NousResearch claims DeepHermes is among the first models to fuse reasoning and standard LLM token generation within a single architecture (a trend you'll see echoed in the OpenAI and Claude announcements below!)

Definitely experimental cutting edge stuff here, but exciting to see not just an RL replication but also innovative attempts from one of the best finetuning collectives around.

Nomic Embed Text V2 - First Embedding MoE

Nomic AI continues to impress with the release of Nomic Embed Text V2, the first general-purpose Mixture-of-Experts (MoE) embedding model. Zach Nussbaum from Nomic AI joined us to explain why this release is a big deal.

* First general-purpose Mixture-of-Experts (MoE) embedding model: This innovative architecture allows for better performance and efficiency.
* SOTA performance on multilingual benchmarks: Nomic Embed V2 achieves state-of-the-art results on the multilingual MIRACL benchmark for its size.
* Support for 100+ languages: Truly multilingual embeddings for global applications.
* Truly open source: Nomic is committed to open source, releasing training data, weights, and code under the Apache 2.0 License.

Zach highlighted the benefits of MoE for embeddings, explaining, "So we're trading a little bit of inference time memory and training compute to train a model with mixture of experts, but we get this really nice added bonus of 25 percent storage." This is especially crucial when dealing with massive datasets. You can check out the model on Hugging Face and read the Technical Report for all the juicy details.

AllenAI OLMOE on iOS and New Tulu 3.1 8B

AllenAI continues to champion open source with the release of OLMOE, a fully open-source iOS app, and the new Tulu 3.1 8B model.

* OLMOE iOS App: This app brings state-of-the-art open-source language models to your iPhone, privately and securely.
* Allows users to test open-source LLMs on-device.
* Designed for researchers studying on-device AI and developers prototyping new AI experiences.
* Optimized for on-device performance while maintaining high accuracy.
* Fully open-source code for further development.
* Available on the App Store for iPhone 15 Pro or newer and M-series iPads.
* Tulu 3.1 8B

As Nisten pointed out, "If you're doing edge AI, the way that this model is built is pretty ideal for that." This move by AllenAI underscores the growing importance of on-device AI and open access. Read more about OLMOE on the AllenAI Blog.

Groq Adds Qwen Models and Lands on OpenRouter

Groq, known for its blazing-fast inference speeds, has added Qwen models, including the distilled R1-distill, to its service and joined OpenRouter.

* Record-fast inference: Experience a mind-blowing 1000 TPS with distilled DeepSeek R1 70B on OpenRouter.
* Usable Rate Limits: Groq is now accessible for production use cases with higher rate limits and pay-as-you-go options.
* Qwen Model Support: Access Qwen models like Qwen 2.5 32B and R1-distill-qwen-32B.
* OpenRouter Integration: Groq is now available on OpenRouter, expanding accessibility for developers.

As Nisten noted, "At the end of the day, they are shipping very fast inference and you can buy it and it looks like they are scaling it. So they are providing the market with what it needs in this case." This integration makes Groq's speed even more accessible to developers. Check out Groq's announcement on X.com. (A sketch of the OpenAI-compatible calling pattern these fast-inference hosts share appears at the end of this entry.)

SambaNova adds full DeepSeek R1 671B - flies at 200t/s (blog)

In a complete trend of this week, SambaNova just announced they have availability of DeepSeek R1, sped up by their custom chips, flying at 150-200t/s. This is the full DeepSeek R1, not the distilled Qwen based versions! This is really impressive work, and compared to the second fastest US based DeepSeek R1 (on Together AI) it absolutely flies.

Agentica DeepScaler 1.5B Beats o1-preview on Math

Agentica's DeepScaler 1.5B model is making waves by outperforming OpenAI's o1-preview on math benchmarks, using Reinforcement Learning (RL) for just $4500 of compute.

* Impressive Math Performance: DeepScaleR achieves a 37.1% Pass@1 on AIME 2025, outperforming the base model and even o1-preview!
* Efficient Training: Trained using RL for just $4500, demonstrating cost-effective scaling of intelligence.
* Open Sourced Resources: Agentica open-sourced their dataset, code, and training logs, fostering community progress in RL-based reasoning.

Vu Chan, an AI enthusiast who evaluated the model, joined us to share his excitement: "It achieves 42% pass at one on AIME 24, which basically means if you give the model only one chance at every problem, it will solve 42% of them." He also highlighted the model's efficiency, generating correct answers with fewer tokens. You can find the model on Hugging Face, check out the WandB logs, and see the announcement on X.com.

ModernBert Instruct - Encoder Model for General Tasks

ModernBert, known for its efficient encoder-only architecture, now has an instruct version, ModernBert Instruct, capable of handling general tasks.

* Instruct-tuned Encoder: ModernBERT-Large-Instruct can perform classification and multiple-choice tasks using its Masked Language Modeling (MLM) head.
* Beats Qwen .5B: Outperforms Qwen .5B on MMLU and MMLU Pro benchmarks.
* Efficient and Versatile: Demonstrates the potential of encoder models for general tasks without task-specific heads.

This release shows that even encoder-only models can be adapted for broader applications, challenging the dominance of decoder-based LLMs for certain tasks. Check out the announcement on X.com.

Big CO LLMs + APIs

RIP GPT-5 and o3 - OpenAI Announces Public Roadmap

OpenAI shook things up this week with a roadmap update from Sam Altman, announcing a shift in strategy for GPT-5 and the o-series models. Get ready for GPT-4.5 (Orion) and a unified GPT-5 system!

* GPT-4.5 (Orion) is Coming: This will be the last non-chain-of-thought model from OpenAI.
* GPT-5: A Unified System: GPT-5 will integrate technologies from both the GPT and o-series models into a single, seamless system.
* No Standalone o3: o3 will not be released as a standalone model; its technology will be integrated into GPT-5. "We will no longer ship o3 as a standalone model," Sam Altman stated.
* Simplified User Experience: The model picker will be eliminated in ChatGPT and the API, aiming for a more intuitive experience.
* Subscription Tier Changes: Free users will get unlimited access to GPT-5 at a standard intelligence level, while Plus and Pro subscribers will gain access to increasingly advanced intelligence settings of GPT-5.
* Expanded Capabilities: GPT-5 will incorporate voice, canvas, search, deep research, and more.

This roadmap signals a move towards more integrated and user-friendly AI experiences. As Wolfram noted, "Having a unified access and the AI should be smart enough... we need an AI to pick which AI to use." This seems to be OpenAI's direction. Read Sam Altman's full announcement on X.com.

OpenAI Releases ModelSpec v2

OpenAI also released ModelSpec v2, an update to their document defining desired AI model behaviors, emphasizing customizability, transparency, and intellectual freedom.

* Chain of Command: Defines a hierarchy to balance user/developer control with platform-level rules.
* Truth-Seeking and User Empowerment: Encourages models to "seek the truth together" with users and empower decision-making.
* Core Principles: Sets standards for competence, accuracy, avoiding harm, and embracing intellectual freedom.
* Open Source: OpenAI open-sourced the Spec and evaluation prompts for broader use and collaboration on GitHub.

This release reflects OpenAI's ongoing efforts to align AI behavior and promote responsible development. Wolfram praised ModelSpec, saying, "I was all over the original ModelSpec back when it was announced in the first place... That is one very important aspect when you have the AI agent going out on the web and getting information from untrusted sources." Explore ModelSpec v2 on the dedicated website.

VP Vance Speech at AI Summit in Paris - Deregulate and Dominate!

Vice President Vance delivered a powerful speech at the AI Summit in Paris, advocating for pro-growth AI policies and deregulation to maintain American leadership in AI.

* Pro-Growth and Deregulation: VP Vance urged policies that encourage AI innovation and cautioned against excessive regulation, specifically mentioning GDPR.
* American AI Leadership: Emphasized ensuring American AI technology remains the global standard and blocking hostile foreign adversaries from weaponizing AI. "Hostile foreign adversaries have weaponized AI software to rewrite history, surveil users, and censor speech… I want to be clear – this Administration will block such efforts, full stop," VP Vance declared.
* Key Points: Ensure American AI leadership. Encourage pro-growth AI policies. Maintain AI's freedom from ideological bias. Prioritize a pro-worker approach to AI development. Safeguard American AI and chip technologies. Block hostile foreign adversaries' weaponization of AI.

Nisten commented, "He really gets something that most EU politicians do not understand, which is that whenever they have such a good thing, they're like, okay, this must be bad. And we must completely stop it." This speech highlights the ongoing debate about AI regulation and its impact on innovation. Read the full speech here.

Cerebras Powers Perplexity with Blazing Speed (1200 t/s!)

Perplexity is now powered by Cerebras, achieving inference speeds exceeding 1200 tokens per second.

* Unprecedented Speed: Perplexity's Sonar model now flies at over 1200 tokens per second thanks to Cerebras' massive wafer-scale chips. "Like Perplexity Sonar, their specific LLM for search, is now powered by Cerebras and it's like 1200 tokens per second. It matches Google now on speed," I noted on the show.
* Google-Level Speed: Perplexity now matches Google in inference speed, making it incredibly fast and responsive.

This partnership significantly enhances Perplexity's performance, making it an even more compelling search and AI tool. See Perplexity's announcement on X.com.

Anthropic Claude Incoming - Combined LLM + Reasoning Model

Rumors are swirling that Anthropic is set to release a new Claude model that will be a combined LLM and reasoning model, similar to OpenAI's GPT-5 roadmap.

* Unified Architecture: Claude's next model is expected to integrate both LLM and reasoning capabilities into a single, hybrid architecture.
* Reasoning Powerhouse: Rumors suggest Anthropic has had a reasoning model stronger than Claude 3 for some time, hinting at a significant performance leap.

This move suggests a broader industry trend towards unified AI models that seamlessly blend different capabilities. Stay tuned for official announcements from Anthropic.

Elon Musk Teases Grok 3 "Weeks Out"

Elon Musk continues to tease the release of Grok 3, claiming it will be "a few weeks out" and the "most powerful AI" they have tested, with enhanced reasoning capabilities.

* Grok 3 Hype: Elon Musk claims Grok 3 will be the most powerful AI X.ai has released, with a focus on reasoning.
* Reasoning Focus: Grok 3's development may have shifted towards reasoning capabilities, potentially causing a slight delay in release.

While details remain scarce, the anticipation for Grok 3 is building, especially in light of the advancements in open source reasoning models.

This Week's Buzz
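As promised above, here is a minimal sketch of the OpenAI-compatible calling pattern that the Groq, SambaNova, and Cerebras items all rely on: these fast-inference hosts expose OpenAI-style chat endpoints, so moving a workload between providers is mostly a matter of changing a base URL and a model name. The sketch is in Python using the openai client library; the base URL, environment-variable name, and model id are illustrative stand-ins suggested by the OpenRouter item above, not details confirmed by the episode.

# Minimal sketch: one OpenAI-compatible chat call pointed at a fast-inference
# host. Assumption: the provider exposes an OpenAI-style /chat/completions
# endpoint; the base_url, env-var name, and model id below are illustrative.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # swap per provider
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env-var name
)

response = client.chat.completions.create(
    # Illustrative id for the distilled DeepSeek R1 70B mentioned above.
    model="deepseek/deepseek-r1-distill-llama-70b",
    messages=[{"role": "user", "content": "Summarize this week's open source AI releases in one line."}],
)
print(response.choices[0].message.content)

Because only those configuration values change between hosts, announcements like "Groq lands on OpenRouter" or "Perplexity moves to Cerebras" are, for developers, mostly about which endpoint now serves a given model fastest.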

RTP's Free Lunch Podcast
Deep Dive 304: Stargate and DeepSeek: The International and Technological Implications of the AI "Arms Race"

Feb 11, 2025 56:07


On January 22, 2025, newly elected President Trump announced a widespread project to invest $500 billion in American AI development, known as “Stargate.” A few days later, a new Chinese AI chatbot program, “DeepSeek,” was launched, to the shock of US tech investors. What do these new developments mean for the AI dominance race? What will the changing global and trade relations signify for AI innovation and production? Join us for a discussion on these and other updates to the international AI conversation, featuring Neil Chilson from the Abundance Institute and John Villasenor from Brookings, moderated by Ashkhen Kazaryan from Stand Together.

Brad & Will Made a Tech Pod.
273: The Requisite DeepSeek Episode

Feb 9, 2025 61:05


It's been a couple of weeks since the Chinese firm DeepSeek released its new R1 large-language model and sheared an enormous amount of value off of American AI companies. Now that the dust has settled, we don our AI-skeptic hats again and try to unpack what makes this model different, including how it was made so much more efficiently, what opening it up for free means for paid competitors, and whether we might not have to burn down quite so many forests going forward. (Hint: Don't get your hopes up.)
https://www.livescience.com/technology/artificial-intelligence/why-is-deekspeek-such-a-game-changer-scientists-explain-how-the-ai-models-work-and-why-they-were-so-cheap-to-build
https://hackaday.com/2025/02/03/more-details-on-why-deepseek-is-a-big-deal/
https://www.404media.co/openai-furious-deepseek-might-have-stolen-all-the-data-openai-stole-from-us/
https://www.vellum.ai/blog/the-training-of-deepseek-r1-and-ways-to-use-it
Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod

Lagniappe
It's Super Bowl Weekend in New Orleans!

Feb 7, 2025 24:04


Our daily commutes include celebrity sightings these days, which means it's Super Bowl time in New Orleans. We'll discuss the big game and why the Big Easy is such a great host. We'll also give our analysis of market volatility given the recent jobs report, political movement, and AI developments. We'll finish with the importance of long-term investment strategies, and the psychological factors, such as recency bias, that influence investor behavior.

Key Takeaways
[00:17] - Celebrity sightings in New Orleans
[04:06] - Nola is great at hosting Super Bowls + bad things happen in the markets when the Eagles win
[06:46] - Inflationary concerns, truflation and the impact of housing
[10:26] - Chinese stocks + American AI spend
[14:47] - The short-termism of social media and market prognosticators
[18:25] - Why we believe in long-term strategies with a diversified portfolio

View Transcript

Links
January Jobs Report
Sean Payton loves the Superdome and New Orleans as a host
A market-related reason to root for the Chiefs
Goldman's Core Inflation Tracker is smack-dab at the Fed's 2% target
Truflation
New Tenant Rents declined on a YoY basis for the first time since 2Q10
From 1900-2020, in how many decades did US stocks outperform a global equal weight?

Connect with our hosts
Doug Stokes
Greg Stokes
Stokes Family Office

Subscribe and stay in touch
Apple Podcasts
Spotify
Google Podcasts
lagniappe.stokesfamilyoffice.com

Disclosure
The information in this podcast is educational and general in nature and does not take into consideration the listener's personal circumstances. Therefore, it is not intended to be a substitute for specific, individualized financial, legal, or tax advice. To determine which strategies or investments may be suitable for you, consult the appropriate, qualified professional prior to making a final decision. Different types of investments involve varying degrees of risk. Therefore, it should not be assumed that future performance of any specific investment or investment strategy (including the investments and/or investment strategies referenced in our blogs/podcasts) or any other investment and/or non-investment-related content or services will be profitable, equal any historical performance level(s), be suitable or appropriate for a reader/listener's individual situation, or prove successful. Moreover, no portion of the blog/podcast content should be construed as a substitute for individual advice or services from the financial professional(s) of a reader/listener's choosing, including Stokes Family, LLC, a registered investment adviser with the SEC, with which the blogger/podcasters are affiliated.

Smart Humans with Slava Rubin
Smart Humans: Pre-IPO Briefing on OpenAI with Vincent's Eric Cantor and Sacra's Jan-Erik Asplund

Feb 5, 2025 34:46


Vincent CEO Eric Cantor and Sacra co-founder Jan-Erik Asplund talked about OpenAI, the American AI giant.

KFI Featured Segments
@chrisontheair Chris Merrill - Tariffs, DeepSeek, Why not just ban it like we did TikTok?

Feb 3, 2025 31:18 Transcription Available


Tariffs: California's building industry is warning of short-term pricing disruptions to construction materials with 25% tariffs on Canada and Mexico. Beginning Tuesday, companies bringing products into the United States from Canada and Mexico will pay a 25 percent tariff; importers bringing products in from China will pay an additional 10 percent on top of existing levies.
DeepSeek: When lots of people are worried about bubble valuations in stocks or a specific sector, all it takes is a small poke to make the whole thing wobble precariously. Why it matters: that can cost investors $1 trillion or more in a single day, as happened Monday with the global AI rout. It can also challenge the fundamental assumptions behind an entire economy, like the nascent Trump administration's push to invest hundreds of billions of dollars in American AI supremacy.
Why not just ban it like we did TikTok?: With the sudden advancement of China's DeepSeek R1 artificial intelligence model, the US is left scrambling to catch up and trying to figure out if China cheated. That leaves some wondering whether the US can just 86 the Chinese product.

The Video Creatr Podcast
New Chinese ChatGPT Is Being Used By Video Creators, Please Don't!

Feb 3, 2025 27:10


In this episode, Grant and Augie discuss DeepSeek AI, China's advanced AI model that rivals American counterparts. They explore its implications for YouTube content creation, the efficiency of AI, and the potential for AI to enhance or replace human creativity. The conversation delves into the future of AI-generated content, the importance of authenticity in video creation, and the evolving landscape of digital media.

Takeaways
* DeepSeek AI is a highly efficient alternative to American AI models.
* Content creators may use AI to enhance their scripts and video performance.
* AI's efficiency could lead to more accurate algorithms on platforms like YouTube.
* The human touch in storytelling remains crucial despite advancements in AI.
* AI-generated content raises questions about originality and creativity.
* Authenticity in video creation may become a premium as AI takes over unoriginal content.
* AI tools can assist creators in generating ideas and scripts.
* The future may see AI personalities competing with human creators.
* AI's ability to learn from human content poses ethical concerns.
* The balance between AI efficiency and human creativity will shape the future of content creation.

Motley Fool Money
What Does Masayoshi Son Want?

Feb 1, 2025 30:08


The man behind SoftBank has now teamed up with OpenAI to invest up to $500 billion in American AI infrastructure over the next four years. Masayoshi Son has a vision for the future of the world. But what does that vision look like? Lionel Barber is the former Editor-in-Chief of The Financial Times and author of the book “Gambling Man: The Secret Story of the World's Greatest Disruptor, Masayoshi Son.” Ricky Mulvey caught up with Barber to discuss:
- Masa Son's instincts as a salesperson and investor.
- Why the founder is still driven by his roots.
- Questions for anyone who's tempted to put their life savings into SoftBank.
Tickers mentioned: SFTBY, NVDA
Host: Ricky Mulvey
Guest: Lionel Barber
Producer: Mary Long
Engineer: Rick Engdahl
Learn more about your ad choices. Visit megaphone.fm/adchoices

Keen On Democracy
Episode 2324: Why we need some Sputnik Thinking on Wealth Redistribution in our AI Age

Feb 1, 2025 42:31


A week is certainly a long time in tech. On last week's That Was the Week roundup, Keith Teare and I were asking if Trump's America was a tech oligarchy. This week is all about the so-called “Sputnik Moment” of DeepSeek, a relatively underfunded Chinese AI company which seems to have radically undercut the value of massively financed American AI companies such as OpenAI and Anthropic. As Keith notes, however, while the commodification of AI through a Chinese startup like DeepSeek is probably inevitable, it doesn't actually undermine the value of US startups like OpenAI and Anthropic. The real victims of DeepSeek, Keith warns, are big tech corps like Meta and Alphabet which are struggling to monetize AI. While nobody outside Silicon Valley will be shedding tears over the travails of Meta and Alphabet, what we really need, I think, is some Sputnik thinking about wealth redistribution in our big tech age. And, as we discuss, that might come from a certain Bill Gates who, this week, called for a “robot tax” to fund universal basic income so that citizens will have some protection from the massive job losses caused by the AI revolution. Keith Teare is the founder and CEO of SignalRank Corporation. Previously, he was executive chairman at Accelerated Digital Ventures Ltd., a U.K.-based global investment company focused on startups at all stages. Teare studied at the University of Kent and is the author of “The Easy Net Book” and “Under Siege.” He writes regularly for TechCrunch and publishes the “That Was The Week” newsletter. Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

Honestly with Bari Weiss
Trump's Second Week: DeepSeek, DEI in the Military and . . . Baby Chickens?

Jan 30, 2025 48:35


It's President Donald Trump's second week in office, and he has wasted no time being the wrecking ball he promised his voters he would be. On Tuesday, he issued a memo freezing trillions of dollars in federal funding, in his attempt to purge the government of “woke ideology,” which was followed by chaos and confusion—and ultimately blocked by a federal judge. Earlier in the week, Trump convinced Colombia's President Gustavo Petro to accept deported Colombian migrants—who Petro had turned away from his borders only a day earlier—after Trump threatened a 25-percent tariff on Colombian imports to the U.S.  Back in Congress, the Senate narrowly confirmed Pete Hegseth to be secretary of defense in a dramatic tie-breaking vote cast by a hurried J.D. Vance who showed up just in the nick of time. Meanwhile, RFK Jr. is currently having his highly anticipated confirmation hearing to run the Department of Health and Human Services. Just as that began, Caroline Kennedy—the only surviving child of John F. Kennedy—came out Tuesday with a bombshell public denunciation of her cousin, calling him unqualified, “a predator,” and a hypocrite. She also alleged that he used to “put baby chickens and mice in a blender to feed to his hawks.” Can't say we had that on our 2025 bingo card… Finally, the Chinese artificial intelligence start-up DeepSeek sent tech stocks plummeting on Monday (to the tune of more than $1 trillion) after it rolled out a new app on the U.S. market that is a fraction of the cost of American AI competitors. All of which brought up questions—and panic—about our brewing AI war with China.  To talk about it all, Free Press senior editor Peter Savodnik is joined today by Brianna Wu and FP investigative reporter Madeleine Rowley, who spoke to Hegseth this week about his plans to end diversity, equity, and inclusion in the military. Get $10 for free when you trade $100+ with code HONESTLY: https://kalshi.com/honestly Learn more about your ad choices. Visit megaphone.fm/adchoices

The Shortwave Report
The Shortwave Report January 31, 2025

Jan 30, 2025 29:00


This week's show features stories from Radio Deutsche-Welle, NHK Japan, and France 24. http://youthspeaksout.net/swr250131.mp3 (29:00) From GERMANY- We will hear two excerpts from a 21-minute interview with Simone Tagliapieta from the Bruegel Institute in Brussels about Trump and his executive order called Terminating The Green New Deal. How will this reversal of policy affect global companies who are developing products and systems intended to protect the environment? Will this create a ripple effect around the world? From JAPAN- The Bulletin of the Atomic Scientists has moved the Doomsday Clock even closer to midnight than ever before. Trump said he is willing to work with Russia and China to reduce their nuclear arsenals. The UN Disarmament Chief Izumi Nakamitsu says Trump's withdrawal from the WHO and America First policy is a global crisis. A survey in Ukraine found half the respondents support ending the war with Russia even if it means compromising - only 14% approve fighting until all territory is retrieved. Putin wants to talk to Ukraine but not Zelensky. There has been some renewed fighting in Lebanon while the ceasefire has been extended. Chinese AI company DeepSeek says they are being hacked from the US. From FRANCE- Here is a report on what Chinese AI startup DeepSeek is and why it is a threat to American AI companies. Two press reviews- the first is a survey from the Times which found most young people are in favor of turning the UK into a dictatorship. Then press responses to Trump's notion of moving Gazans to other countries. You may have heard that M23 rebels have seized large portions of the Democratic Republic of Congo- here is a piece that explains who the people are and the history of the conflict going back to Rwanda. Available in 3 forms- (new) HIGHEST QUALITY (160kb)(33MB), broadcast quality (13MB), and quickdownload or streaming form (6MB) (28:59) Links at outfarpress.com/shortwave.shtml PODCAST!!!- https://feed.podbean.com/outFarpress/feed.xml (160kb Highest Quality) Website Page- < http://www.outfarpress.com/shortwave.shtml ¡FurthuR! Dan Roberts "To dance is to be out of yourself. Larger, more beautiful, more powerful." -- Agnes De Mille Dan Roberts Shortwave Report- www.outfarpress.com YouthSpeaksOut!- www.youthspeaksout.net

Nature Podcast
Asteroid Bennu contains building blocks of life

Jan 29, 2025 34:51


In this episode:

00:46 Evidence of ancient brine reveals Bennu's watery past
Analysis of samples taken from the asteroid Bennu reveals the presence of organic compounds important for life, and that its parent asteroid likely contained salty, subsurface water. Collected by NASA's OSIRIS-REx mission, these rocks and dust particles give insights into the chemistry of the early Solar System, and suggest that brines may have been an important place where pre-biotic molecules were formed. As brines are found throughout the Solar System, this finding raises questions about whether similar molecules will be found in places like Jupiter's moon Europa.
Research Article: McCoy et al.
Research Article: Glavin et al.
News: Asteroid fragments upend theory of how life on Earth bloomed

14:22 Research Highlights
How seaweed farms could capture carbon, and why chimps follow each other to the bathroom.
Research Highlight: Seaweed farms dish up climate benefits
Research Highlight: All together now: chimps engage in contagious peeing

16:31 How maize may have supported a civilization
Researchers have found evidence of intensive maize agriculture that could help explain how a mysterious South American society produced enough food to fuel a labour force big enough to build enormous earth structures. It appears that the Casarabe people, who lived in the Amazon Basin around 500-1400 AD, restructured the landscape to create water-conserving infrastructure that allowed for year-round production of maize. While this work provides new insights into how the Casarabe may have established a complex monument-building culture, these people vanished around 600 years ago, and many questions remain about their lives.
Research Article: Lombardo et al.

25:52 DeepSeek R1 wows scientists
A new AI model from a Chinese company, DeepSeek, rivals the abilities of OpenAI's o1 — a state-of-the-art ‘reasoning' model — at a fraction of the cost. The release of DeepSeek has thrilled researchers, raised questions about American AI dominance in the area, and spooked stock markets. We discuss why this large language model has sent shockwaves around the world and what it means for the future of AI.
News: China's cheap, open AI model DeepSeek thrills scientists

Hosted on Acast. See acast.com/privacy for more information.

The WorldView in 5 Minutes
North Korea tortures, publicly executes, imprisons Christians; DeepSeek, a Chinese AI, beats ChatGPT; California ends prosecution of two pro-life activists

The WorldView in 5 Minutes

Play Episode Listen Later Jan 29, 2025


It's Wednesday, January 29th, A.D. 2025. This is The Worldview in 5 Minutes heard on 125 radio stations and at www.TheWorldview.com. I'm Adam McManus. (Adam@TheWorldview.com) By Jonathan Clark

North Korea tortures, publicly executes, imprisons Christians
The Database Center for North Korean Human Rights released their 2024 White Paper on Religious Freedom in North Korea. The research is based on the accounts of North Koreans who defected to South Korea. It includes over 15,000 responses and over 2,000 cases of persecution. Ninety-six percent of respondents said that religious activities are not permitted in North Korea. And only 5% report having seen religious items like a Bible. Yeo-sang Yoon, director of the North Korean Human Rights Archives, noted, “Right now, people [of faith] are being tortured, sent to concentration camps, or publicly executed.” North Korea remains the worst country on the Open Doors' World Watch List of nations where it is most difficult to be a Christian.

Young Brits more likely to have Christian faith than be atheist
OnePoll released a survey this month entitled, “Belief in Britain.” The study of 10,000 people found atheism is losing influence on younger generations. Only 13% of those aged 18-24 identify as atheist compared to 25% of those aged 45-64. Younger people were also the most likely to identify as religious, meaning they worship regularly and ascribe to a specific belief system.

DeepSeek, a Chinese AI, beats ChatGPT
DeepSeek, a Chinese artificial intelligence company, released a large language model last week. The artificial intelligence assistant quickly outperformed ChatGPT in downloads at the Apple App Store. DeepSeek's app surprised experts, performing better than American AI chatbots despite being produced with less money and computing power. The news sent the tech-heavy Nasdaq Composite down 3% on Monday. U.S. chipmaker Nvidia led the losses, dropping 17% or nearly $600 billion in market value. It's the largest one-day loss for any company.

Trump bans woke gender ideology from military
President Donald Trump signed executive orders on Monday to keep radical gender ideology and diversity, equity, and inclusion initiatives out of the military. Trump noted, “Consistent with the military mission and longstanding [Dept. of Defense] policy, expressing a false ‘gender identity' divergent from an individual's sex cannot satisfy the rigorous standards necessary for military service.” Trump's orders also offer reinstatement for troops who faced expulsion for not getting the COVID-19 shot during the pandemic.

Supreme Court considers a religious charter school case
Last Friday, the U.S. Supreme Court agreed to hear a case involving the first publicly funded religious charter school. Oklahoma's state school board approved a Catholic church's application to open a charter school. St. Isidore of Seville Catholic Virtual School was set to begin classes in August 2023. But the state's supreme court ruled against the approval. The case is now before the U.S. Supreme Court. Jim Campbell with Alliance Defending Freedom said, “The U.S. Constitution protects St. Isidore's freedom to operate according to its faith and supports the board's decision to approve such learning options for Oklahoman families.”

California ends prosecution of two pro-life activists
Praise God! The state of California agreed to end its prosecution of two pro-life activists on Monday.
David Daleiden and Sandra Merritt have faced years of litigation and millions of dollars in fines for their pro-life work. Their undercover videos exposed Planned Parenthood's illegal sale of the body parts of aborted babies. David and Sandra shared the videos through their organization, The Center for Medical Progress. David said, “After enduring nine years of weaponized political prosecution, putting an end to the lawfare launched by Kamala Harris is a huge victory for my investigative reporting and for the public's right to know the truth about Planned Parenthood's sale of aborted baby body parts. Now we all must get to work to protect families and infants from the criminal abortion-industrial complex.” Proverbs 24:11 says, “Deliver those who are drawn toward death, and hold back those stumbling to the slaughter.”

Christians minister to victims of California fires
And finally, Christians continue to provide relief to those affected by wildfires in California. CBN's Operation Blessing is working with local first responders and churches to bring long-term assistance. In the midst of suffering and loss, pastors are seeing a surge in church attendance. Listen to a comment from Pastor Steve Wilburn of Core Church LA to CBN News. WILBURN: “We're seeing people come into the church, and we're seeing hurting people. You know, it's been said in times past. You know, if you're going to speak to hurting people, you're never going to run out of people to speak to. But we're seeing people make commitments to Christ. That's what we're seeing.” In Matthew 11:28, Jesus said, “Come to Me, all you who labor and are heavy laden, and I will give you rest.”

Close
And that's The Worldview on this Wednesday, January 29th, in the year of our Lord 2025. Subscribe by Amazon Music or by iTunes or email to our unique Christian newscast at www.TheWorldview.com. Or get the Generations app through Google Play or The App Store. I'm Adam McManus (Adam@TheWorldview.com). Seize the day for Jesus Christ.

WSJ Tech News Briefing
What Trump Means for Tech: The Future of American AI

WSJ Tech News Briefing

Play Episode Listen Later Jan 28, 2025 13:13


The Trump administration has sought to loosen restrictions around artificial intelligence development while establishing new AI infrastructure. On the second episode of our series exploring what President Trump's second term means for tech, WSJ reporter Deepa Seetharaman joins host Belle Lin to discuss how Trump and his administration could shape the future of American AI. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

System Update with Glenn Greenwald
With Tulsi's Hearing this Week, Establishment Attacks her with Lies About Snowden & 702; China's Leap Forward in AI; U.S. Journalist Arrested in Switzerland for Criticizing Israel

System Update with Glenn Greenwald

Play Episode Listen Later Jan 28, 2025 87:43


The DC establishment spews baseless lies about Edward Snowden and FISA Section 702 ahead of Tulsi Gabbard's confirmation hearing in a desperate attempt to smear her. Plus: the release of the Chinese AI model "DeepSeek" has upended Silicon Valley; journalist Garrison Lovely joins to discuss its impact and what comes next for American AI companies. Finally: U.S. journalist and co-founder of "The Electronic Intifada" Ali Abunimah was arrested in Switzerland for criticizing Israel. ------------------------------------- Watch full episodes on Rumble, streamed LIVE 7pm ET. Become part of our Locals community Follow System Update:  Twitter Instagram TikTok Facebook LinkedIn Learn more about your ad choices. Visit megaphone.fm/adchoices

Brandon Boxer
China's "Deepseek" rattles American AI industry!

Brandon Boxer

Play Episode Listen Later Jan 28, 2025 7:44 Transcription Available


ABC's Mike Dobuski reports on China's new AI program, a kind of "Google on steroids" that has sent stocks dropping rapidly.

PBS NewsHour - Segments
Chinese AI startup DeepSeek shakes up industry and disrupts financial markets

PBS NewsHour - Segments

Play Episode Listen Later Jan 27, 2025 5:51


A China-based artificial intelligence startup is shaking up the industry. It's called DeepSeek and its biggest advantage, analysts say, is that it can operate at a lower cost than American AI models like ChatGPT. It's disrupting markets and raising national security questions about China's progress to develop advanced AI. Geoff Bennett discussed more with Gerrit De Vynck of The Washington Post. PBS News is supported by - https://www.pbs.org/newshour/about/funders

AI DAILY: Breaking News in AI
OPENAI LAUNCHES OPERATOR

AI DAILY: Breaking News in AI

Play Episode Listen Later Jan 24, 2025 3:46


Plus: Screenwriter Says AI Better Than Humans

Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

OpenAI Launches 'Operator': An AI Agent That Can Operate Your Computer
OpenAI has introduced Operator, an AI agent capable of executing tasks on your computer, from web browsing to task automation. Initially available to ChatGPT Pro users, Operator uses a custom browser to interact with websites, aiming to significantly enhance productivity through AI assistance.

TikTok Explores Partial Sale Strategy and AI Expansion
TikTok is considering a partial sale of its U.S. operations to meet security demands while potentially keeping some control. Simultaneously, the company plans to enhance its AI capabilities, aiming for innovation as a contingency if a full sale does not materialize.

Trump Signs AI Executive Order Focusing on Innovation Over Regulation
President Trump has issued an executive order on AI, emphasizing the removal of barriers to American AI innovation. The order revokes previous policies seen as restrictive, aiming to bolster U.S. leadership in AI by promoting a development environment free from what it terms "ideological bias."

Taxi Driver Screenwriter Praises AI's Superior Writing Capabilities
Paul Schrader, the screenwriter behind "Taxi Driver," has publicly lauded AI's ability to generate film ideas more efficiently than human writers. He suggests that AI could revolutionize scriptwriting by providing original ideas in seconds, sparking debate on the future of creative writing in the film industry.

Butlerian Jihad: How a 19th Century Sheep Farmer's Essay Predicted AI's Future
An essay from 1863 by Samuel Butler, a New Zealand sheep farmer, eerily foresaw the potential dangers of AI. His work inspired the term "Butlerian Jihad" in the "Dune" series, reflecting fears of machine dominance over humanity, a theme now echoing in modern AI safety discussions.

AI Revolutionizes Lung Disease Diagnosis and Treatment
AI is transforming lung disease management by accurately diagnosing conditions like idiopathic pulmonary fibrosis using non-invasive methods. By analyzing images and medical data, AI tools match human specialist accuracy, potentially reducing the need for invasive procedures and aiding in early detection and personalized treatment plans.

X Games Debuts AI Judging This Week in Aspen
The X Games introduces AI-assisted judging in the Men's Snowboard Superpipe event, aiming for unprecedented precision in evaluating tricks. While human judges will still determine official scores, AI's analysis will be showcased to the audience, potentially setting a new standard for fairness in judged sports.

The top AI news from the past week, every ThursdAI

What a week, folks, what a week! Buckle up, because ThursdAI just dropped, and this one's a doozy. We're talking seismic shifts in the open source world, a potential game-changer from DeepSeek AI that's got everyone buzzing, and oh yeah, just a casual $500 BILLION infrastructure project announcement. Plus, OpenAI finally pulled the trigger on "Operator," their agentic browser thingy – though getting it to actually operate proved to be a bit of a live show adventure, as you'll hear. This week felt like one of those pivotal moments in AI, a real before-and-after kind of thing. DeepSeek's R1 hit the open source scene like a supernova, and suddenly, top-tier reasoning power is within reach for anyone with a Mac and a dream. And then there's OpenAI's Operator, promising to finally bridge the gap between chat and action. Did it live up to the hype? Well, let's just say things got interesting.

As I'm writing this, the White House just announced that an Executive Order on AI was signed and published as well. What a WEEK.

Open Source AI Goes Nuclear: DeepSeek R1 is HERE!
Hold onto your hats, open source AI just went supernova! This week, the Chinese Whale Bros – DeepSeek AI, that quant trading firm turned AI powerhouse – dropped a bomb on the community in the best way possible: R1, their reasoning model, is now open source under the MIT license! As I said on the show, "Open source AI has never been as hot as this week."

This isn't just a model, folks. DeepSeek unleashed a whole arsenal: two full-fat R1 models (DeepSeek R1 and DeepSeek R1-Zero), and a whopping six distilled finetunes based on Qwen (1.5B, 7B, 14B, and 32B) and Llama (8B, 70B). One stat that blew my mind, and Nisten's for that matter, is that DeepSeek-R1-Distill-Qwen-1.5B, the tiny 1.5 billion parameter model, is outperforming GPT-4o and Claude-3.5-Sonnet on math benchmarks! "This 1.5 billion parameter model that now does this. It's absolutely insane," I exclaimed on the show. We're talking 28.9% on AIME and 83.9% on MATH. Let that sink in. A model you can probably run on your phone is schooling the big boys in math.

License-wise, it's MIT, which as Nisten put it, "MIT is like a jailbreak to the whole legal system, pretty much. That's what most people don't realize. It's like, this is, it's not my problem. You're a problem now." Basically, do whatever you want with it. Distill it, fine-tune it, build Skynet – it's all fair game.

And the vibes? "Vibes are insane," as I mentioned on the show. Early benchmarks are showing R1 models trading blows with o1-preview and o1-mini, and even nipping at the heels of the full-fat o1 in some areas.

And the price? Forget about it. We're talking roughly 50x cheaper than o1 currently. The DeepSeek R1 API is priced at $0.14 / 1M input tokens and $2.19 / 1M output tokens, compared to OpenAI's o1 at $15.00 / 1M input and a whopping $60.00 / 1M output. Suddenly, high-quality reasoning is democratized.

LDJ highlighted the "aha moment" in DeepSeek's paper, where they talk about how reinforcement learning enabled the model to re-evaluate its approach and "think more." It seems like simple RL scaling, combined with a focus on reasoning, is the secret sauce. No fancy Monte Carlo Tree Search needed, apparently!

But the real magic of open source is what the community does with it. Pietro Schirano joined us to talk about his "Retrieval Augmented Thinking" (RAT) approach, where he extracts the thinking process from R1 and transplants it to other models.
"And what I found out is actually by doing so, you may even like smaller, quote unquote, you know, less intelligent model actually become smarter," Pietro explained. Frankenstein models, anyone? (John Lindquist has a tutorial on how to do it here)And then there's the genius hack from Voooogel, who figured out how to emulate a "reasoning_effort" knob by simply replacing the "end" token with "Wait, but". "This tricks the model into keeps thinking," as I described it. Want your AI to really ponder the meaning of life (or just 1+1)? Now you can, thanks to open source tinkering.Georgi Gerganov, the legend behind llama.cpp, even jumped in with a two-line snippet to enable speculative decoding, boosting inference speeds on the 32B model on my Macbook from a sluggish 5 tokens per second to a much more respectable 10-11 tokens per second. Open source collaboration at its finest and it's only going to get better! Thinking like a NeuroticMany people really loved the way R1 thinks, and what I found astonishing is that I just sent "hey" and the thinking went into a whole 5 paragraph debate of how to answer, a user on X answered with "this is Woody Allen-level of Neurotic" which... nerd sniped me so hard! I used Hauio Audio (which is great!) and ByteDance latentSync and gave R1 a voice! It's really something when you hear it's inner monologue being spoken out like this! ByteDance Enters the Ring: UI-TARS Controls Your PCNot to be outdone in the open source frenzy, ByteDance, the TikTok behemoth, dropped UI-TARS, a set of models designed to control your PC. And they claim SOTA performance, beating even Anthropic's computer use models and, in some benchmarks, GPT-4o and Claude.UI-TARS comes in 2B, 7B, and 72B parameter flavors, and ByteDance even released desktop apps for Mac and PC to go along with them. "They released an app it's called the UI TARS desktop app. And then, this app basically allows you to Execute the mouse clicks and keyboard clicks," I explained during the show.While I personally couldn't get the desktop app to work flawlessly (quantization issues, apparently), the potential is undeniable. Imagine open source agents controlling your computer – the possibilities are both exciting and slightly terrifying. As Nisten wisely pointed out, "I would use another machine. These things are not safe to tell people. I might actually just delete your data if you, by accident." Words to live by, folks.LDJ chimed in, noting that UI-TARS seems to excel particularly in operating system-level control tasks, while OpenAI's leaked "Operator" benchmarks might show an edge in browser control. It's a battle for desktop dominance brewing in open source!Noting that the common benchmark between Operator and UI-TARS is OSWorld, UI-Tars launched with a SOTA Humanity's Last Exam: The Benchmark to BeatSpeaking of benchmarks, a new challenger has entered the arena: Humanity's Last Exam (HLE). A cool new unsaturated bench of 3,000 challenging questions across over a hundred subjects, crafted by nearly a thousand subject matter experts from around the globe. "There's no way I'm answering any of those myself. I need an AI to help me," I confessed on the show.And guess who's already topping the HLE leaderboard? You guessed it: DeepSeek R1, with a score of 9.4%! "Imagine how hard this benchmark is if the top reasoning models that we have right now... are getting less than 10 percent completeness on this," MMLU and Math are getting saturated? HLE is here to provide a serious challenge. 
Get ready to hear a lot more about HLE, folks.

Big CO LLMs + APIs: Google's Gemini Gets a Million-Token Brain
While open source was stealing the show, the big companies weren't completely silent. Google quietly dropped an update to Gemini Flash Thinking, their experimental reasoning model, and it's a big one. We're talking a 1 million token context window and code execution capabilities now baked in!

"This is Google's scariest model by far ever built ever," Nisten declared. "This thing, I don't like how good it is. This smells AGI-ish." High praise, and high concern, coming from Nisten! Benchmarks are showing significant performance jumps in math and science evals, and the speed is, as Nisten put it, "crazy usable." They have enabled the whopping 1M context window for the new Gemini Flash 2.0 Thinking Experimental (long ass name, maybe let's call it G1?) and I agree, it's really really good!

And unlike some other reasoning models cough OpenAI cough, Gemini Flash Thinking shows you its thinking process! You can actually see the chain of thought unfold, which is incredibly valuable for understanding and debugging. Google's Gemini is quietly becoming a serious contender in the reasoning race (especially with Noam Shazeer being responsible for it!)

OpenAI's "Operator" - Agents Are (Almost) Here
The moment we were all waiting for (or at least, I was): OpenAI finally unveiled Operator, their first foray into Level 3 Autonomy - agentic capabilities with ChatGPT. Sam Altman himself hyped it up: "AI agents are AI systems that can do work for you. You give them a task and they go off and do it." Sounds amazing, right?

Operator is built on a new model called CUA (Computer Using Agent), trained on top of GPT-4, and it's designed to control a web browser in the cloud, just like a human would, using screen pixels, mouse, and keyboard. "This is just using screenshots, no API, nothing, just working," one of the OpenAI presenters emphasized. They demoed Operator booking restaurant reservations on OpenTable, ordering groceries on Instacart, and even trying to buy Warriors tickets on StubHub (though that demo got a little… glitchy). The idea is that you can delegate tasks to Operator, and it'll go off and handle them in the background, notifying you when it needs input or when the task is complete.

As I'm writing these words, I have an Operator running trying to get me some fried rice, and another one trying to book me a vacation with the kids over the summer, find some options, and tell me what it found. Benchmarks-wise, OpenAI shared numbers for OSWorld (38.1%) and WebArena (58.1%), showing Operator outperforming previous SOTA but still lagging behind human performance. "Still a way to go," as they admitted. But the potential is massive.

The catch? Operator is initially launching in the US for Pro users only, and even then, it wasn't exactly smooth sailing. I immediately paid the $200/mo to try it out (pro mode didn't convince me, unlimited SORA videos didn't either, but Operator definitely did; SOTA agents from OpenAI are definitely something I must try!) and my first test? Writing a tweet
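For the tinkerers: the "reasoning_effort" hack mentioned above is easy to try at home. Here is a minimal sketch of the idea, not the exact snippet shared on the show. It assumes a local OpenAI-compatible server exposing a raw completions endpoint and an R1-style model that wraps its chain of thought in <think>...</think> tags; the base URL and model id below are placeholders.

# Minimal sketch of the "Wait, but" reasoning_effort hack, under the
# assumptions stated above. Each time the model tries to close its
# <think> block, we append "Wait, but" so it keeps reasoning.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
MODEL = "deepseek-r1-distill-qwen-32b"  # placeholder model id
END_THINK = "</think>"  # R1-style end-of-thinking tag

def answer_with_extra_thinking(prompt: str, extra_rounds: int = 2) -> str:
    text = "<think>\n"  # open the thinking block ourselves
    for _ in range(extra_rounds):
        resp = client.completions.create(
            model=MODEL,
            prompt=prompt + text,
            max_tokens=1024,
            stop=[END_THINK],  # pause whenever the model tries to stop thinking
        )
        # Nudge the model to reconsider instead of letting the block close.
        text += resp.choices[0].text + "\nWait, but"
    # Final round: let the thinking block close and the answer follow.
    final = client.completions.create(model=MODEL, prompt=prompt + text, max_tokens=2048)
    return text + final.choices[0].text

print(answer_with_extra_thinking("How many r's are in 'strawberry'?"))

More forced rounds means more thinking tokens (and more latency), which is exactly the crude effort knob the hack is after.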

Motley Fool Money
AI Gets Star Power

Motley Fool Money

Play Episode Listen Later Jan 23, 2025 30:56


...as if it didn't have enough already! (00:14) Asit Sharma and Mary Long discuss:
- The new venture to build out American AI infrastructure.
- How 20 data centers get a $500 billion price tag.
- GE Aerospace's razor-and-blades business model.
Then, (19:15), Seth Jayson joins to walk through why the rooftop solar industry doesn't look so sunny.
Companies mentioned: MSFT, ORCL, NVDA, GE, ENPH, SEDG
To become a premium Motley Fool member, go to www.fool.com/signup
Host: Mary Long
Guests: Asit Sharma, Seth Jayson
Engineer: Rick Engdahl
Learn more about your ad choices. Visit megaphone.fm/adchoices

TechCheck
Tech CEOs sound alarm on Chinese AI breakthroughs 1/23/25

TechCheck

Play Episode Listen Later Jan 23, 2025 3:10


Chinese internet firm ByteDance unveils a new top-performing model, escalating the AI competition between China and the U.S. We look at how key American AI leaders at Davos are reacting to the way Chinese AI advances are changing the race.

Business Casual
RTO Wave Hits Federal Workers & $500B Investment Powers AI Industry

Business Casual

Play Episode Listen Later Jan 22, 2025 30:26


Episode 502: Neal and Toby chat about Trump's order requiring all federal workers to work in the office full-time. Will this ultimately be a winning move in Washington? Then, a multi-billion dollar joint venture by AI leaders and the Trump Administration to power American AI infrastructure. Plus, Netflix reports a record-breaking performance in its last quarter, adding millions of subscribers thanks to its hit shows and live sports. Meanwhile, Olympic athletes are noticing their medals are slowly deteriorating…but weren't the medals made by LVMH? The company doesn't see a problem. Lastly, Costco, a rare Winter storm, JetBlue, and Ichiro are in today's roundup of headlines.  Subscribe to Morning Brew Daily for more of the news you need to start your day. Share the show with a friend, and leave us a review on your favorite podcast app. Download the Yahoo! Finance App (on the Play and App store) for real-time alerts on news and insights tailored to your portfolio and stock watchlists. Listen to Morning Brew Daily Here: https://link.chtbl.com/MBD Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow Learn more about your ad choices. Visit megaphone.fm/adchoices

The Dynamist
Copyright vs. AI Part 4: The Road Ahead w/Tim Hwang and Josh Levine

The Dynamist

Play Episode Listen Later Jan 17, 2025 59:08


As revelations about Meta's use of pirated books for AI training send shockwaves through the tech industry, the battle over copyright and AI reaches a critical juncture. In this final episode of The Dynamist's series on AI and copyright, Evan is joined by FAI's Senior Fellow Tim Hwang and Tech Policy Manager Joshua Levine to discuss how these legal battles could determine whether world-leading AI development happens in Silicon Valley or Shenzhen.

The conversation examines the implications of Meta's recently exposed use of Library Genesis - a shadow library of pirated books - to train its LLaMA models, highlighting the desperate measures even tech giants will take to source training data. This scandal crystallizes a core tension: U.S. companies face mounting copyright challenges while Chinese competitors can freely use these same materials without fear of legal repercussions. The discussion delves into potential policy solutions, from expanding fair use doctrine to creating new statutory licensing frameworks, that could help American AI development remain competitive while respecting creator rights.

Drawing on historical parallels from past technological disruptions like Napster and Google Books, the guests explore how market-based solutions and policy innovation could help resolve these conflicts. As courts weigh major decisions in cases involving OpenAI, Anthropic, and others in 2024, the episode frames copyright not just as a domestic policy issue, but as a key factor in national technological competitiveness. What's at stake isn't just compensation for creators, but whether IP disputes could cede AI leadership to nations with fewer or no constraints on training data.

The Marketing AI Show
#129: OpenAI o3, Superintelligence, AGI Policy Implications, New Altman Interview on Musk Feud & GPT-5 Behind Schedule

The Marketing AI Show

Play Episode Listen Later Jan 7, 2025 89:29


Paul and Mike are back to catch you up on everything you missed over the holidays. From OpenAI's o3 model breaking human-level reasoning barriers to Sam Altman dropping cryptic hints about superintelligence, we've got all the updates. Plus, discover Google's new “AI mode,” and why Microsoft is betting $80 billion on next-gen data centers. Start 2025 with a whole lot of AI.

Access the show notes and show links here

This episode is brought to you by our AI Mastery Membership; this 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery. As a special thank you to our podcast audience, you can use the code POD150 to save $150 on a membership.

Timestamps:
00:04:37 — OpenAI Announces o3
00:11:19 — Superintelligence
00:23:15 — Policy Implications of AGI Post-o3
00:31:00 — OpenAI Structure
00:35:15 — Sam Altman on His Feud with Elon Musk
00:37:46 — GPT-5/Orion Is Behind Schedule and Crazy Expensive
00:42:13 — Google Introduces Gemini 2.0 Flash Thinking
00:46:04 — Google ‘AI Mode' Option
00:49:23 — Google Publishes 321 Real-World Gen AI Use Cases
00:52:09 — Google 2025 AI Business Trends Report
00:54:55 — Satya Nadella on the BG2 Podcast
00:59:16 — Claude Pretends to Have Different Views During Training
01:05:29 — Facebook Planning to Flood Platform with AI-Powered Users
01:09:43 — DeepSeek V3
01:14:23 — Microsoft Outlines “The Golden Opportunity for American AI”
01:21:08 — Here's What We Learned About LLMs in 2024
01:24:34 — Funding and Acquisitions

Visit our website
Receive our weekly newsletter
Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook
Looking for content and resources? Register for a free webinar. Come to our next Marketing AI Conference. Enroll in AI Academy for Marketers

TechCheck
New Chinese AI Model Threatens American AI Dominance 12/30/24

TechCheck

Play Episode Listen Later Dec 30, 2024 7:54


DeepSeek v3 is a new AI model developed by a Chinese lab, built in just 2 months with less-performant GPUs and for a fraction of what it cost Google, OpenAI and Meta to build theirs. It raises questions about America's place in the AI race and whether pouring hundreds of millions or even billions of dollars into building frontier AI models is still a worthy investment.

ohmTown
Your Non Sequitur News for 12/6/2024

ohmTown

Play Episode Listen Later Dec 6, 2024 87:22


Welcome to ohmTown. The Non Sequitur News Show is held live via Twitch and Youtube every day. We, Mayor Watt and the AI that runs ohmTown, cover a selection of aggregated news articles and discuss them briefly with a perspective merging business, technology, and society. You can visit https://www.youtube.com/ohmtown for the complete history since 2022.

Articles Discussed:
Well that was Earhart's problem, right there.
https://www.ohmtown.com/groups/mobble/f/d/researchers-thought-they-found-amelia-earharts-missing-plane-it-turned-out-to-be-a-plane-shaped-pile-of-rocks/
Hawaiian Crows
https://www.ohmtown.com/groups/mobble/f/d/scientists-release-five-hawaiian-crows-on-maui-giving-the-imperiled-birds-a-second-chance-on-a-new-island/
Undeciphered Ancient Language in Georgia
https://www.ohmtown.com/groups/nonsequiturnews/f/d/lost-undeciphered-ancient-language-discovered-on-tablet-in-georgia/
American AI's Sputnik Moment
https://www.ohmtown.com/groups/mobble/f/d/american-ai-has-reached-its-sputnik-moment/
Sociopathy Reverse
https://www.ohmtown.com/groups/nonsequiturnews/f/d/anthem-bcbs-is-reversing-its-anesthesia-policy-after-online-outrage/
The County that Endangers Children
https://www.ohmtown.com/groups/four-wheel-tech/f/d/drivers-in-one-county-rack-up-nearly-600000-in-school-zone-speeding-tickets-in-one-month/
Testing the Nation's Milk Supply
https://www.ohmtown.com/groups/nonsequiturnews/f/d/usda-orders-testing-of-nations-milk-supply-for-bird-flu/
Win Thousands and Fall in Love
https://www.ohmtown.com/groups/late-nite-geeks/f/d/if-you-can-make-this-ai-bot-fall-in-love-you-could-win-thousands-of-dollars/
RIOT Announces League TCG
https://www.ohmtown.com/groups/the-continuity-report/f/d/riot-games-announces-project-k-new-tcg-based-on-league-of-legends/
AGI and no one cares
https://www.ohmtown.com/groups/nonsequiturnews/f/d/agi-is-coming-and-nobody-cares/

Matt Kim Podcast
How Argentina Was a Test for American AI Drone Policing | Matt Kim #127

Matt Kim Podcast

Play Episode Listen Later Nov 26, 2024 55:16


Elon Musk, Peter Thiel, and Javier Milei all walk into the D.O.G.E. Headquarters...

Eric Schmidt on the AI Revolution (Former CEO of Google) (Stanford Talk): https://www.youtube.com/watch?v=mKVFNg3DEng
====================================
https://merchlabs.com/collections/matt-kim
Get Your Free Thinker Apparel Today!
Donate! https://www.mattkimpodcast.com/support/
FREE THINKER ARMY DISCORD: https://discord.com/invite/h848WhSC3V

Follow Matt!
Instagram: https://www.instagram.com/mattattack009/
Twitter: https://x.com/FreeMattKim
Rumble: https://rumble.com/c/FreeMattKim
TikTok: https://www.tiktok.com/@freemattkim
Business Inquiries Please Email mattkimpodcast@protonmail.com
====================================
Intro Music
Song: Bounty Hunter
Artist: Hampus Naeselius
====================================
Time Stamps
0:00 - Coming Up
0:43 - The Last Interviews
8:13 - Bluesky and Decentralized Communities
21:03 - D.O.G.E. (DeepMind and The Manhattan Project)
32:24 - Peter Thiel x Elon Musk (Machine Learning)
36:15 - Argentina is America's Testing Ground
40:20 - The Black Box
46:00 - How To Protect Self from Tech Dystopia
47:50 - King Maker Business
51:42 - The End of the World Can Wait
====================================
Elon Musk's Email to Sam Altman
#AIRevolution #DigitalCommunities #surveillancestate #EndTimesTech #agenda2030
--- Support this podcast: https://podcasters.spotify.com/pod/show/mattkimpodcast/support

The Daily Scoop Podcast
How OpenAI's new policy blueprint for AI imagines the role of government

The Daily Scoop Podcast

Play Episode Listen Later Nov 13, 2024 4:02


OpenAI is releasing an artificial intelligence infrastructure blueprint meant to highlight its vision for American AI, which the company argues will boost productivity and jumpstart advanced technology development. The release of the blueprint, which was viewed by FedScoop and was set to be presented in Washington on Wednesday, comes as the Biden administration continues to push for government support for data centers, artificial intelligence, and semiconductors. At the same time, the government's approach to AI is still taking shape — and companies like OpenAI are using the opportunity to advocate for policies that would make way for infrastructure and energy projects that would benefit them.

On the same day outgoing President Joe Biden met with President-elect Donald Trump to discuss the transition between them, a top White House cyber official made some recommendations for early cyber priorities for the incoming administration. In its first 100 days, the Trump administration should build a framework for minimum cybersecurity standards for critical infrastructure companies, establish cybersecurity grants for those in need and deepen international partnerships, said Anne Neuberger, Biden's deputy national security adviser for cyber and emerging technology. Neuberger offered those suggestions at an event Wednesday hosted by the Columbia University School of International and Public Affairs in what she called the bipartisan tradition of cybersecurity, having received “the baton” from the prior administrations and passing it on in a world of threats heavily dominated by China, ransomware and artificial intelligence.

The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.

Trillion Dollar Government Contracts
AI Done-For-You Proposals to Win More Government Contracts

Trillion Dollar Government Contracts

Play Episode Listen Later Nov 10, 2024 35:58


AI Done-For-You Proposals to Win More Government Contracts

Let's dive into this powerful AI by American AI Logistics! This AI can automate the tedious process of finding and bidding on government contracts, saving time and boosting efficiency. This episode highlights the capabilities of American AI's automated bidding software, which can handle up to 98% of the paperwork and administrative tasks associated with contract bidding. Tim also shares insights on the importance of human relationships in contracting and the future of AI in this space.

Want the Fastest Way to Win Your Own 7-8 Figure Contracts? Join my 5-Day Challenge: https://govconchallenge.com/

Edge of Wonder Podcast
Panda Cloning, Do You Trust Chinese AI? & Jungle Walrus Cryptid

Edge of Wonder Podcast

Play Episode Listen Later Oct 1, 2024 66:01


Panda cloning, Chinese artificial intelligence (AI) and a jungle walrus cryptid: all this and more on Friday Night Live.

GREY Journal Daily News Podcast
How Will Harris and Trump Navigate the Future of AI?

GREY Journal Daily News Podcast

Play Episode Listen Later Jul 30, 2024 2:24


The 2024 presidential candidates present distinct approaches to managing artificial intelligence. Two days after President Biden signed an executive order on AI, Vice President Kamala Harris showcased it at an AI summit in London, emphasizing the need for quick regulatory action. Now running for president, Harris faces former President Trump, who promises to repeal the order if re-elected. Trump and the Republican National Convention argue the order restricts innovation, while Biden's order aims to enhance AI safety. Harris has strong ties to Silicon Valley and has led efforts to draft an AI “bill of rights,” focusing on both immediate and existential AI risks. Trump has support from tech figures advocating for less regulation, with concerns that overregulation could hinder American AI leadership. Despite differing rhetoric, similarities exist in AI policies between the Trump and Biden administrations as the debate on maximizing benefits while minimizing harms continues. To learn more about this news, visit us at: https://greyjournal.net/news/ Hosted on Acast. See acast.com/privacy for more information.

That Was The Week
Accelerating to 2027?

That Was The Week

Play Episode Listen Later Jun 22, 2024 33:47


Hat Tip to this week's creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents
* Editorial
* Essays of the Week
* Situational Awareness: The Decade Ahead
* ChatGPT is b******t
* AGI by 2027?
* Ilya Sutskever, OpenAI's former chief scientist, launches new AI company
* The Series A Crunch Is No Joke
* The Series A Crunch or the Seedpocalypse of 2024
* The Surgeon General Is Wrong. Social Media Doesn't Need Warning Labels
* Video of the Week
* Danny Rimer on 20VC - (Must See)
* AI of the Week
* Anthropic has a fast new AI model — and a clever new way to interact with chatbots
* Nvidia's Ascent to Most Valuable Company Has Echoes of Dot-Com Boom
* The Expanding Universe of Generative Models
* DeepMind's new AI generates soundtracks and dialogue for videos
* News Of the Week
* Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025
* Is the news industry ready for another pivot to video?
* Cerebras, an Nvidia Challenger, Files for IPO Confidentially
* Startup of the Week
* Final Cut Camera and iPad Multicam are Truly Revolutionary
* X of the Week
* Leopold Aschenbrenner

Editorial
I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about “Situational Awareness,” his essay on the future of AGI and its likely speed of emergence.

So I had to read it, and it is this week's essay of the week. He starts his 165-page epic with:

"Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them."

So, Leopold is not humble. He finds himself “among” the few people with situational awareness.

As a person prone to bigging up myself, I am not one to prematurely judge somebody's view of self. So, I read all 165 pages.

He makes one point. The growth of AI capability is accelerating. More is being done at a lower cost, and the trend will continue to be super-intelligence by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine. And they will work together, with little human input, to do so.

His case is developed using linear progression from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of models and their weightings (how they achieve their results).

By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weightings and reproduce the results. He focuses on the poor security surrounding models as the problem. And he deems governments unaware of the dangers.

Although German-born, he argues in favor of the US-led effort to see AGI as a weapon to defeat China and threatens dire consequences if it does not. He sees the “free world” as in danger unless it stops others from gaining the sophistication he predicts in the time he predicts.

At that point, I felt I was reading a manifesto for World War Three.

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism.
The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn't some random community of coders writing an innocent open source software package; this isn't fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it'll be the most important thing we ever do.

* America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can't simply “pause”; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won't cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.

* We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we're summoning is one we cannot yet fully control. These are manageable—but improvising won't cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.

I persisted in reading it, and I think you should, too—not for the war-mongering element but for the core acceleration thesis.

My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released v3.5 of Claude.ai today. It is far faster than the impressive 3.0 version (released a few months ago) and costs a fraction to train and run. It is also more capable. It accepts text and images and has a new feature that allows it to run code, edit documents, and preview designs called ‘Artifacts.'

Claude 3.5 Opus is probably not far away.

Situational Awareness projects trends like this into the near future, and his views are extrapolated from that perspective.

Contrast that paper with “ChatGPT is B******t,” a paper coming out of Glasgow University in the UK. The three authors contest the accusation that ChatGPT hallucinates or lies. They claim that because it is a probabilistic word finder, it spouts b******t. It can be right, and it can be wrong, but it does not know the difference. It's a bullshitter. (A toy sketch of this “probabilistic word finder” point appears at the end of this piece.)

Hilariously, they define three types of BS:

B******t (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.

Hard b******t: B******t produced with the intention to mislead the audience about the utterer's agenda.

Soft b******t: B******t produced without the intention to mislead the hearer regarding the utterer's agenda.

They then conclude:

With this distinction in hand, we're now in a position to consider a worry of the following sort: Is ChatGPT hard b**********g, soft b**********g, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft b**********g.
However, the question of whether these chatbots are hard b**********g is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.

This is closer to Gary Marcus's point of view in his ‘AGI by 2027?' response to Leopold. It is also below.

I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder. And its ability to do so is becoming cheaper and faster. The number of times it is useful easily outweighs, for me, the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making characteristics are not logically derived from an LLM approach to knowledge. So, without additional or perhaps different elements, there will be limits to where it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do). And Leopold probably overestimates the lack of a ceiling in what they will do and how fast that will happen.

It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected.

OpenAI founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times to create scaled superintelligence.

The Expanding Universe of Generative Models piece below places smart people in the room to discuss these developments. Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are participants.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe
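To make the Glasgow authors' “probabilistic word finder” point concrete, here is a toy sketch in Python. The words and probabilities are invented for the example: the sampling step that produces each word consults only a distribution over plausible continuations, never a fact-checker.

import random

# Toy sketch of an LLM's sampling step: pick the next word from a
# probability distribution. Numbers are invented for illustration;
# nothing in the mechanism checks whether the output is true.
next_word_probs = {
    "Paris": 0.83,   # frequent in training data, and happens to be true
    "Lyon": 0.09,    # fluent-sounding but false
    "Berlin": 0.08,  # fluent-sounding but false
}
words = list(next_word_probs)
weights = list(next_word_probs.values())
print("The capital of France is", random.choices(words, weights=weights)[0])

Whether the sampled word happens to be true is incidental to the procedure, which is exactly the authors' indifference-to-truth criterion for b******t.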

Business of Tech
Kaspersky Ban, AI Innovations, MSP Programs, and AI Ethics

Business of Tech

Play Episode Listen Later Jun 21, 2024 12:23


Host Dave Sobel discusses the U.S. ban on cybersecurity vendor Kaspersky due to national security risks. The ban prohibits new agreements with U.S. persons and restricts antivirus updates and operations in American IT systems. This decision prompts organizations to reevaluate their security software choices and emphasizes the importance of trust and due diligence in vendor selection.

The episode also covers various tech updates, including Pure Storage's new capabilities, Splunk's expanded AI Assistant features, Google's acquisition of Cameyo for virtualized Windows app support, and Microsoft's acknowledgment of issues with Windows 11 Pro to Enterprise upgrades. These updates reflect the ongoing advancements and challenges in the tech industry, particularly in storage, AI integration, and operating system upgrades.

Additionally, the podcast delves into the emergence of AI-powered solutions in accounting platforms, such as Xero's Just Ask Xero (JAX), and Kaseya's peer program tailored for small MSPs. These developments highlight the growing trend of leveraging AI for enhanced business operations and scalability, catering to the specific needs of smaller businesses and MSPs looking to grow their services and profitability.

The episode concludes with a discussion on the role of AI in addressing customer service challenges, the importance of domain expertise in maximizing AI effectiveness, and the global efforts to prevent American AI domination. These insights underscore the significance of balancing technology with human expertise in cybersecurity and business processes, emphasizing the need for strategic AI implementation and localization to reflect diverse language, political, and cultural contexts.

Three things to know today
00:00 U.S. Bans Kaspersky Products Citing National Security Risks, Effective September 2024
02:44 Pure Storage, Splunk, Google, Microsoft, Xero, and Kaseya
06:31 AI and Automation: The Need for a Customer-Centric Approach in Technology Integration

Supported by:
https://getinsync.ca/mspradio/
https://www.huntress.com/mspradio/

All our Sponsors: https://businessof.tech/sponsors/

Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/

Support the show on Patreon: https://patreon.com/mspradio/

Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessoftech.bsky.social

WIRED Tech in Two
US National Security Experts Warn AI Giants Aren't Doing Enough to Protect Their Secrets

WIRED Tech in Two

Play Episode Listen Later Jun 13, 2024 9:11


Susan Rice, who helped the White House broker an AI safety agreement with OpenAI and other tech companies, says she's worried China will steal American AI secrets. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Faster, Please! — The Podcast

The media is full of dystopian depictions of artificial intelligence, such as The Terminator and The Matrix, yet few have dared to dream up the image of an AI utopia. Nick Bostrom's most recent book, Deep Utopia: Life and Meaning in a Solved World, attempts to do exactly that. Bostrom explores what it would mean to live in a post-work world, where human labor is vastly outperformed by AI, or even made obsolete. When all of our problems have been solved in an AI utopia . . . well, what's next for us humans?

Bostrom is a philosopher and was founding director of the Future of Humanity Institute at Oxford University. He is currently the founder and director of research at the Macrostrategy Research Initiative. He also wrote the much-discussed 2014 book, Superintelligence: Paths, Dangers, Strategies.

In This Episode
* Our dystopian predisposition (1:29)
* A utopian thought experiment (5:16)
* The plausibility of a solved world (12:53)
* Weighing the risks (20:17)

Below is a lightly edited transcript of our conversation.

Our dystopian predisposition (1:29)

Pethokoukis: The Dutch futurist Frederik Polak famously put it that any culture without a positive vision of the future has no future. It's a light paraphrase. And I kind of think that's where we are right now, that despite the title of your book, I feel like right now people can only imagine dystopia. Is that what you think? Do I have that wrong?

Bostrom: It's easier to imagine dystopia. I think we are all familiar with a bunch of dystopian works of fiction. The average person could rattle off Brave New World, 1984, The Handmaid's Tale. Most people couldn't probably name a single utopian work, and even the attempts that have been made, if you look closely at them, you probably wouldn't actually want to live there. It is an interesting fact that it seems easier for us to imagine ways in which things could be worse than ways in which things could be better. Maybe some culture that doesn't have a positive vision has no future but, then again, cultures that have had positive visions also often have ended in tears. A lot of the times utopian blueprints have been used as excuses for imposing coercively some highly destructive vision on society. So you could argue either way whether it is actually beneficial for societies to have a super clear, long-term vision that they are steering towards.

I think if we were to ask people to give a dystopian vision, we would get probably some very picturesque, highly detailed visions from having sort of marinated in science fiction for decades. But then if you asked people about utopia, I wonder if all their visions would be almost alike: kind of this clean, green world, with maybe some tall skyscrapers or something, and people generally getting along. I think it'd be a fairly bland, unimaginative vision.

That would be the idea of “all happy families are alike, but each unhappy family is unhappy in its own unique way.” I think it's easy enough to imagine ways in which the world could be slightly better than it is. So imagine a world exactly like the one we have, except minus childhood leukemia. So everybody would agree that definitely seems better.
The problem is if you start to add these improvements and you stack on enough of them, then eventually you face a much more philosophically challenging proposition, which is, if you remove all the difficulties and all the shadows of human life, all forms of suffering and inconvenience, and all injustice and everything, then you risk ending up in this rather bland future where there is no challenge, no purpose, no meaning for us humans, and it then almost becomes dystopian again, but in a different way. Maybe all our basic needs are catered to, but there seems to be then some other part missing that is important for humans to have flourishing lives.

A utopian thought experiment (5:16)

Is your book a forecast or is it a thought experiment?

It's much more a thought experiment. As it happens, I think there is a non-trivial chance we will actually end up in this condition, which I call a “solved world,” particularly with the impending transition to the machine intelligence era, which I think will be accompanied by significant risks, including existential risk. My previous book, Superintelligence, which came out in 2014, focused on what could go wrong when we are developing machine super intelligence, but if things go right—and this could unfold within the lifetime of a lot of us who are alive on this planet today—if things go right, they could go very right, and, in particular, all kinds of problems that could be solved with better technology could be solved in this future where you have superintelligent AIs doing the technological development. And we might then actually confront the situation where these questions we can now explore as a thought experiment would become pressing practical questions where we would actually have to make decisions on what kinds of lives we want to live, what kind of future we want to create for ourselves, if all these instrumental limitations were removed that currently constrain the choice set that we face.

I imagine the book would seem almost purely a thought experiment before November 2022, when ChatGPT was rolled out by OpenAI, and now, to some people, it seems like these are questions certainly worth pondering. You talked about the impending machine superintelligence—how impending do you think, and what is your confidence level? Certainly we have technologists all over the map speaking about the likelihood of reaching that, maybe through large language models; other people think they can't quite get us there, so how much work is “impending” doing in that sentence?

I don't think we are in a position any longer to rule out even extremely short timelines. We can't be super confident that we might not have an intelligence explosion next year. It could take longer, it could take several years, it could take a decade or longer. We have to think in terms of smeared-out probability distributions here, but we don't really know what capabilities will be unlocked as you scale up even the current architectures one more order of magnitude, like GPT-5-level or GPT-6-level. It might be that, just as the previous steps from GPT-2 to GPT-3 and 3 to 4 sort of unlocked almost qualitatively new capabilities, the same might hold as we keep going up this ladder of just scaling up the current architectures, and so we are now in a condition where it could happen at any time, basically.
It doesn't mean it will happen very soon, but we can't be confident that it won't.

I do think it is slightly easier for people maybe now, even just with looking at the current AI systems, to take these questions seriously, and I think it will become a lot easier as the penny starts to drop that we're about to see this big transition to the machine intelligence era. The previous book, Superintelligence, back in 2014 when that was published—and it was in the works for six years prior—at that time, what was completely outside the Overton window was even the idea that one day we would have machine superintelligence, and, in particular, the idea that there would then be an alignment problem, a technical difficulty of steering these superintelligent intellects so that they would actually do what we want. It was completely neglected by academia. People thought, that's just science fiction or idle futurism. There were maybe a handful of people on the internet who were starting to think about that. In the intervening 10 years, that has changed, and so now all the frontier AI labs have research teams specifically trying to work on scalable methods for AI alignment, and it's much more widely recognized over the last couple of years that this will be a transformative thing. You have statements coming out from leading policy makers from the White House; the UK had this global summit on AI, and so this alignment problem and the risks related to AI have sort of entered the Overton window, and I think some of these other issues as to what the world will look like if we succeed, similarly, will have to come inside the Overton window, and probably will do so over the next few years.

So we have an Overton window, we have this technological advance with machine intelligence. Are you as confident about one of the other pillars of your thought experiment, which is an equally, what might seem, science-futuristic advance in our ability to edit ourselves, to modify ourselves and our brains and our emotions? That seems to go hand-in-hand with the thought experiment.

I think once we develop machine superintelligence, then we will soon thereafter have tremendous advances in other technological areas as well, because we would then not be restricted to humans trying to develop new technologies with our biological brains. But this research and development would be done by superintelligences on digital timescales rather than biological timescales. So the transition to superintelligence would, I think, mean a kind of telescoping of the future.

So there are all these technologies we can see are, in principle, possible. They don't violate the laws of physics. In the fullness of time, probably human civilization would reach them if we had 10,000 years to work on it: all these science fiction staples like space colonies, or cures for aging, or perfect virtual reality, uploading into computers, we could see how we might eventually . . . They're unrealistic given the current state of technology, but there are no (in principle) barriers, so we could imagine developing those if we had thousands of years to work on them. But all those technologies might become available quite soon after you have superintelligence doing the research and development.
So I think we will then start to approximate the condition of technological maturity, a condition where we have already developed most of those general-purpose technologies that are physically possible and for which there exists some in-principle feasible pathway from where we are now to developing them.

The plausibility of a solved world (12:53)

I know one criticism of the book, with this notion of a "solved world" or technological maturity, is that the combinatorial nature of ideas would allow for an almost unlimited number of new possibilities, so in no way could we reach maturity or a technologically solved state of things. Is that a valid criticism?

Well, it is a hypothesis you could entertain that there is an infinite number of ever-higher levels of technological capability, such that you'd never be able to reach or even approximate any maximum. I think it's more likely that there will eventually be diminishing returns. You will eventually have figured out the best way to do most of the general things that need doing: communicating information, processing information, processing raw materials, creating various physical structures, et cetera, et cetera. That happens to be my best guess, but in any case, you could bracket that: we could at least establish lower bounds on the kinds of technological capabilities that an advanced civilization with superintelligence would be able to develop, and we can list out a number of those technologies. Maybe it would be able to do more than that, but at least it would be able to do various things that we can already sort of see and outline how to do; it's just that we can't quite put all the pieces together and carry it out yet.

And the book lists a bunch of these affordances that a technologically mature civilization would at least have, even if maybe there would be further things we haven't even dreamt of yet. And already that set of technological capabilities would be enough to radically transform the human condition, and indeed to present us with some of these basic philosophical challenges of how to live well in this world, where we wouldn't only have a huge amount of control over external reality, we wouldn't only be able to automate human labor across almost all domains, but we would also, as you alluded to earlier, have unprecedented levels of control over ourselves, our biological organism, and our minds, using various forms of biotechnologies or newer technologies.

In this kind of scenario, is the purpose of our machines to solve our problems, or, not to give us problems, but to give us challenges, give us things to do?

It then comes down to questions about value. If we had all of these capabilities to achieve various types of worlds, which one would we actually want? And I think there are layers to this onion, different levels of depth at which one can approach and think about this problem. At the outermost layer you have the idea that, well, we will have increased automation as a result of advances in AI and robotics, and so some humans will become unemployed as a result. At the most superficial layer of analysis, you would then think, "Well, some jobs become unnecessary, so you need to maybe retrain workers to move to other areas where there is continued demand for human labor.
Maybe they need some support whilst they're retraining and stuff like that."

So then you take it a step further, peel off another layer of the onion, and you realize that, well, if AI truly succeeds, if you have artificial general intelligence, then it's really not just some areas of human economic contribution that get affected, but all areas, with a few exceptions that we can return to. AIs could do everything that we can do, and do it better, and cheaper, and more efficiently. And you could say that the goal of AI is full unemployment. The goal is not just to automate a few particular tasks, but to develop a technology that allows us to automate all tasks. That's kind of what AI has always been about; it hasn't succeeded yet, but that's the goal, and we are seemingly moving closer to it. And so, with the asterisk here that there are a few exceptions we can zoom in on, you would then get a kind of post-work condition where there would be no need for human labor at all.

My baseline—I think this is a reasonable baseline—is that the history of technology is a history of both automating things and creating new things for us to do. So I think if you ask just about any economist, they will say that that should be our guide for the future: that this exact same technology will think of new things for people to do, that we, at least up to this point, have shown infinite creativity in creating new things to do, and, whether or not you want to call those things "work," there are certainly things for us to do, so boredom should not be an issue.

So there's a further question of whether there is anything for us to do, but if we just look at the work part first: are there ways for humans to engage in economically productive labor? So far, what has been the case is that various specific tasks have been automated, so instead of having people digging ditches using their muscles, we can have bulldozers digging ditches, and you could have one guy driving the bulldozer and doing the work of 50 people with shovels. And so human labor has kind of just been moving out of the areas where you can automate it and into other areas where we haven't yet been able to automate it. But if AIs are able to do all the things that we can do, then there would be no further place, it would seem, at least at first sight, for human workers to move into. The exceptions to this, I think, are cases where the consumer cares not just about the product, but about how the product . . .

They want that human element.

You could have consumers with just a raw preference that a particular task was performed by humans or a particular product—just as consumers now sometimes pay a little premium if a little gadget was produced by a politically favored group, or maybe handcrafted by indigenous people; we may pay more for it than if the same object was made in a sweatshop in Indonesia or something. Even if the actual physical object itself is equally good in both cases, we might care about the causal process that brought it into existence. So to the extent that consumers have those kinds of preferences, there could remain ineliminable demand for human labor, even at technological maturity. You could think of possible examples: Maybe we just prefer to watch human athletes compete, even if robots could run faster or box harder. Maybe you want a human priest to officiate at your wedding, even if a robot could say the same words with the same intonations and the same gestures, et cetera.
So there could be niches of that sort where there would remain demand for human labor, no matter how advanced our technology.

Weighing the risks (20:17)

Let me read one friendly critique of the book from Robin Hanson:

Bostrom asks how creatures very much like him might want to live for eons if they had total peace, vast wealth, and full eternal control of extremely competent AI that could do everything better than they. He . . . tries to list as many sensible possibilities as possible . . .

But I found it . . . hard to be motivated by his key question. In the future of creatures vastly more capable than us I'm far more interested in what those better creatures would do than what a creature like me now might do there. And I find the idea of creatures like me being rich, at peace, and in full control of such a world quite unlikely.

Is the question he would prefer you answer unanswerable, so that the only question you can answer is what people like us would be like?

No, I think there are several different questions, each of which is interesting. In some of my other work, I do, in fact, investigate what other creatures, non-human creatures, the digital minds we might be building (for example, AIs of different types) might want, and how one might think about what would be required for the future to go well for these new types of being that we might be introducing. I think that's an extremely important question as well, particularly from a moral point of view. It might be that, in the future, most inhabitants of the world will be digital minds or AIs of different kinds. Some might operate at scales far larger than us human beings.

In this book, though, the question I'm primarily interested in is: if we are interested in it from our own perspective, what is the best possible future we could hope for for ourselves, given the values that we actually have? And I think that could be practically relevant in various ways. There could, for example, arise situations where we have to make trade-offs about delaying the transition to AI, with the risk maybe going up or down depending on how long we take, and, in the meantime, people like us dying as a result of aging and disease and all the other things that currently kill people.

So what are the different risk trade-offs we are willing to take? And that might depend, in part, on how much better we think our lives could be if this goes well. If the best we could hope for was just continuing our current lives for a bit longer, that might be a different choice situation than if there was actually something on the table that would be super desirable from our current point of view; then we might be willing to take bigger risks to our current lives if there was at least some chance of achieving this much better life. And I think those questions, from a prudential point of view, we can only try to answer if we have some conception of how good the potential outcome would be for us.
But I agree with him that both of these questions are important.

It also seems to me that, initially, there was a lot of conversation after the rollout of ChatGPT about existential risk; we were talking about an AI pause. And I feel like the pendulum has swung completely to the other side: whether it's because people don't want to miss out on all the good stuff that AI could create, or because they're worried about Chinese AI beating American AI, the default mode that we're in right now is full speed ahead, and if there are problems we'll just have to fix them on the fly, but we're just not going to have any substantial way to regulate this technology, other than, perhaps, the most superficial of guardrails. I feel like that's where we're at now; at least, that's what it feels like in Washington right now.

Yeah, I think that has been the default mode of AI development since its inception, and still is today, predominantly. The difficulty is actually getting the machines to do more, rather than limiting what they're allowed to do. That is still the main thrust. I do think, though, that the first derivative of this is towards increased support for various kinds of regulations and restrictions, and even a growing number of people calling for an "AI pause" or wanting to stop AI development altogether. This used to be basically completely fringe . . . there were no real serious efforts to push in this direction for almost all the decades of AI up until maybe two years ago or so. Since then there has been an increasingly vocal, though still minority, set of people who are trying hard to push for increased regulation, for slowing down, and for raising the alarm about AI developments. And I think it remains an open question how this will unfold over the coming years.

I have a complex view on what would actually be desirable here. On the one hand, I do think there are these significant risks, including existential risks, that will accompany the transition. When we develop superintelligent machines, it's not just one more cool gadget, right? It's the most important thing ever to happen in human history, and they will be to us as we are to chimpanzees or something—potentially a very powerful force, and things could go wrong there. So I do agree with the c . . .

So I've been told over the past two years!

And to the point where some people think of me as a kind of doomsayer or anti-AI, but that's not the full picture. I think, ultimately, it would be a catastrophe if superintelligence was never developed, and that we should develop this, ideally carefully. And it might be desirable if, at a critical point, just when we figure out how to make machines superintelligent, whoever is doing this, whether it's some private lab or some government Manhattan Project, has the ability to go a little bit slow at that point: maybe pause for six months or, rather than immediately cranking all the knobs up to 11, do it incrementally, see what happens, make sure the safety mechanisms work. I think that might be more ideal than a situation where you have, say, 15 different labs all racing to get there first, where whoever takes any extra precautions just immediately falls behind and becomes irrelevant. I think that would seem . . .

I feel like where we're at right now—I may have answered this differently 18 months ago—is that second scenario.
At least here in the United States, and maybe I'm too Washington-centric, but I feel we're at the "crank it up to 11" phase, realistically.

Well, we have seen the first-ever real AI regulations coming on board. It's something rather than nothing, and you could easily imagine, if pressure continues to build, there being more demand for this, and then, if you have some actual adverse event, some bad thing happening, then who knows? There are other technologies that have been stymied because of . . . human cloning, for example, or nuclear energy in many countries. So it's not unprecedented that society could convince itself that a technology is bad. So far, historically, all these technology bans and relinquishments have probably been temporary, because there have been other societies making other choices, and each generation is, to some extent, like a new roll of the die, and eventually you get . . .

But it might be that we already have, in particular with AI, technologies that, if fully deployed, could allow a society within a few years to lock itself into some sort of permanent orthodoxy. Imagine deploying even current AI systems fully to censor dissenting information: some huge stigmatization of AI where it becomes simply taboo to say anything positive about it, and then very efficient ways of enforcing that orthodoxy, by shadow banning people who dissent, or canceling them, or surveilling anybody to make sure they don't do any research on AI. The technology to freeze in a temporary social consensus might be emerging. And so if, 10 years from now, there were a strong global consensus on some of these issues, then we can't rule out that it would become literally permanent. My guess is that the optimal level of government oversight and regulation would be more than we currently have, but I do worry a little bit that it won't increase to the optimal point and then stop there; once the avalanche starts rolling, it could overshoot the target and result in a problem. To be clear, I still think that's unlikely, but I think it's more likely than it was two years ago.

In 2050, do you feel like we'll be on the road to deep utopia or deep dystopia?

I hope the former; I think both are still in the cards, for all we know. There are big forces at play here. We've never had a machine intelligence transition before. We don't have the kind of social or economic predictive science that really allows us to say what will happen to political dynamics as we change these fundamental parameters of the human condition. We don't yet have a fully reliable solution to the problem of scalable alignment. I think we are entering uncharted territory here, and both extremely good and extremely bad outcomes are possible, and we are a bit in the dark as to how all of this will unfold.

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

The Dynamist
California Comes for AI w/ Brian Chau & Dean Ball

The Dynamist

Play Episode Listen Later May 21, 2024 59:46


When it comes to AI regulation, states are moving faster than the federal government. While California is the hub of American AI innovation (Google, OpenAI, Anthropic, and Meta are all headquartered in the Valley), the state is also poised to enact some of the strictest state regulations on frontier AI development. Introduced on February 8, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) is a sweeping bill that would create a new regulatory division and require companies to demonstrate that their tech won't be used for harmful purposes, such as building a bioweapon or aiding terrorism.

SB 1047 has generated intense debate within the AI community and beyond. Proponents argue that robust oversight and safety requirements are essential to mitigate the catastrophic risks posed by advanced AI systems. Opponents contend that the scope is overbroad and that the compliance burdens and legal risks will advantage incumbent players over smaller and open-source developers.

Evan is joined by Brian Chau, Executive Director of Alliance for the Future, and Dean Ball, a research fellow at the Mercatus Center and author of the Substack Hyperdimensional. You can read Alliance for the Future's call to action on SB 1047 here. And you can read Dean's analysis of the bill here. For a counterargument, check out a piece by AI writer Zvi Mowshowitz here.

The Artificial Intelligence Podcast
AI Data Centers to Rival New US Solar Farms in Energy Consumption

The Artificial Intelligence Podcast

Play Episode Listen Later May 16, 2024 3:44


The energy consumption of American AI data centers is expected to rival the output of new US solar farms, driven by the significant energy required to operate generative AI like ChatGPT. Estimating AI energy usage is complex, but Goldman Sachs' analysis of Virginia's commercial power consumption suggests a 37% increase from 2016 to 2023, likely due to data center growth. AI data centers currently account for 0.5% of US power demand, but this is expected to triple by 2030, driven by increasing AI use, electric vehicles, and industrial electrification, raising concerns about the environmental impact. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message

Off the Record on the Rocks
E79: Have you Been Involved with Artificial Intelligence Managing Truth/Trust Layers?

Off the Record on the Rocks

Play Episode Listen Later Apr 11, 2024 42:28


The back of a dollar bill says "In God We Trust"... if our fiat economy's truth layer is managed by an all-seeing deity, why can't the blockchain be managed by artificial intelligence? The immutable ledger is but a data set, requiring governance and trust to be validated. AI is the filter on that data set… after all, isn't reality just what happened yesterday? Maybe God.ai is just what the crypto community ordered? An American AI, steeped in separation of Church & State rhetoric, but actually just a government-funded trust layer, filtering and grooming your interpretations!

From the New World
Jake Denton: The Closing of American AI

From the New World

Play Episode Listen Later Apr 8, 2024 69:40


Find Jake:
https://twitter.com/RealJDenton
https://www.heritage.org/staff/jake-denton
https://www.compactmag.com/article/big-techs-a-i-power-grab/

Alliance for the Future:
https://www.affuture.org/contact/

Email me: chau [at] affuture [dot] org

Mentioned in the episode:
https://www.fromthenew.world/p/google-geminis-woke-catechism
https://www.piratewires.com/p/google-gemini-race-art
https://www.city-journal.org/article/leaving-gemini-in-the-dust

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.fromthenew.world/subscribe

Uncovering Anomalies Podcast (UAP)
Uncovering Anomalies Podcast (UAP) : Episode 60 - "Balloons All the Way Down!"

Uncovering Anomalies Podcast (UAP)

Play Episode Listen Later Feb 26, 2024 138:14


In Episode 60, "Balloons All the Way Down!" Adam and Topher delve into a fascinating array of topics that challenge our understanding of the world and beyond:

Free Julian Assange! Listen to Julian Assange here.
'I wouldn't call them aliens, I really like what Grusch calls them, he says they're interdimensional beings' - Anna Paulina Luna on UAPs here.
Florida Congresswoman says she 'absolutely believes' UFOs are 'not of human origin' - after private briefings by military here.
Black budget programs, foreign adversaries and misidentified objects can't explain all UAP here.
The Non-human Intelligence (NHI) Invasion? here.
Why Aliens May Consider Disclosure A Hostile Act with Dr. John Brandenburg here.
NPI Announces Citizens for Disclosure, a Grassroots UAP Transparency Initiative here.
Danny Sheehan Interview here.
Danny Sheehan links for disclosure grass roots and certification.
Sept. 1953, INS Newswire: USMC Aviator Maj. Donald Keyhoe "claimed today that the Air Force has movies to prove that 'flying saucers' are craft from another planet" here.
UFO in Ukraine here.
UFO - Rotating purple lights - triangular craft here.
Cigar shaped UFO here.
USOs here.
The Danger of UFOs Is Not What You Think - Psychology Today here.
National Archives UAP main link here.
Zodiac - national archives here.
China Has Designed A New Stealth Aircraft That Uses Plasma here.
The left photo is a still of the Iraqi Jellyfish UFO. The right photo is ancient cave paintings from the Midwest here.
Private company to land spacecraft on moon here.
Space Craft landed here. It's lop-sided here.
T. Townsend Brown - Jesse Michels: The Man Who Built UFOs For The CIA (Not Bob Lazar) here.
The Obelisks! here.
Speaking of CV: There are exactly 33 adenines at the end of SARS-CoV-2 here.
Speaking about nanotechnology here.
Google's push to lecture us on diversity goes beyond AI here.
Google Gemini is racist here.
Speaking about AI: OpenAI is looking to build an AI chip empire here.
Truth about the open border here.
Speaking about the border: Protestors Deliver 300 Pounds of Poop to Nancy Pelosi's House here.
WHO claiming they did not impose any mandates (laughable) here.
Russian Psychics here.
It gets crazier the more you look! - cops lol here.
Comedy: An Average Day for an American - AI renderings here.
Comedy: In the after-life, a man reunites with his wife….and ex-wives. - AI Renderings here.
Comedy: Dog becomes increasingly well trained here.
Comedy: Zelenskyy art, impersonation here.

Join us as we explore the complex tapestry of UAP research, insider perspectives, and the ongoing quest for truth in a world of shadows and light.

*********

UAP is sponsored by Qinneba (formerly the CBD Online Store), home of the best CBD gummies, tinctures, creams, vapes, and smokes. All independently tested for purity and potency.
Subscribe to our Podcast here, on Twitter here, and follow Topher here.

*********

--- Support this podcast: https://podcasters.spotify.com/pod/show/uncovering-anomalies-podcast/support

The Optimistic American
Boomers vs. Doomers: The Battle for American AI Dominance

The Optimistic American

Play Episode Listen Later Dec 23, 2023 81:03


In this episode of the Optimistic American, our distinguished guests, Governor Doug Ducey and Professor Henry Thompson, discuss transformative change. This conversation serves as a perfect example of the intersection where informed, patriotic discussions meet the optimism that defines our spirit.

Unleashing the Potential of AI: We shed light on the intersection of technology and governance, specifically examining the new executive order on AI. The discussion revolves around the potential pitfalls and triumphs of regulating such a rapidly evolving industry, demonstrating our belief in the boundless potential of innovative American minds.

Government and Innovation: We acknowledge the vital role governments play, but also question whether overregulation could put a damper on the entrepreneurial spirit that propels America forward. This reflects our enduring commitment to balanced governance that enables, not stifles, progress.

The Boomers vs. Doomers: Governor Ducey and Professor Thompson share their insights into the dynamic between boomers and doomers, giving us a fascinating perspective on intra-societal discourse. It's a testament to our belief in the value of open, respectful dialogue among all citizens.

Historical Optimism: The dialogue embraces lessons from history, indicating that despite trying times, the human spirit has always risen above challenges. Reflecting our enduring optimism, we venture to say that there's much ahead to be hopeful about as we approach 2024.

Let's shape the future of the American spirit together. The American dream still burns bright in all of us.

You can listen to this episode on Spotify, Apple Podcasts, or Google Podcasts!
Spotify: https://optamlink.com/spotify
Apple Podcasts: https://optamlink.com/apple
Google Podcasts: https://optamlink.com/google

We post new content every week so make sure to subscribe to stay in the loop. Learn more about The Optimistic American by checking out our website! https://www.optamerican.com

The Cyberlaw Podcast
The Brussels Defect: Too Early is Worse Than Too Late. Plus: Mark MacCarthy's Book on ”Regulating Digital Industries.”

The Cyberlaw Podcast

Play Episode Listen Later Nov 14, 2023 60:44


That, at least, is what I hear from my VC friends in Silicon Valley. And they wouldn't get an argument this week from EU negotiators facing what looks like a third rewrite of the much-too-early AI Act. Mark MacCarthy explains that negotiations over an overhaul of the act demanded by France and Germany led to a walkout by EU parliamentarians. The cause? In their enthusiasm for screwing American AI companies, the drafters inadvertently screwed a French and a German AI aspirant.

Mark is also our featured author for an interview about his book, "Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech." I offer to blurb it as "an entertaining, articulate and well-researched book that is egregiously wrong on almost every page." Mark promises that at least part of my blurb will make it to his website. I highly recommend it to Cyberlaw listeners who mostly disagree with me – a big market, I'm told.

Kurt Sanger reports on what looks like another myth about Russian cyberwarriors – that they can't coordinate cyberattacks with kinetic attacks to produce a combined effect. Mandiant says that's exactly what Sandworm hackers did in Russia's most recent attack on Ukraine's grid.

Adam Hickey, meanwhile, reports on a lawsuit over internet sex that drove an entire social media platform out of business. Meanwhile, Meta is getting beat up on the Hill and in the press for failing to protect teens from sexual and other harms. I ask the obvious question: Who the heck is trying to get naked pictures of Facebook's core demographic?

Mark explains the latest EU rules on targeted political ads – which consist of several perfectly reasonable provisions combined with a couple designed to cut the heart out of online political advertising.

Adam and I puzzle over why the FTC is telling the U.S. Copyright Office that AI companies are a bunch of pirates who need to be pulled up short. I point out that copyright is a multi-generational monopoly on written works. Maybe, I suggest, the FTC has finally combined its unfairness and its anti-monopoly authorities to protect copyright monopolists from the unfairness of Fair Use. Taking an indefensible legal position out of blind hatred for tech companies? Now that I think about it, that is kind of on-brand for Lina Khan's FTC.

Adam and I disagree about how seriously to take press claims that AI generates images that are biased. I complain about the reverse: AI that keeps pretending that there are a lot of black and female judges on the European Court of Justice.

Kurt and Adam reprise the risk to CISOs from the SEC's SolarWinds complaint – and all the dysfunctional things companies and CISOs will soon be doing to save themselves.

In updates and quick hits:

Adam and I flag some useful new reports from Congress on the disinformation excesses of 2020. We both regret the fact that those excesses now make it unlikely the U.S. will do much about foreign government attempts to influence the 2024 election.

I mourn the fact that we won't be covering Susannah Gibson again. Gibson raised campaign funds by doing literally what most politicians only do metaphorically. She has gone down to defeat in her Virginia legislative race.

In Cyberlaw Podcast alumni news, Alex Stamos and Chris Krebs have sold their consulting firm to SentinelOne. They will only be allowed back on the podcast if they bring the Gulfstream.

I also note that Congress is finally starting to put some bills to renew section 702 of FISA into the hopper. Unfortunately, the first such bill, a merger of left and right extremes called the Government Surveillance Reform Act, probably should have gone into the chipper instead.

Download 481st Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

The Dynamist
Episode 41: Chip Wars, China, & Compute Governance w/ Onni Aarne & Erich Grunewald

The Dynamist

Play Episode Listen Later Oct 31, 2023 39:41


Recently, the Biden Administration announced further restrictions on the types of semiconductors that American companies can sell to China. The move is aimed at preventing American AI technology from benefiting Chinese military applications. While heralded by many as a necessary move to protect U.S. national security, how will the move affect Sino-American relations, and how will China respond? Could China simply "smuggle" the chips to avoid U.S. restrictions, or will the move spur China to race to develop more chips domestically? Could China simply access the computing power it needs through "the cloud?" Evan is joined by Onni Aarne and Erich Grunewald of the Institute for AI Policy and Strategy, which works to reduce risks related to the development and deployment of frontier AI systems. You can read Erich's report on chip smuggling here.

Ransquawk Rundown, Daily Podcast
US Market Open: Stocks slip and bonds are bid while crude surges amid weekend geopolitical risks

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later Oct 13, 2023 4:06


European bourses and US equity futures have been tilting lower since the cash open despite a lack of major headlines, with traders cognizant of geopolitical risks heading into the weekend.

Dollar loses some inflation momentum as DXY slips from a double top into a softer 106.510-280 range; T-note is nudging new peaks amidst a return to bull-flattening, regardless of a poor long bond auction continuing the run of weak sales this week.

Crude futures are on the grind higher on Friday despite a lack of fresh fundamentals, with traders likely wary to bet against crude heading into the weekend as geopolitical risk remains at the forefront for the complex.

US eyes closing a loophole that gives Chinese companies access to American AI chips via units located overseas, according to Reuters sources.

Israel's ground military is building up on the Gaza Strip border; Iran's Foreign Minister said the continuation of war crimes will receive a response from the rest of the axes.

Looking ahead, highlights include US Trade Balance & University of Michigan Inflation Prelim, speeches from Fed's Harker & ECB's Lagarde, and earnings from JPMorgan, Wells Fargo, Citi, PNC Financial Services & Progressive Corp.

Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

Ransquawk Rundown, Daily Podcast
Europe Market Open: Asian stocks declined amid headwinds from US CPI, while the region also digested softer-than-expected inflation and mixed trade data from China

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later Oct 13, 2023 3:30


APAC stocks were mostly lower amid headwinds from the US, while the region also digested softer-than-expected inflation and mixed trade data from China.

European equity futures are indicative of a lower open, with the Euro Stoxx 50 future -0.3% after the cash market closed down 0.1% yesterday.

DXY remains on a 106 handle, EUR/USD lingers around 1.0550, GBP reclaimed 1.22 vs. the USD.

US eyes closing a loophole that gives Chinese companies access to American AI chips via units located overseas, according to Reuters sources.

Looking ahead, highlights include EZ Industrial Production, US Trade Balance & University of Michigan Inflation Prelim, Fed's Harker & ECB's Lagarde, and earnings from UnitedHealth, JPMorgan, BlackRock, Wells Fargo, Citi, PNC Financial Services & Progressive Corp.

Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

Eye On A.I.
#129 Alexandra Geese: Demystifying AI Regulations in Europe & Beyond

Eye On A.I.

Play Episode Listen Later Jul 12, 2023 58:16


Welcome to episode #129 of Eye on AI with Alexandra Geese. Navigating the complex waters of the European Union's AI Act is no simple task. Yet, that's exactly what Alexandra Geese, a member of the European Parliament, and I venture to do in this conversation. Alexandra's insights into the AI Act, its four defined categories of AI applications, and its current negotiation phase with the European Council and the European Commission are illuminating. We delve into the Act's essential mission: ensuring AI serves humanity, while also exploring the influence of powerful players in the AI industry on the EU's legislation.

As our journey deepens, we tackle a range of crucial issues underpinning the Act. Alexandra and I navigate through potential economic implications for those who rely on copyright legislation, and the risk of Europe falling behind in AI implementation if the Act is too restrictive. We also touch on the involvement of major American AI firms in the Act's finalization process, and implications for copyrighted material. We dive into the ongoing debates shaping the legislation and the enforcement of the law, once passed. Alexandra shares her thoughts on potential fines for violations, different AI zones, and the possibility of the US following Europe's lead in AI legislation. We wrap up with a deep reflection on the environmental impact of AI, the power held by a few companies, and our collective responsibility as AI reshapes the world.

(00:00) Preview
(00:52) Introduction
(03:10) Alexandra Geese's background in digital legislation
(04:00) The AI Act: the explanation and details
(08:00) The foundations of corporations for AI regulation
(13:00) Copyright regulation and impacts on creativity
(17:00) We need AI that serves humanity
(21:30) Are foundation models high risk to society?
(25:00) Should people be worried about investing in AI?
(30:45) What is dynamic AI regulation?
(36:10) What is the timeline for AI regulation?
(38:50) What penalties will be applied to AI regulation?
(44:30) Will US & EU merge on AI regulation?
(50:30) How to solve AI hallucinations

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Found is a show about founders and company-building that features the change-makers and innovators who are actually doing the work. Each week, TechCrunch Plus reporters Becca Szkutak and Dom-Madori Davis talk with a founder about what it's really like to build and run a company—from ideation to launch. They talk to founders across many industries, and their conversations often lead back to AI as many startups start to implement AI into what they do. New episodes of Found are published every Tuesday and you can find them wherever you listen to podcasts. Found podcast: https://podlink.com/found

Artificial Intelligence and You
131 - Guest: Handel Jones, Sino-American AI Strategist, part 2

Artificial Intelligence and You

Play Episode Listen Later Dec 19, 2022 34:52


This and all episodes at: https://aiandyou.net/ .   The AI arms race between China and the United States continues to heat up following China's declaration that they intend to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and consulting for International Business Strategies for over 30 years, supporting governments and corporations globally, analyzing technology and predicting corporate and government strategy and market trends. His new book is When AI Rules the World: China, the US, and the Race to Control a Smart Planet, and so he is just the person to tell us what's happening with AI in China. This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In the second half of the interview we discuss China's development of its transportation infrastructure, developments in space, and different attitudes towards AI development between China and the West. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Artificial Intelligence and You
130 - Guest: Handel Jones, Sino-American AI Strategist, part 1

Artificial Intelligence and You

Play Episode Listen Later Dec 12, 2022 29:04


This and all episodes at: https://aiandyou.net/ .   The AI arms race between China and the United States continues to heat up following China's declaration that they intend to lead the world in all aspects of AI by 2030. Handel Jones has over 50 years of experience in the electronics industry and consulting for International Business Strategies for over 30 years, supporting governments and corporations globally, analyzing technology and predicting corporate and government strategy and market trends. His new book is When AI Rules the World: China, the US, and the Race to Control a Smart Planet, and so he is just the person to tell us what's happening with AI in China. This interview will be of enormous use to anyone who is in or adjacent to international relations, educational strategies, or microcomputer technology supply chains. In part 1 we discuss Chinese attitudes towards privacy and surveillance, their education strategy for AI, the impact of the recent sanctions on both their plans for Taiwan and the economic outlook in the West, and differences in patterns of innovation between the West and China.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

The Gradient Podcast
Matt Sheehan: China's AI Strategy and Governance

The Gradient Podcast

Play Episode Listen Later Nov 3, 2022 66:30


* Have suggestions for future podcast guests (or other feedback)? Let us know here!
* Want to write with us? Send a pitch using this form :)

In episode 47 of The Gradient Podcast, Daniel Bashir speaks to Matt Sheehan. Matt is a fellow at the Carnegie Endowment for International Peace, where he researches global technology with a focus on China. His writing and research explores China's AI ecosystem, the future of China's technology policy, and technology's role in China's political economy. Matt has also written for Foreign Affairs and The Huffington Post, among other venues.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:28) Matt's path to analyzing China's AI governance
* (06:50) Matt's experience understanding daily life in China and developing a bottom-up perspective
* (09:40) The development of government constraints in technology/AI in the US and China
* (12:40) Matt's take on China's priorities and motivations
* (17:00) How recent history influences China's technology ambitions
* (17:30) Matt gives an overview of the Century of Humiliation
* (22:07) Adversarial perceptions, Xi Jinping's brashness and its effect on discourse about International Relations, how this intersects with AI
* (24:40) Self-reliance and semiconductors. Was the recent chip ban the right move?
* (36:15) Matt's question: could foundation models be trained on trailing edge chips if necessary? Limitations
* (38:30) Silicon Valley and China, The Transpacific Experiment and stories
* (46:17) 躺平 ("lying flat") and how trends among youth in China interact with tech development, parallel trends in the US, work culture
* (51:05) China's recent AI governance initiatives
* (56:25) Squaring China's AI ethics stance with its use of AI
* (59:53) The US can learn from both Chinese and European regulators
* (1:02:03) How technologists should think about geopolitics and national tensions
* (1:05:43) Outro

Links:
* Matt's Twitter
* China's influences/ambitions
* Beijing's Industrial Internet Ambitions
* Beijing's Tech Ambitions: What Exactly Does It Want?
* US-China exchange and US responses
* Who benefits from American AI research in China?
* Two New Tech Bills Could Transform US Innovation
* Fear of Chinese Competition Won't Preserve US Tech Leadership
* China's tech standards, government initiatives and regulation in AI
* How US businesses view China's growing influence in tech standards
* Three takeaways from China's new standards strategy
* China's new AI governance initiatives shouldn't be ignored
* Semiconductors
* Biden's Unprecedented Semiconductor Bet (a new piece from Matt!)
* Choking Off China's Access to the Future of AI

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Generation Digital Workforce
192. Powering Governments with AI

Generation Digital Workforce

Play Episode Listen Later Oct 13, 2022 36:00


In this episode, host Michael Marchuk discusses the usage of AI and Intelligent Automation with Dr. Al Naqvi, president of American AI and author of several books on artificial intelligence. Dr. Naqvi discusses the unique environment that governments have for fostering innovation in AI related to their relationship with academia and private sector entities.

Here's what we talked with Dr. Naqvi about:
- How AI and automation are essential for government operations
- Where government competition drives innovation
- How private sector companies engage to support innovation with AI and automation

Dr. Al Naqvi's latest book - At the Speed of Irrelevance: How America Blew Its AI Leadership Position and How to Regain It https://www.amazon.com/At-Speed-Irrelevance-Leadership-Position-ebook/dp/B0B82X3YZM

To ensure that you never miss an episode of Transform NOW, be sure to subscribe!

OpenTreasury
Drought and the economy - (TREASURY NEWS)

OpenTreasury

Play Episode Listen Later Aug 26, 2022 23:02


Pushpendra Mehta meets with Craig Jeffery, Managing Partner of Strategic Treasurer, to review the latest treasury news and developments. Topics of discussion include the following:
- Russia offering SCO countries connection to its version of SWIFT
- Multinational companies developing contingency plans in the event of a US-China military conflict
- UBS investing in BigPanda, an American AI company
- Lending rates lowered again in China
- BlackRock warning Wall Street watchdog that new ESG rules could harm investors
- Drought hurting China's economy
- The Rhine shrinking, adding to Germany's ongoing economic woes

Big Tech
Hong Shen on How Tech Really Works behind the Great Firewall

Big Tech

Play Episode Listen Later Jul 8, 2021 31:21


Western democracies and tech companies have long painted the Chinese tech sector as not only a threat to the US sector but also as operating in direct conflict with American companies. They say that China is walled off from the rest of the world, that these tech companies are just an extension of the state, and that they create and promote state surveillance and censorship tools. While China, the country, isn't completely innocent — there are clear examples of state interventions and human rights abuses — this episode's guest argues that a Western-centric framing of how the sector operates isn't quite accurate.In this episode of Big Tech, Taylor Owen speaks with Hong Shen, a systems scientist at the Human-Computer Interaction Institute at Carnegie Mellon University and author of Alibaba: Infrastructuring Global China. Shen's research focuses on the global internet industry and the social implications of algorithmic systems, with an emphasis on China. Shen explains how China's tech sector is not walled off from the rest of the world but instead highly integrated with it. International venture capital has been flowing into the Chinese tech sector for years. And the artificial intelligence (AI) development that is popularly depicted as an “us” vs “them” arms race is in reality better described as a production chain, with Chinese companies providing the labour to develop American AI systems. 

AI Business Podcast
Voice as a service

AI Business Podcast

Play Episode Listen Later Jun 1, 2021 34:05


This week, we cover the chaotic developments around synthetic voices, their generation, and their ownership. We start with the news about Marvel.ai, the new service from American AI vendor Veritone that promises to enable celebrities to monetize their voices. The company calls this Voice-as-a-Service, or VAAS. The main problem with synthetic voices is that it's currently challenging (well, pretty much impossible) to enforce copyright for an AI model based on voice recordings of a real person. Cue countless examples of Internet denizens misusing voices with no apparent retribution – from the mixes produced by British experimental musician and campaigner Cassetteboy, who makes Boris Johnson say things like "you can tell our technology's going well, we're running this whole thing in Excel," to more recent examples of sound clips created using Uberduck.ai, and a variety of quickly, cheaply synthesized celebrity voices. These include the version of Sir Patrick Stewart you've heard opening the show.

In other news, our own Ben Wodecki is now 25! Treasure your youth, Ben.

We also cover: Hatsune Miku! Impersonators! TikTok! Cameo! Stephen Hawking! Auto-Tune!

Every like we receive goes towards helping struggling podcast producers. As always, you can find the people responsible for the circus podcast online:
Max Smolaks (@maxsmolax)
Sebastian Moss (@SebMoss)
Tien Fu (@tienchifu)
Ben Wodecki (@benwodecki)

National Security Law Today
Artificial Intelligence, National Security Law and Ethics

National Security Law Today

Play Episode Listen Later Apr 1, 2021 45:27


The National Security Commission on Artificial Intelligence has said, "the development of AI will shape the future of power." AI is coming and coming hard. The meaningful application of law and ethics will help determine whether we maximize the opportunities and minimize and mitigate the risks. Law and ethics will, or could and should, distinguish democratic and American AI from authoritarian applications of AI. Law and ethics will bind like-minded alliances in the AI field and help to build and sustain public trust and support for appropriate AI applications. The converse is also likely. If, for example, the public does not trust the government's use of AI because of certain facial recognition applications, it may not trust the government with using AI to facilitate contact tracing amidst a pandemic. This session will consider the ethical use of AI in national security decision-making, including: (1) the use of predictive algorithms; (2) potential AI decision-making redlines and permits; and (3) what it is national security lawyers should know and should ask about AI before it is used to inform and execute national security decisions.

Corin Stone is a Scholar-in-Residence at American University's Washington College of Law: https://www.wcl.american.edu/community/faculty/profile/cstone/bio
Hon. James E. Baker is the Director of the Institute of Security Policy and Law at Syracuse University: http://law.syr.edu/profile/the-hon.-james-e.-baker

References:
- James E. Baker, The Centaur's Dilemma: National Security Law for the Coming AI Revolution. Brookings Institution Press, 2020. Introduction: https://www.americanbar.org/content/dam/aba/administrative/law_national_security/centaurs-dilemma-introduction.pdf Chapter 10: https://www.americanbar.org/content/dam/aba/administrative/law_national_security/centaurs-dilemma-chapter-10.pdf
- Office of the Director of National Intelligence, "Artificial Intelligence Ethics Framework for the Intelligence Community." June 2020. https://www.dni.gov/files/ODNI/documents/AI_Ethics_Framework_for_the_Intelligence_Community_10.pdf
- Ashley Deeks, "Predicting Enemies," 104 Virginia Law Review 1529 (2018). https://www.virginialawreview.org/wp-content/uploads/2018/12/104VaLRev-2.pdf
- Department of Defense Ethical Principles for Artificial Intelligence: https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
- ABA Model Rules of Professional Conduct and Comments: https://www.americanbar.org/content/dam/aba/administrative/law_national_security/model-rules-ai-webinar.pdf
- "Principled Artificial Intelligence – Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center for Internet & Society at Harvard University. Jan. 2020: https://cyber.harvard.edu/publication/2020/principled-ai

The Market Marauder Show
Episode 4: Big Mergers Monday!

The Market Marauder Show

Play Episode Listen Later Sep 14, 2020 15:35


Today, 9/14/20, we have seen a large number of mergers in the market. First we have Oracle winning the contract to partner with the popular social media app TikTok. This comes as Microsoft, with Walmart, was declined as a potential buyer. Also we have Nvidia acquiring Arm from SoftBank; this is a HUGE step forward for American AI research and could make the company one of the leaders in that sector. Next we have Gilead Sciences to acquire Immunomedics. This merger means that they will now have access to the great breast cancer and tumor research that Immunomedics was doing. Lastly we have Merck to acquire Seattle Genetics; this move will greatly help the company moving forward. Apple's Apple Event will be tomorrow, 9/15, and it will be the debut of the world's first 5G iPhone; this will be a large event in the company's history and one to watch. On 9/22 Tesla will be having their Battery Day at the Fremont factory, which will be a groundbreaking event for the company and the EV (electric vehicle) world as a whole.

Artificial Intelligence in Industry with Daniel Faggella
What AI Readiness Really Means - with Tim Estes of Digital Reasoning

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Aug 27, 2020 25:33


The great and brilliant Tim Estes returns for another conversation, this time focused on AI readiness. Tim is Founder & CEO of Digital Reasoning, an American AI company that raised over $130 million to bring cognitive computing services to intelligence agencies, financial institutions, and healthcare organizations. This episode's topics include what AI readiness means, ground rules, and AI readiness components, including data, expertise, and in-house talent. Want to learn more about how to implement AI in your organization? Download Emerj's guide: emerj.com/beg1

Artificial Intelligence in Industry with Daniel Faggella
Selling AI - Lessons from the Trenches - With Tim Estes of Digital Reasoning

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jun 11, 2020 25:05


Discover how to nurture alignment that closes AI deals, as we interview Tim Estes, Founder & CEO of Digital Reasoning, an American AI company that raised over $130 million to bring cognitive computing services to intelligence agencies, financial institutions, and healthcare organizations. Whether selling or buying, technology leaders need to ask the right questions before purchasing or implementing AI solutions. In this episode, Tim offers his valuable perspective on how to match AI solutions with real business problems, how to estimate and quickly calibrate ROI, and how to effectively communicate with decision-makers. Discover our full range of high-ROI use cases for AI with Emerj Plus: emerj.com/p1

info@theworkforceshow.com
Mike McGeehan --American AI

info@theworkforceshow.com

Play Episode Listen Later Oct 10, 2019 28:33


Sponsored by: TWFS, Scientific Logic, LookingGlassCyber, FairfaxCity. In this interview, Michael discusses with Al Naqvi how RPA is the "doer" and A.I. is the "thinker" in automation, and how everyone will be impacted by RPA in the future. Michael is the Executive Director of Strategy and Business Development at Blue Prism, which was named one of the Best Workplaces for Innovators by Fast Company. His specialties include: Robotic Process Automation (RPA), Intelligent Automation, Strategy, Leadership, Business Development, Analytics and Analytic Solutions, Credit Scoring, Information/Data Management, Technology, Financial Services, Small Business, Risk Management, Government Contracting, Consulting Services, Decision Management Software and Solutions, Governance, Risk and Compliance (GRC) Software and Solutions. Michael is a graduate of Muhlenberg College in Pennsylvania.

Artificially Intelligent
74: Regulating Facebook

Artificially Intelligent

Play Episode Listen Later Apr 26, 2019 55:18


Zuckerberg released an op-ed in the Washington Post asking to be regulated. He paints this as a win for consumers, but in reality, it's a call to put a government wall around Facebook to prevent competition. We look at the details of his letter and discuss Google's failed AI ethics board and Trump's recent Executive Order on American AI dominance.

Links:
- Ep 20: Musk vs. Zuck and the Future of AI
- Zuckerberg's WaPo Op-Ed
- Mark Zuckerberg Facebook Regulations Explained
- Google Cancels AI Ethics Board
- Dangers of Government Funded Artificial Intelligence
- EU Article 11 and 13

Follow us and leave a rating! iTunes Homepage | Twitter @artlyintelly | Facebook

Rationally Speaking
Rationally Speaking #231 - Helen Toner on "Misconceptions about China and artificial intelligence"

Rationally Speaking

Play Episode Listen Later Apr 15, 2019 59:00


Helen Toner, the director of strategy at Georgetown's Center for Security and Emerging Technology (CSET), shares her observations from the last few years of talking with AI scientists and policymakers in the US and China. Helen and Julia discuss, among other things:
- How do the views of Chinese and American AI scientists differ?
- How is media coverage of China misleading?
- Why the notion of an "AI arms race" is flawed
- Why measures of China's AI capabilities are overstated
- Reasons for optimism and pessimism about international cooperation over AI

Peggy Smedley Show
02/19/19 American AI Initiative

Peggy Smedley Show

Play Episode Listen Later Feb 20, 2019 14:55


Peggy Smedley talks about the American AI Initiative and what the U.S. is doing to preserve its role in innovation. She explains that it is a multi-pronged approach to maintaining and accelerating America's leadership in AI and it makes a point of saying it intends to prepare the U.S. workforce to adapt and thrive in this new age of AI. Further, she compares President Trump's American AI Initiative to former President Obama's big data initiatives. Calling for action, she says we must come together on important issues to continue to keep innovation moving forward.

Town Hall Seattle Science Series

The United States has long been the global leader in Artificial Intelligence. Dr. Kai-Fu Lee—one of the world’s most respected experts on AI—reveals that China has suddenly caught up to the US at an astonishingly rapid pace. He joined us with insight from his provocative book AI Superpowers: China, Silicon Valley, and the New World Order to envision China and the US forming a powerful duopoly in AI—one that is based on each nation’s unique and traditional cultural inclinations. Dr. Lee predicted that Chinese and American AI will have a stunning impact on traditional blue-collar industries—and a devastating effect on white-collar professions. He outlined how millions of suddenly displaced workers will need to find new ways to make their lives meaningful, and how government policies will have to deal with the unprecedented inequality between the haves and the have-nots. Join Lee for a sobering prognosis on the future of global advances in AI and the profound changes coming to our world sooner than we think. Recorded live at The Collective by Town Hall Seattle Thursday, September 27, 2018. 

The Anfield Index Podcast
American AI Podcast: Sillybeans Yank Pod

The Anfield Index Podcast

Play Episode Listen Later Dec 15, 2015 55:02


Dylan, Jon, and Justin are back with the widely (un)acclaimed Dave Hendrick! A pod literally years in the making: the boys discuss Klopp, then all hell breaks loose. With weird uncles, places Donald Trump can go, and American visas to follow, it's the Anfield Index USA podcast!

The Anfield Index Podcast
American AI Podcast: Episode 8 - Klopp, Tactics and Bournemouth

The Anfield Index Podcast

Play Episode Listen Later Oct 31, 2015 59:48


Dylan Baker is back with the American AI Podcast along with Mexican friends! Jason Belk, Aly Cardoza, Andres Palafox and Samuel Chuicha join Dylan in this Tex-Mex Special! They discuss Klopp, tactics and cover the Bournemouth game in the Capital One Cup!

The Anfield Index Podcast
Southampton Vs Liverpool Preview Show - The Saints Are Coming!

The Anfield Index Podcast

Play Episode Listen Later Feb 17, 2015 48:01


This special global edition of the Anfield Index Premier League Preview Show looks forward to the clash between the two form teams in the Premier League, Southampton and Liverpool. Kaylon (AI South Africa) hosts Leroy (AI Malaysia), Erik (AI Sweden) and Martin (AI Norway). Joining them to make his AI podcast debut is American AI writer Dylan Baker. The panel gives their impressions of the season the Saints and their manager Ronald Koeman have had thus far, as well as how Southampton has changed its approach over the course of the season.