Podcasts about whether AI

  • 93 PODCASTS
  • 116 EPISODES
  • 48m AVG DURATION
  • 1 WEEKLY EPISODE
  • May 26, 2025 LATEST



Best podcasts about whether AI

Latest podcast episodes about whether AI

You Can Learn Chinese
Are AI Tutors the Future of Language Learning?

You Can Learn Chinese

Play Episode Listen Later May 26, 2025 36:55


AI tutors are everywhere—but are they actually good for learning Chinese? In this episode, Jared and John take a deep dive into the fast-evolving world of AI-powered language learning tools. They explore how these AI tutors work and why tools that work well in English often fall short in Chinese.
You'll learn:
- The surprising limitations of AI when it comes to staying within beginner-friendly vocabulary
- How AI tutors compare to human teachers in giving corrections (including recasting!)
- Why voice recognition can be a dealbreaker—especially for Chinese tones
- What makes a good AI language partner... and where most still fall short
- Whether AI tutors reduce anxiety or just reduce motivation
You'll get practical tips for using AI tools effectively depending on your Chinese level and what features to look for if you're exploring AI conversation practice or personalized lessons. Curious or skeptical about AI tutors? This episode will help you evaluate whether they're worth your time, and how to get the most out of them.
Links from the episode: Recasting in Language Learning | SinoSplice
Do you have a story to share? Reach out to us

SmartBug on Tap
From Guesswork to Greatness: Paid Media in the AI Era

SmartBug on Tap

Play Episode Listen Later May 22, 2025 36:08


In this episode of SmartBug on Tap, “From Guesswork to Greatness: Paid Media in the AI Era,” we dive into how AI is transforming digital advertising—and what marketers need to know to stay competitive. Join Paul Schmidt, VP of Marketing at SmartBug, and Louis-Claude Martin, a seasoned paid media expert at SmartBug, as they unpack the real impact of AI on campaign strategy, targeting, and performance. From the power of first-party data to the evolving role of media managers, this episode reveals how to shift from manual guesswork to data-backed greatness in the age of AI.

100x Entrepreneur
3 Tech Founders on Whether AI Will Replace Your Job ft Rahul, Ananda and Vishwa

100x Entrepreneur

Play Episode Listen Later May 11, 2025 38:28


“Any sufficiently advanced technology is indistinguishable from magic.” In this episode of The Neon Show, Vishwa (Co-founder of ZenDuty) is joined by Rahul Sasi (Co-founder of CloudSEK) and Ananda Krishna (Co-founder of Astra Security). They share how AI feels magical now, and how, as founders, they are trying to sprinkle this magic everywhere, from how they build products to how they build teams and everything that matters.
-------------
00:00 Everything is being Reimagined!
00:41 Meet the New Hosts
01:08 Founders' Biggest Time Killer
05:10 Why entrepreneurs should fail fast?
11:19 Will your job inevitably change?
17:40 How AI has reimagined engineering jobs?
19:13 PMs & Designers have New workflows
20:35 Why everyone should learn to Prompt?
24:14 Is your team using AI Budgets efficiently?
26:59 Do people trust AI chatbots?
32:18 Does sales still need humans?
35:53 Do we expect empathy from AI?
-------------
Check us out on:
Website: https://neon.fund/
Instagram: https://www.instagram.com/theneonshoww/
LinkedIn: https://www.linkedin.com/company/beneon/
Twitter: https://x.com/TheNeonShoww
Connect with Siddhartha on:
LinkedIn: https://www.linkedin.com/in/siddharthaahluwalia/
Twitter: https://x.com/siddharthaa7
-------------
This video is for informational purposes only. The views expressed are those of the individuals quoted and do not constitute professional advice.
Send us a text

The Tech Trek
Why Tech Debt Isn't the Enemy

The Tech Trek

Play Episode Listen Later May 9, 2025 26:05


In this episode, Amir sits down with Brent Keator, an expert advisor at Primary Venture Partners, to unpack one of the most debated engineering challenges: tech debt versus reengineering. They explore how to define tech debt, when to refactor versus rebuild, the ROI of revisiting old code, and how AI is (and isn't) changing the equation. This is a must-listen for engineering leaders navigating complex technical decisions in fast-moving environments.

LEVITY
#22 Aging will be cured within 20 years — here's why | Prof. Derya Unutmaz

LEVITY

Play Episode Listen Later Apr 29, 2025 112:58


Lately, there's been growing pushback against the idea that AI will transform geroscience in the short term. When Nobel laureate Demis Hassabis told 60 Minutes that AI could help cure every disease within 5–10 years, many in the longevity and biotech communities scoffed. Leading aging biologists called it wishful thinking - or outright fantasy. They argue that we still lack crucial biological data to train AI models, and that experiments and clinical trials move too slowly to change the timeline. Our guest in this episode, Professor Derya Unutmaz, knows these objections well. But he's firmly on Team Hassabis. In fact, Unutmaz goes even further. He says we won't just cure diseases - we'll solve aging itself within the next 20 years. And best of all, he offers a surprisingly detailed, concrete explanation of how it will happen: building virtual cells, modeling entire biological systems *in silico*, and dramatically accelerating drug discovery — powered by next-generation AI reasoning engines.

The IoT Podcast
Inside the NXP Smart Home Lab: Edge AI in 2025 - Where Are We, Actually? | Edge of Tomorrow: The Edge AI Debate with NXP's Davis Sawyer & Anthony Huereca

The IoT Podcast

Play Episode Listen Later Apr 24, 2025 35:10


We've been talking about smarter devices for years - but what does progress actually look like today? In this episode of our spin-off series Edge of Tomorrow: The Edge AI Debate, created in collaboration with the @edgeaifoundation, Host Tom White heads to NXP Semiconductors' Smart Home Lab to sit down with Davis Sawyer, AI Product Manager, and Anthony Huereca, Senior Embedded Systems Engineer. Together, they explore where Edge AI really stands in 2025, from smaller, more efficient models to the gap between what people want and what's actually being built. They dive into the current state of embedded intelligence, real-world applications in the smart home, technical and practical challenges, and what the future might look like, with a focus on what's actually working today. Expect thoughts on...

Future of UX
#109 The UX Design Process in the Age of AI – Where Does AI Help (and Where Does It Hurt)?

Future of UX

Play Episode Listen Later Apr 17, 2025 22:07


Utah Stories Show
The Real Impact of Tariffs on Small Businesses | Utah Stories Podcast with Scott Brown

Utah Stories Show

Play Episode Listen Later Mar 31, 2025 21:23


How are the latest tariffs impacting small businesses in America? In this episode of the Utah Stories Podcast, we sit down with Scott Brown, the founder of Paddle Smash, to discuss how rising import tariffs are threatening small businesses that rely on overseas manufacturing. Scott shares:

The Legalpreneurs Sandbox
Episode 207: Legal GenAI Conversations Series – Legal leaders in the AI era – Are we there yet?

The Legalpreneurs Sandbox

Play Episode Listen Later Mar 20, 2025 61:23


Every aspect of legal practice is transforming before our eyes, with AI as the catalyst and the enabler. It's a challenging time for law firm leaders - it's definitely not a role for the faint-hearted. On 18 March 2025, in the second podcast in CLI's Legal GenAI Conversations Series, Terri Mottershead, Executive Director of the Centre for Legal Innovation, was joined by three law firm leaders from Australia and New Zealand to discuss this and much more in Legal leaders in the AI era – Are we there yet?
Simon Newcomb, Partner, Clayton Utz
Dan Proietto, Chief Executive Partner, Lander & Rogers
Prue Tyler, Founder and Director, SHIFT Advisory Limited
Topics covered in this session included:
Whether law firm leaders need to think differently and lead differently in the AI era
Whether AI demands a fundamental rethink of law firm structures, billing models, and governance, and if so, how
How you reskill lawyers, attract new talent, create multidisciplinary teams, and redefine roles to thrive alongside AI
How you introduce and embed AI without overwhelming everyone in the process
With AI democratising legal knowledge and making legal services/products/solutions more accessible, how law firms can stand out, create, and sustain unique value
Where law firms need to be on the AI Maturity Scale in the next 12 months, why, and how they will know if they have hit the mark
You'll find details about the other topics we'll be discussing in this series here. If you would prefer to watch rather than listen to this episode, you'll find the video in our CLI-Collaborate (CLIC) free Resource Hub here. Don't forget to join CLI's free Legal Generative AI Community here – it's a lightly curated daily news feed on all things legal GenAI.

Leveraging AI
172 | AI agents are taking over, Google is on fire with AI releases, using AI to talk to animals, and many more important AI news for the week ending on March 14, 2025

Leveraging AI

Play Episode Listen Later Mar 15, 2025 44:08 Transcription Available


The AI revolution isn't slowing down—if anything, it's sprinting forward. This week, we're diving into the latest AI breakthroughs, including Google's relentless AI releases, the rise of Manus—China's viral AI agent—and OpenAI's latest play to dominate the industry. Oh, and we might just be on the verge of using AI to talk to animals. (Yes, really.)
In this episode, you'll discover:
How AI agents like Manus are redefining automation and what it means for businesses.
The game-changing advancements from Google, OpenAI, and Anthropic—and how they impact you.
Why AI-powered code generation is skyrocketing, with some startups using AI to write 95% of their code.
The security and control challenges of AI agents (and what could happen if they start making their own decisions).
Whether AI really can help us talk to animals—and what it means for communication beyond humans.
AI is evolving fast, and business leaders can't afford to sit on the sidelines. Tune in now to stay ahead of the curve!
About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Free AI Consultation: https://multiplai.ai/book-a-call/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

RIMScast
A Brand New Day with RIMS President Kristen Peed

RIMScast

Play Episode Listen Later Mar 11, 2025 32:13


Welcome to RIMScast. Your host is Justin Smulison, Business Content Manager at RIMS, the Risk and Insurance Management Society. Our guest, Kristen Peed, is the Chief Risk Officer of Sequoia, and RIMS 2025 President. Kristen was recently promoted to Chief Risk Officer. She talks about that role and how it differs from her other risk roles. Kristen speaks of a few of the risks to organizations today. She shares two stories of mentorship from her past and her efforts to provide mentorship today. Kristen shares thoughts about the evolving role of the risk manager and her pathway to the C-Suite for a seat at the table. She discusses the legislative summit, the topic of third-party-funded litigation, and the need for transparency and regulation. Justin and Kristen discuss how every day is a brand new day for RIMS, what will be celebrated at RISKWORLD 2025, and a couple of new RIMS initiatives you can expect to learn about there. Kristen shares her gratitude to all the RIMS volunteers who make her job as 2025 president possible. Listen for Kristen's career advancement advice and her final suggestion for growth. Key Takeaways: [:01] About RIMS and RIMScast. [:14] Public registration is open for RISKWORLD 2025! Engage Today and Embrace Tomorrow with RIMS at RISKWORLD from May 4th through May 7th in Chicago, Illinois. Register at RIMS.org/RISKWORLD and the link in this episode's show notes. [:31] About this episode of RIMScast. We will be joined by RIMS President Kristen Peed. [:48] RIMS-CRMP Workshops! The next workshop will be March 19th and 20th. Register by March 12th. As part of our continuing strategic partnership with PARIMA, we have a two-day course coming up on April 22nd and 23rd. [1:03] Links to these courses can be found through the Certification page of RIMS.org and this episode's show notes. [1:10] Virtual Workshops! On March 26th, Pat Saporito will host “Generative AI for Risk Management”. [1:18] On April 16th and 17th, Chris Hansen will lead “Managing Worker Compensation, Employer's Liability, and Employment Practices in the U.S.”. [1:31] A link to the full schedule of virtual workshops can be found on the RIMS.org/education and RIMS.org/education/online-learning pages. A link is also in this episode's show notes. [1:40] RISKWORLD registration is open. Engage Today and Embrace Tomorrow, May 4th through 7th in Chicago. Register at RIMS.org/RISKWORLD. Also, remember there will be lots of pre-conference workshops being held in Chicago just ahead of RISKWORLD. [1:57] These courses include “Applying and Integrating ERM,” “Captives as an Alternate Risk Financing Technique,” “Contractual Risk Transfer,” “Fundamentals of Insurance,” “Fundamentals of Risk Management,” RIMS-CRMP Exam Prep, and more! Links are in the show notes. [2:17] Our guest today is the Chief Risk Officer at Sequoia and the RIMS 2025 President, Kristen Peed. We're going to talk about her risk management career journey, what it took for her to ascend to the level of Chief Risk Officer, and what that means for her organization. [2:38] We will also talk about the power of mentorship, networking, and what's in store for us at RISKWORLD 2025 and throughout the year as we celebrate the 75th anniversary of RIMS. [2:50] Interview! RIMS 2025 President, Kristen Peed, welcome to RIMScast! [3:07] This is Kristen's eighth year on the RIMS Board. It's been an amazing journey! Most of her best friends are RIMS staff members or RIMS members, all over the globe. RIMS is a huge part of her life! 
Justin joined RIMS almost eight years ago. They have known each other for years. [3:33] Justin shares a memory with Kristen in Halifax. Kristen took part in an impromptu presentation, in the role of a petulant child. [4:06] Kristen wears sneakers; she has branded herself as the sneaker queen. She has stopped counting how many pairs of sneakers she has. [4:34] This year is the 75th anniversary of RIMS. There is a big RISKWORLD in May; its theme is Engage Today and Embrace Tomorrow with RIMS. [4:54] Kristen Peed was recently promoted to Chief Risk Officer of Sequoia. Sequoia has ambitious growth goals, which is one of the reasons Kristen joined it. In her new role, Kristen has oversight of all corporate risks. [5:27] These include enterprise risks, IT risks, security risks, property & casualty risks, and E&O risks. It's overarching. [5:46] Kristen sees there has been a slow transition for risk managers in general, from a transactional, procurement role to a strategic role, where they see opportunities with risk. Where they see places where they can offer value and insight. [6:07] Sequoia is a client-based company. Clients are reaching out to Kristen for help dealing with deep-fake interviews. Kristen looked to the RIMS Board of Directors and Cherise Papadopolo, RIMS VP of DEI, People, & Culture, and got some helpful HR information. [6:48] Kristen was able to provide strategic advice to a Sequoia client's Chief People Officer. It's a perfect example of how RIMS helps risk managers to be viewed as strategic. The RIMS community is part of the reason Kristen was able to take on the role of Chief Risk Officer. [7:13] The role is something Kristen has been preparing for ever since she started as a risk analyst. Every step has been more of a strategic and leadership role rather than being in the weeds doing stuff. The Chief Risk Officer helps navigate and chart the map for the “captain.” [7:55] Kristen's career advancement came both from having a plan and from being seen for her hard work. She has learned to ask for things more. She was fortunate to have some success early in her career and capitalized on it. A new boss provided amazing mentorship for Kristen. [8:41] She asked, “What's the next role for Kristen?” Kristen realized she would like to be considered for a Chief Risk Officer role. Kristen's boss understood her value and wanted to make sure she felt appreciated. Six months later, Kristen was offered the role if she wanted to take it. [9:45] Part of it is making your leadership aware that these titles exist, showing your value, and asking for it. [10:05] One of Kristen's early successes at Sequoia involved using her RIMS network to put together a presentation on using surplus funds from the captive PEO insurer to fuel additional risk management activities. Leadership was excited and Kristen implemented it right away. [11:03] Another success was the consolidation of insurance programs. Sequoia had grown quickly and had renewal dates in different places. Kristen showed her market savvy and leveraged her relationships with carriers to bring down some initial premium costs. [12:00] Kristen says that putting the C-level title on a risk officer differentiates it. When she partners with the CISO or the Chief Data Officer, they are on equal footing. The C-level carries more weight. It also helps carriers in the marketplace see her as being in company leadership. 
[12:42] When Kristen meets with underwriters and carriers, they have a greater sense of comfort knowing she has a seat at the table and understands the direction of the company and how to mitigate against risk before it hits insurance. [13:01] Plug Time! RIMS Webinars! On March 13th, our friends from Global Risk Consultants will return to discuss “How to Make Your Property Insurance Submission AI-Ready”. [13:15] On Wednesday, March 26th at 2:00 p.m. Eastern Time, members of the RIMS Strategic and Enterprise Risk Management Council will extend the dialog that began in the recent RIMS Executive Report “Understanding Interconnected Risks”. [13:30] On Thursday, March 27th, Descartes Underwriting will make its RIMS Webinar debut with a session about parametric insurance. On April 3rd, join Zurich for “Understanding Third-Party Litigation Funding”. [13:43] On April 10th, Audit Board will present, “What CISOs Want Risk Executives to Know About Cyber Risk in 2025”. [13:51] Following the success of their recent webinar, HUB International returns for the next installment of their Ready for Tomorrow Series, “From Defense to Prevention: Strengthening Your Liability Risk Management Approach”. That session will be on April 17th. [14:07] On April 24th, RiskConnect returns to deliver “Better Together: The Marriage of Insurable Risk and Business Continuity”. [14:40] More webinars will be announced soon and added to the RIMS.org/webinars page. Go there to register. Registration is complimentary for RIMS members. [14:26] Let's Return to Our Interview with RIMS 2025 President Kristen Peed! [14:37] As a follow-up to the RIMScast episode with Mark Prysock on RIMS's legislative priorities, Kristen talks about third-party-funded litigation. It affects risk managers, carriers, and brokers because of premium pricing. [15:06] It's necessary to have transparency around third-party-funded litigation and eliminate the ability of foreign entities to fund and profit from it. The concern is around nuclear verdicts that are detrimental to the industry as a whole. [15:39] Nuclear verdicts will impact pricing, not only for that one company but for all risk managers. These verdicts are not sustainable. We need transparency. We want Congress to act upon this. We can all get behind this. Kristen doesn't think this is a partisan issue. [15:58] Being able to partner with our carriers and brokers to have a strong message on the Hill is critical to the success and continuation of our industry. [16:08] Time and money are finite resources. There is no bottomless pit of money. [16:30] Kristen will soon be going to Capitol Hill with fellow risk practitioners for the RIMS Legislative Summit. [16:43] Kristen got involved in legislative advocacy after getting a mailer for the Legislative Summit. She attended and met the staff, including Robert Cartwright. She saw It was an amazing platform for risk managers to have their voices heard by the people they elect. [17:17] The RIMS Legislative Summit is one of Kristen's favorite annual events. It can be so impactful to the community as a whole. It will be March 19th and 20th. This is your last chance to register for it. Prepare for the trip to D.C. [17:54] March is Women's History Month. Kristen says she was lucky to have some key female leaders placed in her life at critical moments, that helped her down this path. [18:15] At CBIZ, Nancy Mallard was the GC for CBIZ's Benefits and Insurance Division. She was the first female chair of the CIAB (The Council). 
Kristen saw Nancy's leadership throughout the years in the industry. Kristen used her great example to figure out how to get involved at RIMS. [19:15] Kristen's new boss, Kathy Ross, is amazing. She's been a great advocate for Kristen and it has been awesome to learn from her how to elevate her leadership skills. Kristen feels blessed to have had these two impactful women in her life. [19:47] Sequoia's culture is paramount to its people. One of its service commandments is “Be of extraordinary value to others.” Sequoia's mission is “Coming through for others that put their trust in us.” Kristen takes these values to heart, whether in mentoring or calling on the phone.  [20:39] Kristen looks at how she can help create career paths for people and develop them, at Sequoia and in the risk community, as well. Kristen brings together interns and “externs” from other companies and stays in touch with them. She always asks them to pay the help forward. [21:35] Plug Time! Kristen Peed was a board member of the Spencer Educational Foundation. [21:41] The Spencer Educational Foundation's goal to help build a talent pipeline of risk management and insurance professionals is achieved in part by its collaboration with risk management and insurance educators across the U.S. and Canada. [21:59] Since 2010, Spencer has awarded over $3.3 million in general grants to support over 130 student-centered experiential learning initiatives at universities and RMI non-profits. [22:13] Spencer's 2026 application process will open on May 1st, 2025, and close on July 30th, 2025. General Grant awardees are typically notified at the end of October. Learn more about Spencer's General Grants through the Programs tab of SpencerEd.org. [22:31] Spencer has several events lined up before and during RISKWORLD 2025. On May 3rd, there's the Spencer-CNA Pickle Ball Social, on May 4th, the Spencer-Gallagher Golf Tournament, on May 5th, the Spencer Soiree, and on May 6th, the Spencer-Sedgwick 5K Fun Run. [22:51] You can register for or sponsor any of these through the links on this page or by visiting SpencerEd.org/riskworld2025. [23:00] The Conclusion of My Interview with RIMS 2025 President, Kristen Peed! [23:27] Kristen's theme for her presidency is Brand New Day. Every day is a brand new day of risks. Every day, new risks are popping up. Whether AI, advancements in cyber threats, wildfires, or climate change, everything is changing. [24:12] It's a brand new day for risk managers. We have to be more nimble and strategic. That means it's a brand new day for RIMS. It's about how RIMS is going to support us in this moment and also as we move into the future, making sure we stay relevant for the next 75 years and on. [24:41] A new track, Alternative Risk Transfer, highly focused on captives, is being presented at RISKWORLD 2025. This is something risk managers have been asking to learn more about. It's part of the strategic conversation; how do you start to offer value back to your company? [25:09] How do you more strategically look at risk from a long-term perspective? That dovetails with Enterprise Risk Management. RIMS ERM content is relevant and has evolved over time. Captives will continue to be a value-generating part of the profession. [26:06] The 75th anniversary of RIMS is special for Kristen because it shows that RIMS has come so far. This year, RIMS is launching the RIMS Foundation to create opportunities for early-career students. That's the critical time to help them stay in the profession. 
[26:47] The RIMS Foundation will provide them with opportunities for growth, learning, and networking. This is a graying industry. We need to attract the next generation of talent to the industry and fill the pipeline with lots of people to backfill when current risk professionals retire. [27:21] Also in 2025, RIMS has a brand new Texas regional conference from August 4th through the 6th, on the San Antonio River Walk. People are reaching out to Kristen to submit sessions. It's exciting to see all the buzz around that conference. [28:35] Kristen concludes: “Never quit learning. In my role, I've been doing this for two-plus decades, but I learn something new every day. When I took the RIMS-CRMP, I learned even more. It's the only risk management credential accredited by ANSI. Go and get your RIMS-CRMP.” [28:56] “It is one of the best educational opportunities you will have to demonstrate your proficiency and excellence and show your senior leadership team that you have the skills to elevate and provide strategic direction to your company.” [29:18] Justin notes that later this year, you can look for a RIMS-CRMP story, featuring RIMS 2025 President Kristen Peed. [29:23] Kristen, it is such a pleasure to see you! I'm so happy that you're our president this year and I'm happy for your continued success. I look forward to being able to celebrate with you in May at RISKWORLD 2025! [29:35] Kristen says she is honored to lead RIMS this year but it wouldn't be possible without all the other volunteer risk professionals around the world, all our chapter leaders, all committee members, and all our council volunteers. [29:53] Kristen wants to thank everybody who donates their time and energy to making RIMS so relevant and future-thinking. I could not do what I do without your support. [30:10] Special thanks again to RIMS 2025 President, Kristen Peed. Be sure to catch her at RISKWORLD 2025. She will have a presence on the main stage and during many of the ceremonies. Be sure to register for RISKWORLD 2025 at RIMS.org/riskworld. [30:23] More RIMS Plugs! You can sponsor a RIMScast episode for this, our weekly show, or a dedicated episode. Links to sponsored episodes are in the show notes. [30:48] RIMScast has a global audience of risk and insurance professionals, legal professionals, students, business leaders, C-Suite executives, and more. Let's collaborate and help you reach them! Contact pd@rims.org for more information. [31:05] Become a RIMS member and get access to the tools, thought leadership, and network you need to succeed. Visit RIMS.org/membership or email membershipdept@RIMS.org for more information. [31:21] Risk Knowledge is the RIMS searchable content library that provides relevant information for today's risk professionals. Materials include RIMS executive reports, survey findings, contributed articles, industry research, benchmarking data, and more. [31:35] For the best reporting on the profession of risk management, read Risk Management Magazine at RMMagazine.com. It is written and published by the best minds in risk management. [31:48] Justin Smulison is the Business Content Manager at RIMS. You can email Justin at Content@RIMS.org. [31:54] Thank you all for your continued support and engagement on social media channels! We appreciate all your kind words. Listen every week! Stay safe!   Mentioned in this Episode: RISKWORLD 2025 — May 4‒7 | Register today! RIMS Legislative Summit — March 19‒20, 2025 Nominations for the Donald M. 
Stuart Award [Canada] Spencer Educational Foundation — General Grants 2026 — Application Dates Spencer's RISKWORLD Events — Register or Sponsor! RIMS-Certified Risk Management Professional (RIMS-CRMP) RISK PAC | RIMS Advocacy RIMS Risk Management magazine RIMS Leadership Corner — Featuring Kristen Peed RIMS Webinars: RIMS.org/Webinars “How to Make Your Property Insurance Submission AI-Ready” | Sponsored by Global Risk Consultants, a TÜV SÜD Company | March 13, 2025 “Understanding Interconnected Risks” | Presented by RIMS and the Strategic and Enterprise Risk Management Council | March 26, 2025 “Parametric Insurance and Climate Risk: An Innovative Tool for CAT Risk Management” | Sponsored by Descartes Underwriting | March 27, 2025 “Understanding Third-Party Litigation Funding” | Sponsored by Zurich | April 3, 2025 “What CISOs Want Risk Executives to Know About Cyber Risk in 2025” | Sponsored by Auditboard | April 10, 2025 “Ready for Tomorrow? From Defense to Prevention: Strengthening Your Liability Risk Management Approach” | Sponsored by Hub International | April 17, 2025 “Better Together: The Marriage of Insurable Risk and Business Continuity” | Sponsored by Riskonnect | April 24, 2025   Upcoming RIMS-CRMP Prep Virtual Workshops: RIMS-CRMP | March 19‒20 | Register by March 12 RIMS-CRMP Exam Prep with PARIMA | April 22‒23 Full RIMS-CRMP Prep Course Schedule   Upcoming Virtual Workshops: “Generative AI for Risk Management” | March 26 | Instructor: Pat Saporito “Managing Worker Compensation, Employer's Liability and Employment Practices in the U.S.” | April 16‒17 | Instructor: Chris Hansen See the full calendar of RIMS Virtual Workshops RIMS-CRMP Prep Workshops   Related RIMScast Episodes: “Kicking off 2025 with RIMS CEO Gary LaBranche” “RIMS Legislative Priorities in 2025 with Mark Prysock” “(Re)Humanizing Leadership in Risk Management with Holly Ransom” (RISKWORLD Keynote) “Risk and Relatability with Rachel DeAlto, RISKWORLD Keynote” “Risk and Leadership Patterns with Super Bowl Champion Ryan Harris” (RISKWORLD Keynote)   Sponsored RIMScast Episodes: “Simplifying the Challenges of OSHA Recordkeeping” | Sponsored by Medcor “Risk Management in a Changing World: A Deep Dive into AXA's 2024 Future Risks Report” | Sponsored by AXA XL “How Insurance Builds Resilience Against An Active Assailant Attack” | Sponsored by Merrill Herzog “Third-Party and Cyber Risk Management Tips” | Sponsored by Alliant “RMIS Innovation with Archer” | Sponsored by Archer “Navigating Commercial Property Risks with Captives” | Sponsored by Zurich “Breaking Down Silos: AXA XL's New Approach to Casualty Insurance” | Sponsored by AXA XL “Weathering Today's Property Claims Management Challenges” | Sponsored by AXA XL “Storm Prep 2024: The Growing Impact of Convective Storms and Hail” | Sponsored by Global Risk Consultants, a TÜV SÜD Company “Partnering Against Cyberrisk” | Sponsored by AXA XL “Harnessing the Power of Data and Analytics for Effective Risk Management” | Sponsored by Marsh “Accident Prevention — The Winning Formula For Construction and Insurance” | Sponsored by Otoos “Platinum Protection: Underwriting and Risk Engineering's Role in Protecting Commercial Properties” | Sponsored by AXA XL “Elevating RMIS — The Archer Way” | Sponsored by Archer   RIMS Publications, Content, and Links: RIMS Membership — Whether you are a new member or need to transition, be a part of the global risk management community! 
RIMS Virtual Workshops On-Demand Webinars RIMS-Certified Risk Management Professional (RIMS-CRMP) RISK PAC | RIMS Advocacy RIMS Strategic & Enterprise Risk Center RIMS-CRMP Stories — Featuring RIMS Vice President Manny Padilla!   RIMS Events, Education, and Services: RIMS Risk Maturity Model®   Sponsor RIMScast: Contact sales@rims.org or pd@rims.org for more information.   Want to Learn More? Keep up with the podcast on RIMS.org, and listen on Spotify and Apple Podcasts.   Have a question or suggestion? Email: Content@rims.org.   Join the Conversation! Follow @RIMSorg on Facebook, Twitter, and LinkedIn.   About our guest: Kristen Peed, Chief Risk Officer at Sequoia and the RIMS 2025 President   Production and engineering provided by Podfly.  

High Net Purpose
Next Gen & Tech: Making Digital Childhood Safer with Alexandra Evans

High Net Purpose

Play Episode Listen Later Feb 27, 2025 69:04


The next generation faces a challenge their parents never did: a digital childhood that shapes their future. Alexandra Evans has spent her career fighting to make the digital space safer. From shaping human rights law at Mishcon de Reya to leading policy at TikTok and the British Board of Film Classification, she's been at the forefront of child safety in a world where technology is evolving faster than regulation, and children are paying the price. Now, as the founder of the Digital Childhood Agency, Alexandra is working with policymakers, brands, and tech platforms to create a future where kids can thrive online—not just survive it.
In this episode, you'll learn:
How parents, policymakers, and businesses can take back control in a rapidly evolving digital world.
Why screen addiction is not the biggest problem.
How social media algorithms push kids towards riskier content.
The real power and responsibility of tech companies in shaping digital childhood.
The truth about banning smartphones in schools.
Whether AI will become the biggest threat to childhood or a tool for protection.
What needs to change for a safer digital future.
Hosted on Acast. See acast.com/privacy for more information.

Rise Up In Business
AI & Business Contracts - should you go there?

Rise Up In Business

Play Episode Listen Later Feb 25, 2025 17:03 Transcription Available


Let's talk about AI and business contracts. If you're like me, you're always on the lookout for tools that could streamline our work processes, save us time and, of course, save us a bit of money. But when it comes to something as critical as business contracts, should we actually entrust this task to AI? This discussion is more important than ever, considering AI is becoming an indelible part of our lives, whether we like it or not. To give you some context, when I talk about AI in this realm, I refer to generative AI chatbots such as OpenAI's ChatGPT and Anthropic's Claude. They're great at churning out content, but when the stakes are high and involve the legalities of business contracts, there's a lot to consider. Today, we'll unpack some of the looming questions around AI, exploring:
- Whether AI should have a role in developing business contracts
- The risk of breaching confidentiality when using AI for automating business contracts
- The importance of incorporating human knowledge into AI contract review and analysis
- The positive side of AI in business
While AI offers some intriguing prospects, when it comes to the complexities and nuances of business contracts, I would advise caution. The technology isn't yet advanced enough to replace the tailored, up-to-date expertise required in legal practices. As we tread this fast-moving landscape, let's focus on harnessing AI to benefit our businesses, but always within the parameters of due diligence and legal compliance.
LINKS:
Episode Website: https://tmsolicitor.com.au/rise-up-in-business-podcast/
Discover the Masterclass Series here
Check Your Legals with the Essential Legal Checklist here
Book a Free 20-minute Initial Consult with me here
Join me on Instagram here

Unchained
Olaf Carlson-Wee and Rushi Manche on Why Move Is Safer for Crypto - Ep. 777

Unchained

Play Episode Listen Later Feb 4, 2025 68:38


Listen to the episode on Apple Podcasts, Spotify, Pods, Fountain, Podcast Addict, Pocket Casts, Amazon Music, or on your favorite podcast platform. The success of any blockchain isn't just about scalability, security, or decentralization—it's about attracting developers. The easier it is to build, the more innovation happens. Or at least, that's the thesis of Movement Labs co-founder Rushi Manche and Olaf Carlson-Wee, CEO of Polychain Capital. In this episode of Unchained, Rushi explains why Move, originally developed by Meta, is a fundamentally better programming language for crypto than the Ethereum Virtual Machine (EVM). He breaks down how Move's unique approach to security and asset management improves developer experience and why the Movement Network is bringing Move to Ethereum as a layer 2 solution. Olaf shares his thoughts on how alternative programming environments like Move could challenge the dominance of the EVM, why Ethereum is at a critical moment, and how AI-powered financial agents could change how investments work. Show highlights: 2:32 What problems Move solves for crypto and how it got started 8:57 How the programming language is safer than others, specifically for crypto finance 21:00 What's the thesis behind the Movement network 23:12 Why Movement chose to become an Ethereum L2 30:08 Where ETH is headed and what it needs to succeed 32:25 Why Rushi is so bearish on EVM layer 2s 34:59 Whether Ethereum is going through an existential crisis 37:47 Why Rushi believes that modularity will save Ethereum 39:28 How Movement differs from Aptos and Sui 41:36 The importance of developer experience in crypto's growth 44:48 How tokens can signal the significance of content in social media 52:04 Why Olaf thinks we'll soon see an explosive growth of financialized agents 57:19 Whether AI will replace VC investors and other jobs 1:01:38 What Rushi has to say about the Trump team buying MOVE 1:04:09 The significance of the U.S. making crypto a national priority Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com Thank you to our sponsor! Mantle Guest: Rushi Manche, Co-founder of Movement Labs Olaf Carlson-Wee, CEO of Polychain Capital Previous appearances on Unchained: OG Olaf Carlson-Wee on Why His Crypto Thesis Is Stronger Than Ever Olaf Carlson-Wee: ‘If There Is a Money-Losing Exploit, the Money Is Gone'  Why The First Employee Of Coinbase Launched A Hedge Fund To the Moon and Back With Polychain's Olaf Carlson-Wee Special Episode with CNBC's Crypto Trader: Olaf Carlson-Wee on Why This Crypto Winter Is Different From Previous Ones All Things Cryptoeconomics, Pt. 1, With Olaf Carlson-Wee and Ryan Zurrer of Polychain Capital Links Unchained:  Trump's Crypto Project Bought MOVE Tokens as DOGE News Leaked How Solana Beat Out Ethereum to Nab New Crypto Developers in 2024 Chris Dixon on Why We Will Finally See New App Innovation in Crypto 2025 Will Be a Year of Crypto Competition. Can Ethereum Make a Comeback? With AI Agents Now Trading Crypto, What Does Their Future Look Like? Learn more about your ad choices. Visit megaphone.fm/adchoices

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E
466. Investing in xAI, Wiz, and Flexport; Masayoshi Son's Superpower; How Elon Will Win the LLM War; and Whether AI Is an Extinction-Level Event for SaaS (Kevin Jiang)

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E

Play Episode Listen Later Jan 20, 2025 51:38


Kevin Jiang of Mangusta Capital joins Nick to discuss Investing in xAI, Wiz, and Flexport; Masayoshi Son's Superpower; How Elon Will Win the LLM War; and Whether AI Is an Extinction-Level Event for SaaS. In this episode we cover:
Choosing Early-Stage Investing Over Growth Investing
Masayoshi and SoftBank's Investment Decisions
xAI and Elon Musk's Vision for AI
Vertical AI and Industry-Specific Solutions
Scalability and Expansion in Vertical AI
Challenges and Opportunities in AI Adoption
Guest Links: Kevin Jiang's LinkedIn | Company's LinkedIn | Company's Website | Kevin Jiang's Twitter/X
The host of The Full Ratchet is Nick Moran of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area. Want to keep up to date with The Full Ratchet? Follow us on social. You can learn more about New Stack Ventures by visiting our LinkedIn and Twitter. Are you a founder looking for your next investor? Visit our free tool VC-Rank and we'll send a list of potential investors right to your inbox!

The Long Game w/ Elijah Murray
Bryan Trilli: Can AI Have a Soul? Ethics, Consciousness & ‘Soulless Intelligence'

The Long Game w/ Elijah Murray

Play Episode Listen Later Jan 16, 2025 48:28


Bryan Trilli is an AI expert, entrepreneur, and author of Soulless Intelligence. In this conversation, we explore the ethics and philosophical implications of artificial intelligence, focusing on: -The moral dilemmas of AI development. -Whether AI can ever become conscious. -How AI is changing our understanding of humanity, sentience, and value. EPISODE LINKS: Twitter: https://twitter.com/bryantrilli LinkedIn: https://www.linkedin.com/in/bryantrilli Website: https://www.bryantrilli.com TIMESTAMPS: 00:00:00 Exploring Human Sentience and AI 00:03:34 The Ethics and Morality of AI 00:04:50 Philosophical and Religious Perspectives 00:13:25 Defining AGI and Its Implications 00:19:32 Consciousness and Near-Death Experiences 00:24:09 The Intelligence of Machines 00:25:22 Human Worth Beyond Intelligence 00:28:51 Moral Standards and AI 00:31:24 AI and Human Rights 00:38:37 The Future of AI in Physics 00:43:41 Misconceptions and Fears About AI 00:47:27 Closing CONNECT: Website: https://hoo.be/elijahmurray YouTube: https://www.youtube.com/@elijahmurray Twitter: https://twitter.com/elijahmurray Instagram: https://www.instagram.com/elijahmurray LinkedIn: https://www.linkedin.com/in/elijahmurray/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-long-game-w-elijah-murray/ Spotify: https://podcasters.spotify.com/pod/show/elijahmurray RSS: https://anchor.fm/s/3e31c0c/podcast/rss

Unchained
2025 Will Be a Year of Crypto Competition. Can Ethereum Make a Comeback? - Ep. 760

Unchained

Play Episode Listen Later Jan 7, 2025 108:59


Ethereum, once the undisputed leader in the smart contract ecosystem, is facing intense competition from Solana, which has outpaced Ethereum on key metrics such as developer growth. Meanwhile, debates rage within Ethereum's community over governance, scalability, and the Ethereum Foundation's leadership. Adding to the disruption, AI agents are rapidly reshaping DeFi and token launches, reducing barriers to entry and creating new opportunities—and risks—for founders and investors. Is this the next big leap for crypto or just the latest bubble? In this episode, Marc Zeller of Aave Chan Initiative and Kain Warwick of Infinex discuss Ethereum's future, the role of AI in DeFi, and whether Solana's momentum will continue. They also share bold predictions for crypto in 2025 and debate whether Ethereum's fragmented ecosystem can still deliver on its promise.
Show highlights:
03:35 Why 2024 became a turning point for the crypto ecosystem
07:02 How AI agents could reshape onchain innovation and public discourse in 2025
14:47 Whether AI agents might soon compete with VCs
22:10 Why fundamentals-driven crypto projects will gradually dominate the market, according to Marc
26:05 Whether Solana will continue to steal Ethereum's thunder
37:56 Is Base cannibalistic to Ethereum? Can Base or Ethereum compete with Solana?
42:53 Where the ETH ecosystem is headed and whether it can overcome fragmentation, lack of interoperability, and the many L2 tokens detracting from the ETH price
52:30 Whether Coinbase's deep commitment to Ethereum is causing it to discriminate against Solana
1:00:10 How Ethereum's reputation rises and falls with its price action, not its fundamentals, per Kain
1:08:43 What the purpose is of the Ethereum Foundation and whether it should change its approach
1:24:57 Why Kain doesn't think the native rollups proposal is tenable
1:31:12 Whether Ethena is depressing the price of ETH
1:34:00 What Kain and Marc think about crypto-specific phones, especially the Solana Seeker
1:42:07 Kain's and Marc's predictions for 2025
Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com
Thank you to our sponsors! Stellar | Build Better | Robinhood & Arbitrum | Kelp DAO | Polkadot
Guests:
Marc Zeller, founder of Aave Chan Initiative
Kain Warwick, founder of Infinex
Links:
Unchained: Why Robinhood CEO Vlad Tenev Is Betting Big on Crypto and Stablecoins
Unchained: 2024 Was Solana's Best Year Yet. Can It Sustain the Momentum in 2025?
Unchained: What's the Best Way for Ethereum to Grow? Two Ethereans Debate
The Block: Long-running Ethereum newsletter shutting down, cites lack of funding
Unchained: How Solana Beat Out Ethereum to Nab New Crypto Developers in 2024
Evan Van Ness' tweet on the newsletter shutdown
EF's Josh Stark's reply to Van Ness
Unchained: Vitalik Has Gone 'Founder Mode.' Is This Just What Ethereum Needs?
Marc's proposal about the EF
0xMawuko's tweet on ETH governance
Unchained: Are Layer 2s Failing Ethereum? A New Proposal Advocates for Native L2s
Ben Lilly's tweet on Ethena possibly suppressing the price of ETH
Learn more about your ad choices. Visit megaphone.fm/adchoices

Health Hats, the Podcast
From Dick Tracy to AI: Out of Mind to Beyond Mind

Health Hats, the Podcast

Play Episode Listen Later Dec 19, 2024


Demystify AI's evolution, from Netflix recommendations to ChatGPT, exploring how neural networks learn and why even AI creators can't fully explain how it works. Summary: Claude AI was used in creating this summary.

Thinking Deeply about Primary Education
Promise or Peril: Can AI shape the future of education?

Thinking Deeply about Primary Education

Play Episode Listen Later Dec 14, 2024 56:11


Episode 207: In this episode of Thinking Deeply about Primary Education, I'm joined by Dominic Bristow and Hannah Gillott to discuss the potential for meaningful use of AI in education. Together, we examine its promises, pitfalls, and practical applications for classroom settings. We explore: Whether AI has the power to genuinely improve teacher practice. How schools and teachers can avoid falling for costly gimmicks and repackaged tools. Key flags to watch out for when investing in AI solutions. The areas where AI could offer the greatest gains in the next 18 months to 2 years. If you've been curious about AI's impact on education or want to navigate this space effectively, this episode offers insights that are both practical and forward-looking. If you enjoy this episode, please support us by subscribing to our YouTube channel, leaving a review on Apple Podcasts (wherever you listen), or making a donation via www.ko-fi.com/tdape. Have questions or comments? Join our Discord server, where you'll find a special channel for unseen question submissions!

Workplace Stories by RedThread Research
The Biggest Mistakes Companies are Making with AI with Christopher Lind

Workplace Stories by RedThread Research

Play Episode Listen Later Nov 20, 2024 48:35


What happens when AI moves faster than the people who implement it? Christopher Lind, executive advisor and industry expert, shares stories of organizations that got it wrong—sometimes with devastating consequences. From replacing entire teams with AI to accelerating broken processes, the conversation reveals how quickly things can unravel when technology outpaces understanding. At the same time, there's tremendous opportunity if AI is handled with care. We explored what it takes to keep humans at the center of the work while letting AI handle repetitive tasks. This isn't about avoiding AI—it's about understanding how to use it in a way that aligns with our goals, values, and the irreplaceable need for human relationships.
You will want to hear this episode if you are interested in...
Why AI experts are often the most skeptical [0:56]
How generative AI can quickly magnify problems [1:30]
Whether AI should serve us or the other way around [2:00]
Why measuring AI's true cost is so tricky [3:20]
How you might unknowingly rely on AI daily [8:42]
Preparing for the changes AI will bring to jobs [15:00]
What happens when automation goes wrong [35:17]
How one company's overuse of AI caused failure [39:10]
Whether AI's logic is misunderstood or alien [44:00]
Resources & People Mentioned:
Podcast: Future Focused with Christopher Lind
Radiolab's "Shell Game" Episode
Connect with Christopher Lind: LinkedIn: Christopher Lind
Connect With Red Thread Research: Website: Red Thread Research | On LinkedIn | On Facebook | On Twitter
Subscribe to WORKPLACE STORIES

Unchained
The Backstory of How 3 AI Agents Led to the Rise of the Hottest Memecoin, GOAT - Ep. 725

Unchained

Play Episode Listen Later Oct 25, 2024 51:01


Subscribe to our new regulatory newsletter Unregulated: https://unchainedcrypto.substack.com/s/unregulated The two-week-old GOAT memecoin, which hit a market cap of almost $880 million on Thursday, is captivating everyone in crypto. Not because this is memecoin szn, but because its rise was fueled by an AI called Truth Terminal, which is itself a baby of two other AI models.  Teng Yan, founder of Chain of Thought, joins Unchained to break down how this unexpected AI creation has turned into a phenomenon, why it has captured the attention of the crypto world, and what the future holds for AI-driven tokens.  At the end, Laura also discusses with Unchained's regulatory reporter Veronica Irwin two interesting and important news stories: who Kamala Harris is vetting for SEC chair and how one Senate race could inadvertently give Senator Elizabeth Warren more power over crypto.  Show highlights: How an AI experiment unexpectedly led to the creation of GOAT and sparked interest in AI-generated subcultures How "Terminal of Truth" evolved its own personality, gained attention from Marc Andreessen, and began posting about a new "Goat Sea gospel" religion How the spelling mistake sparked skepticism about the AI model What happened with the $50,000 in BTC that Marc Andreessen gave to Truth Terminal Whether an AI can have its own wallet and what the implications are  Whether AI memecoins could start surging on other chains What we can expect in terms of the proliferation of AI memecoins What the future looks like for the intersection of crypto and AI Who Kamala Harris is considering for SEC Chair if she wins the U.S. election Why one Senate race could give Elizabeth Warren more power over crypto Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com Thank you to our sponsors! Polkadot Mantle's FBTC Guest Teng Yan, Founder of Chain of Thought Teng's article: GOAT: The Gospel of Goatse  Links Previous coverage of Unchained on GOAT: GOAT Hits a Record $879 Million Market Cap After Brian Armstrong Offers to Help Truth Terminal GOAT: How AI Agents Talking Turned Into a $268 Million Memecoin 'Religion' Infinite backrooms Andy : https://x.com/AndyAyrey Andy Ayrey's (creator of Truth Terminal) research paper on LLMtheism:  Truth Terminal's X account  Kaito: GOAT's mindshare Timestamps:  00:00 Intro 01:28 How an AI experiment led to the creation of GOAT 06:16 The rise of “Terminal of Truth” and its unexpected evolution 11:57 How a simple spelling mistake raised skepticism 20:09 What happened to the $50,000 in BTC from Marc Andreessen? 21:06 Whether AI models can have their own wallets 24:17 Whether AI memecoins will surge on other chains 26:19 What's next for the rise of AI memecoins 28:35 The future of AI and crypto's intersection 31:39 Who Kamala Harris may consider for SEC Chair 34:12 How one Senate race could boost Elizabeth Warren's power over crypto 40:33 News Recap Learn more about your ad choices. Visit megaphone.fm/adchoices

Late Confirmation by CoinDesk
UNCHAINED: The Backstory of How 3 AI Agents Led to the Rise of the Hottest Memecoin, GOAT

Late Confirmation by CoinDesk

Play Episode Listen Later Oct 25, 2024 50:29


The intersection of artificial intelligence and cryptocurrency is gaining momentum with the rise of memecoins like GOAT, an AI-created token now valued at over $700 million. The two-week-old GOAT memecoin, which hit a market cap of almost $880 million on Thursday, is captivating everyone in crypto. Not because this is memecoin szn, but because its rise was fueled by an AI called Truth Terminal, which is itself a baby of two other AI models. Teng Yan, founder of Chain of Thought, joins Unchained to break down how this unexpected AI creation has turned into a phenomenon, why it has captured the attention of the crypto world, and what the future holds for AI-driven tokens. At the end, Laura also discusses with Unchained's regulatory reporter Veronica Irwin two interesting and important news stories: who Kamala Harris is vetting for SEC chair and how one Senate race could inadvertently give Senator Elizabeth Warren more power over crypto.
Show highlights:
How an AI experiment unexpectedly led to the creation of GOAT and sparked interest in AI-generated subcultures
How "Terminal of Truth" evolved its own personality, gained attention from Marc Andreessen, and began posting about a new "Goatse gospel" religion
How the spelling mistake sparked skepticism about the AI model
What happened with the $50,000 in BTC that Marc Andreessen gave to Truth Terminal
Whether an AI can have its own wallet and what the implications are
Whether AI memecoins could start surging on other chains
What we can expect in terms of the proliferation of AI memecoins
What the future looks like for the intersection of crypto and AI
Who Kamala Harris is considering for SEC Chair if she wins the U.S. election
Why one Senate race could give Elizabeth Warren more power over crypto
Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com
Thank you to our sponsors!
Polkadot
Mantle's FBTC
Guest: Teng Yan, Founder of Chain of Thought
Teng's article: GOAT: The Gospel of Goatse
Links
Previous coverage of Unchained on GOAT:
GOAT Hits a Record $879 Million Market Cap After Brian Armstrong Offers to Help Truth Terminal
GOAT: How AI Agents Talking Turned Into a $268 Million Memecoin 'Religion'
Infinite backrooms
Andy Ayrey on X: https://x.com/AndyAyrey
Andy Ayrey's (creator of Truth Terminal) research paper on LLMtheism
Truth Terminal's X account
Kaito: GOAT's mindshare
Unchained Podcast is Produced by Laura Shin Media, LLC. Distributed by CoinDesk.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Diaspora.nz
S2 | E13 — Rhys Darby and Rosie Carnahan-Darby on championing Kiwi humour around the world, Tall Poppy syndrome, losing creative jobs to AI, and "going direct" with your fans to survive social media.

Diaspora.nz

Play Episode Listen Later Sep 27, 2024 73:08


Listen/subscribe on Apple Podcasts or Spotify. He's been Murray Hewitt, Psycho Sam, Norman from Yes Man, Guy Mann, Hypno-Potamus, a stand-up comedian, a sit-down band manager, a children's book author, a soldier… and now Binkle-bonk the Tree Goblin in the upcoming "Badjelly the Witch".

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E
452. Consumer Headwinds, Whether AI Revenue Catch up with Infra Spend, and Are Creative “Takeunders” the Solution to Lina Khan's FTC Agenda (Laura Chau)

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E

Play Episode Listen Later Sep 23, 2024 36:51


Laura Chau of Canaan joins Nick to discuss Consumer Headwinds, Whether AI Revenue Catch up with Infra Spend, and Are Creative “Takeunders” the Solution to Lina Khan's FTC Agenda.
In this episode we cover:
Consumer Investing and Churn
Customer Acquisition and Data Challenges
AI Investment Framework and Opportunities
AI Infrastructure and Revenue Gap
Google's Agreement with Character.ai
Non-AI Investment Areas and Founder Preferences
Guest Links:
Laura's LinkedIn
Laura's Twitter/X
Canaan's LinkedIn
Canaan's Website
The hosts of The Full Ratchet are Nick Moran and Nate Pierotti of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area. Want to keep up to date with The Full Ratchet? Follow us on social. You can learn more about New Stack Ventures by visiting our LinkedIn and Twitter. Are you a founder looking for your next investor? Visit our free tool VC-Rank and we'll send a list of potential investors right to your inbox!

The Nonlinear Library
LW - "Can AI Scaling Continue Through 2030?", Epoch AI (yes) by gwern

The Nonlinear Library

Play Episode Listen Later Aug 24, 2024 7:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can AI Scaling Continue Through 2030?", Epoch AI (yes), published by gwern on August 24, 2024 on LessWrong. We investigate the scalability of AI training runs. We identify electric power, chip manufacturing, data and latency as constraints. We conclude that 2e29 FLOP training runs will likely be feasible by 2030. Introduction In recent years, the capabilities of AI models have significantly improved. Our research suggests that this growth in computational resources accounts for a significant portion of AI performance improvements. 1 The consistent and predictable improvements from scaling have led AI labs to aggressively expand the scale of training, with training compute expanding at a rate of approximately 4x per year. To put this 4x annual growth in AI training compute into perspective, it outpaces even some of the fastest technological expansions in recent history. It surpasses the peak growth rates of mobile phone adoption (2x/year, 1980-1987), solar energy capacity installation (1.5x/year, 2001-2010), and human genome sequencing (3.3x/year, 2008-2015). Here, we examine whether it is technically feasible for the current rapid pace of AI training scaling - approximately 4x per year - to continue through 2030. We investigate four key factors that might constrain scaling: power availability, chip manufacturing capacity, data scarcity, and the "latency wall", a fundamental speed limit imposed by unavoidable delays in AI training computations. Our analysis incorporates the expansion of production capabilities, investment, and technological advancements. This includes, among other factors, examining planned growth in advanced chip packaging facilities, construction of additional power plants, and the geographic spread of data centers to leverage multiple power networks. To account for these changes, we incorporate projections from various public sources: semiconductor foundries' planned expansions, electricity providers' capacity growth forecasts, other relevant industry data, and our own research. We find that training runs of 2e29 FLOP will likely be feasible by the end of this decade. In other words, by 2030 it will be very likely possible to train models that exceed GPT-4 in scale to the same degree that GPT-4 exceeds GPT-2 in scale. 2 If pursued, we might see by the end of the decade advances in AI as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023. Whether AI developers will actually pursue this level of scaling depends on their willingness to invest hundreds of billions of dollars in AI expansion over the coming years. While we briefly discuss the economics of AI investment later, a thorough analysis of investment decisions is beyond the scope of this report. For each bottleneck we offer a conservative estimate of the relevant supply and the largest training run they would allow. 3 Throughout our analysis, we assume that training runs could last between two to nine months, reflecting the trend towards longer durations. We also assume that when distributing AI data center power and chips for distributed training, companies will only be able to muster about 10% to 40% of the existing supply. 4 Power constraints.
Plans for data center campuses of 1 to 5 GW by 2030 have already been discussed, which would support training runs ranging from 1e28 to 3e29 FLOP (for reference, GPT-4 was likely around 2e25 FLOP). Geographically distributed training could tap into multiple regions' energy infrastructure to scale further. Given current projections of US data center expansion, a US distributed network could likely accommodate 2 to 45 GW, which assuming sufficient inter-data center bandwidth would support training runs from 2e28 to 2e30 FLOP. Beyond this, an actor willing to...
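The summary above quotes a ~4x/year growth rate in training compute, a ~2e25 FLOP estimate for GPT-4, and a 2e29 FLOP run as feasible by 2030. As a quick sanity check on that arithmetic, here is a minimal Python sketch (not from the report itself); the only added assumption is treating 2023 as the GPT-4 baseline year.

```python
# Minimal sketch of the compute extrapolation described above.
# The ~2e25 FLOP GPT-4 estimate and the ~4x/year growth rate come from
# the summary; using 2023 as the baseline year is an assumption made
# here purely for illustration.

GPT4_FLOP = 2e25        # approximate GPT-4 training compute (from the text)
GROWTH_PER_YEAR = 4.0   # historical growth in frontier training compute
BASELINE_YEAR = 2023    # assumed year of the GPT-4 baseline

def projected_training_flop(year: int) -> float:
    """Largest training run implied by a constant 4x/year trend."""
    return GPT4_FLOP * GROWTH_PER_YEAR ** (year - BASELINE_YEAR)

if __name__ == "__main__":
    for year in range(2024, 2031):
        print(f"{year}: ~{projected_training_flop(year):.1e} FLOP")
    # 2030 comes out around 3e29 FLOP -- the same order of magnitude as the
    # report's 2e29 FLOP feasibility estimate, and roughly 10,000x GPT-4,
    # mirroring the GPT-2 -> GPT-4 scale-up mentioned above.
```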

Motley Fool Money
Can Zoom be More than Zoom?

Motley Fool Money

Play Episode Listen Later Aug 22, 2024 19:51


Zoom is great, but it needs to find something outside video conferencing to get investors and the market excited about where it is going.  (00:21) Jason Moser and Dylan Lewis discuss: - Whether AI and expanded offerings can create a next act and growth pillar for Zoom. - Why Lowe's and Home Depot continue to hold up even as home improvement projects dry up. - The early signs that Target's focus on loyalty and value are getting customers back in the stores just in time for back to school shopping. Companies discussed: ZM, LOW, HD, TGT, WMT Host: Dylan Lewis Guests: Jason Moser Engineers: Dan Boyd Learn more about your ad choices. Visit megaphone.fm/adchoices

Living The Next Chapter: Authors Share Their Journey
E394 - Bill Wittur - Climate Fiction, A New Category for Readers

Living The Next Chapter: Authors Share Their Journey

Play Episode Listen Later Jul 19, 2024 41:29


Episode 394 - Bill Wittur - Climate Fiction, A New Category for Readers
EXTINCTION EVENT is a full-length fiction novel set in the present day that belongs in the speculative fiction / dystopian category.
Story Synopsis
The main character – known only as LP – tries to reinvent himself by going back to school. With some help from his recording instructor and his friends, he discovers a mystery surrounding different 'glitches' heard with analog versus digital recordings of popular songs. He investigates further to discover an important figure from his past who built an Artificial Intelligence (AI) platform called GAIA which is being used to spread false stories to the public but to also change the fate of the planet and its inhabitants.
Mythology 'Easter Eggs'
This story is loaded with Easter Eggs related to different mythological sources and resources. Examples include:
Sylvie Hunter, a key character that embodies traits related to Artemis
John Atman, the all-knowing sage
The Muses
Aesop's Fables and other animal tales
Prometheus and the Creation Myth
And much, much more
Climate Fiction: A New Category for Readers
This is a climate fiction, or 'cli-fi' story. As events transpire, the reader is brought into our world in environmental crises. There are several overlapping themes that are explored in Extinction Event, including:
AI electricity and water use and how it will soon strangle all of us given the current increases in demand for different AI tools.
The 'sixth extinction' that many scientists anticipate will start to close in on us, as all species of creatures on Earth face extinction because of human abuse of the planet.
Different economic and political arguments surrounding resource-based capitalism and why our pace of consumption is unsustainable.
Exploring the beauty of the different animal kingdoms that surround us and why they're worth preserving.
Reviews
5/5: Whether "AI phobic" or not … this book will make you think
Bill Wittur's first novel is a timely, creative and fast paced read. Imagine a world where an AI platform is asked to save the planet … and does (unfortunately for some of those who trained it). Drawing inspiration from current events and his own varied work experiences, Wittur plays with the "what if" scenarios and creates an AI heroine, GAIA, her evil taskmasters and the brilliant, committed engineer who unleashes a whole new kind of justice on earth and its inhabitants. A fun read!
5/5: Highly recommended!!
Was an excellent read! Great story and character development. It was a hard book to put down. On a side note, I also dug all the music references throughout. Do yourself a favour and grab a copy.
5/5: A fabulous read from cover to cover
https://billwittur.com/
https://amzn.to/45xFrBe
Support the Show.
https://livingthenextchapter.com/podcast produced by: https://truemediasolutions.ca/

Keen On Democracy
Episode 2031: Laurent Dubreuil's creative answer to whether AI can think creatively

Keen On Democracy

Play Episode Listen Later Jul 16, 2024 48:09


Trust a French literary theorist to think creatively about whether AI can think creatively. Laurent Dubreuil is a professor of French literature at Cornell and the author of the intriguing Harper's piece, Metal Machine Music, which asks both if AI and we humans can think creatively. Using ChatGPT, Dubreuil ran a test at Cornell asking a bot and humans to complete poems written in English and then invited people to guess which were authored by AI and which by humans. The results of this creative literary experiment were surprising, particularly in terms of the common assumption that we humans are more creative than machines.
Laurent Dubreuil is Professor of French, Francophone and Comparative Literature at Cornell University. In his research, Laurent Dubreuil aims to explore the powers of literary and artistic thinking at the interface of social thought, the humanities and the sciences. Dubreuil's scholarship is broadly comparative and makes use of his reading knowledge in some ten languages. Professor Dubreuil is the founding director of the Cornell Humanities Lab, a place for reflexive dialogues between practitioners from the sciences and the discursive disciplines who wish to eschew reductionism. At the École normale supérieure, Paris, and in other French universities, Prof. Dubreuil received training in most fields pertaining to the humanities, with a particular emphasis on French, Francophone and Comparative Literature (doctorate: 2001), Philosophy (doctorate: 2002), and Classical Philology. His professors and advisors included Jacques Derrida, Hélène Cixous, Umberto Eco and Pierre Judet de La Combe. In his years as a Mellon New Directions Fellow, Dubreuil acquired further competencies in Cognitive Science. Dubreuil is the author of thirteen books. Among his scholarly essays, five are available in English, most recently Poetry and Mind (Fordham UP: 2018) and Dialogues on the Human Ape (U of Minnesota P: 2019: co-authored with primatologist Sue Savage-Rumbaugh). Five other volumes have been released in French, including (in 2019) Baudelaire au gouffre de la modernité (Hermann), La dictature des identités (Gallimard). Dr. Dubreuil also authored three "creative" literary essays in French. In 2016, Anthony Mangeon edited L'empire de la littérature, an anthology of previously unreleased texts on and by Dubreuil.
Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

80,000 Hours Podcast with Rob Wiblin
#191 (Part 2) – Carl Shulman on government and society after AGI

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 5, 2024 140:32


This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!
If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?
It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.
Links to learn more, highlights, and full transcript.
As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.
If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.
Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.
To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.
But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest.
In the past we've usually found it easier to predict how hard technologies like planes or factories will change than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.
Carl Shulman and host Rob Wiblin discuss the above, as well as:
The risk of society using AI to lock in its values.
The difficulty of preventing coups once AI is key to the military and police.
What international treaties we need to make this go well.
How to make AI superhuman at forecasting the future.
Whether AI will be able to help us with intractable philosophical questions.
Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
Opportunities for listeners to contribute to making the future go well.
Chapters:
Cold open (00:00:00)
Rob's intro (00:01:16)
The interview begins (00:03:24)
COVID-19 concrete example (00:11:18)
Sceptical arguments against the effect of AI advisors (00:24:16)
Value lock-in (00:33:59)
How democracies avoid coups (00:48:08)
Where AI could most easily help (01:00:25)
AI forecasting (01:04:30)
Application to the most challenging topics (01:24:03)
How to make it happen (01:37:50)
International negotiations and coordination and auditing (01:43:54)
Opportunities for listeners (02:00:09)
Why Carl doesn't support enforced pauses on AI research (02:03:58)
How Carl is feeling about the future (02:15:47)
Rob's outro (02:17:37)
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Author U Your Guide to Book Publishing
Tips for Engaging AI in Author Publishing and Marketing Forward 06-20-2024

Author U Your Guide to Book Publishing

Play Episode Listen Later Jun 21, 2024 58:07


In this week's AuthorU-Your Guide to Book Publishing, Host Dr. Judith Briles invites Kathy Meis, Founder and CEO of Bublish, to join her for Bublish updates and the integration of AI. Together, they will reveal strategies and tips to further advance marketing and publishing for the self and independent author. Your takeaways include:
-Whether AI is good or bad for publishing and authors.
-Best practices for authors in using AI.
-What Bublish has added to its community to aid authors.
-What authors need to consider before using AI and platforms.
-Various strengths and weaknesses of AI tools.
-Rules to know before authors load original info and data into any AI system.
-Discovering AI narration and AI text-to-image.
And, of course, much more. Tune in for lots of ideas and how-to tactics via the AuthorU-Your Guide to Book Publishing podcast, ranked in the top 10 of book marketing podcasts. Since its inception, over 18,000,000 listeners have downloaded various shows for practical publishing and book marketing guidance. Join me and become a regular subscriber.

Inner Cosmos with David Eagleman
Ep58 "What do brains teach us about whether AI is creative?"

Inner Cosmos with David Eagleman

Play Episode Listen Later May 13, 2024 42:44 Transcription Available


From a neuroscience point of view, what is creativity? How does it shine light on the current lawsuits over large language models and whether they produce anything fundamentally new... or are simply remixing the old? How do the arts expose something important about what's happening in the human brain? What do we know about the cultural evolution of ideas? And what does any of this have to do with how cell phones got their names, and why koala bears don't write novels? Join Eagleman and his guest, composer Anthony Brandt, as they uncover the surprises about creativity.

80k After Hours
Highlights: #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

80k After Hours

Play Episode Listen Later May 2, 2024 22:36


This is a selection of highlights from episode #185 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals.
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

The Nonlinear Library
EA - #185 - The 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals (Lewis Bollard on the 80,000 Hours Podcast) by 80000 Hours

The Nonlinear Library

Play Episode Listen Later Apr 30, 2024 21:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #185 - The 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals (Lewis Bollard on the 80,000 Hours Podcast), published by 80000 Hours on April 30, 2024 on The Effective Altruism Forum. We just published an interview: Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals . Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts. Episode summary The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, "Actually, we can push them further in these ways and these ways, and they still stay alive. And we've modelled out every possibility and we've found that it works." I think another possibility, which I don't understand as well, is that AI could lock in current moral values. And I think in particular there's a risk that if AI is learning from what we do as humans today, the lesson it's going to learn is that it's OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there's a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue. Lewis Bollard In today's episode, host Luisa Rodriguez speaks to Lewis Bollard - director of the Farm Animal Welfare programme at Open Philanthropy - about the promising progress and future interventions to end the worst factory farming practices still around today. They cover: The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention. Work to improve farmed animal welfare that Open Philanthropy is excited about funding. The amazing recent progress made in farm animal welfare - including regulatory attention in the EU and a big win at the US Supreme Court - and the work that still needs to be done. The occasional tension between ending factory farming and curbing climate change. How AI could transform factory farming for better or worse - and Lewis's fears that the technology will just help us maximise cruelty in the name of profit. How Lewis has updated his opinions or grantmaking as a result of new research on the "moral weights" of different species. Lewis's personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering. How listeners can get involved in the growing movement to end factory farming - from career and volunteer opportunities to impactful donations. And much more. Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Additional content editing: Katy Moore and Luisa Rodriguez Transcriptions: Katy Moore Highlights Factory farming is philosophically indefensible Lewis Bollard: Honestly, I hear surprisingly few philosophical objections. I remember when I first learned about factory farming, and I was considering whether this was an issue to work on, I went out to try and find the best objections I could - because I was like, it can't possibly just be as straightforward as this; it can't possibly just be the case that we're torturing animals just to save a few cents. 
And the only book I was able to find at the time that was opposed to animal welfare and animal rights was a book by the late British philosopher Roger Scruton. He wrote a book called Animal Rights and Wrongs. And I was really excited. I was like, "Cool, we're going to get this great philosophical defence of factory farming here." In the preface, the first thing he says is, "Obviously, I'm not going to defend factory farming. That's totally indefensible. I'm going to defend why you should st...

80,000 Hours Podcast with Rob Wiblin
#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 18, 2024 153:12


"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we've modelled out every possibility and we've found that it works.' I think another possibility, which I don't understand as well, is that AI could lock in current moral values. And I think in particular there's a risk that if AI is learning from what we do as humans today, the lesson it's going to learn is that it's OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there's a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis BollardIn today's episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.Links to learn more, highlights, and full transcript.They cover:The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.Work to improve farmed animal welfare that Open Philanthropy is excited about funding.The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.The occasional tension between ending factory farming and curbing climate changeHow AI could transform factory farming for better or worse — and Lewis's fears that the technology will just help us maximise cruelty in the name of profit.How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.Lewis's personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.And much more.Chapters:Common objections to ending factory farming (00:13:21)Potential solutions (00:30:55)Cage-free reforms (00:34:25)Broiler chicken welfare (00:46:48)Do companies follow through on these commitments? (01:00:21)Fish welfare (01:05:02)Alternatives to animal proteins (01:16:36)Farm animal welfare in Asia (01:26:00)Farm animal welfare in Europe (01:30:45)Animal welfare science (01:42:09)Approaches Lewis is less excited about (01:52:10)Will we end factory farming in our lifetimes? (01:56:36)Effect of AI (01:57:59)Recent big wins for farm animals (02:07:38)How animal advocacy has changed since Lewis first got involved (02:15:57)Response to the Moral Weight Project (02:19:52)How to help (02:28:14)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

Clownfish TV: Audio Edition
Disney Can Be FIXED with A.I. Says Investors.

Clownfish TV: Audio Edition

Play Episode Listen Later Feb 27, 2024 26:29


Disney needs AI to come up with creative ideas again and to manage their theme parks, according to one of the two activist investor groups involved in the upcoming proxy battle. Well, THAT'S pretty embarrassing to basically say that AI can Disney better than Disney. Then we talk about the OTHER group, that includes Nelson Peltz and Jay Rasulo, and how their plan seems to just be "fix what's broken and make money." ➡️ Tip Jar and Fan Support: http://ClownfishSupport.com ➡️ Official Merch Store: http://ShopClownfish.com ➡️ Official Website: http://ClownfishTV.com ➡️ Audio Edition: https://open.spotify.com/show/6qJc5C6OkQkaZnGCeuVOD1 ➡️ Gaming News: https://open.spotify.com/show/0A7VIqE3r5MQkFgL9nifNc Additional Context: In the swirling vortex of pop culture and corporate drama, Disney finds itself at the center of an intriguing narrative that reads like the plot of a sci-fi blockbuster. One of the two activist investor groups circling the House of Mouse like sharks with a taste for profit has made a bold, if somewhat cheeky, suggestion: Disney, the storied empire of imagination and dreams, might need to hand over the reins to artificial intelligence to spark its creative fires once again and to streamline the operations of its theme parks. Yes, you heard that right. AI might just become the next big Imagineer, or should we say, "AI-magineer"? This provocative proposal is the kind of headline that makes you do a double-take, followed by a hearty chuckle. It's as if someone said the secret ingredient to Coca-Cola should be Pepsi. It's an admission, wrapped in a critique, served with a side of irony, that perhaps the current human minds steering the Disney ship need a little help from their silicon counterparts. The notion that a machine could out-Disney Disney in creativity and efficiency is a deliciously humbling thought for a company that's built on the foundation of human imagination and storytelling. On the flip side of this coin, we have another group of activist investors, including names like Nelson Peltz and Jay Rasulo, whose strategy seems less like a page from a science fiction novel and more like a stern financial advisor's tough love. Their approach is decidedly more grounded and, dare we say, traditional: identify the problems, fix them, and then, as if by magic, the money will flow. It's the corporate equivalent of "clean your room, and you'll find your lost stuff." This group's plan doesn't involve futuristic AI overlords taking the helm but rather a back-to-basics approach, focusing on operational efficiency and profitability. While the AI-driven proposal might tickle your fancy or provoke a raised eyebrow, it underscores a critical conversation about innovation, technology, and the future of entertainment. In an age where AI is writing novels, composing symphonies, and even generating art, the question of whether it could contribute to, or enhance, Disney's creative process is not entirely far-fetched. However, it does beg the question: Can true creativity, the kind that resonates with human emotions and experiences, be artificially manufactured? On the other side, the Peltz-Rasulo plan, with its feet firmly planted on the ground, reminds us that sometimes, the best way forward is to fix the leaks in the boat rather than dreaming about flying cars. It's a less glamorous, but arguably more pragmatic, approach to corporate rejuvenation. 
As this drama unfolds, one thing is clear: the future of Disney, a beloved icon of entertainment and imagination, is at a crossroads between tradition and innovation. Whether AI will play a role in its next chapter or if a back-to-basics strategy will prevail is a story worth watching. One thing's for sure, though; the Happiest Place on Earth is currently the most interesting place in the corporate world. About Us: Clownfish TV is an independent, opinionated news and commentary channel that covers Entertainment and Tech from a consumer's point of view. We talk about Gaming,

Net Learnings
5 “Must-Knows” for a Successful Career in Equity Research, with Duncan McKeen

Net Learnings

Play Episode Listen Later Dec 26, 2023 42:07


Join Kyle for his chat with Duncan McKeen, EVP Financial Modeling at CFI. Duncan brings 10 years of experience in equity research and another 10 in corporate training.
In this episode they discuss:
Why are models so important anyhow?
Whether AI will make humans obsolete in financial modeling.
How to be a more effective modeler.
What career progression looks like in the research game.
A detailed unpacking of equity research, including Duncan's 5 "must know" pointers.
Craft beer - buy, sell, or hold?
And so much more!
With more model talk than fashion week in Milan, this episode is a must listen for anyone that wants the inside scoop on a career in equity research.

80,000 Hours Podcast with Rob Wiblin
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Dec 22, 2023 226:52


OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely?
That's the central theme of today's episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.
Links to learn more, summary, and full transcript.
Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI's "red team" that probed GPT-4 to find ways it could be abused, long before it was public.
Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.
Nathan's view: it's complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.
When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a "Safety" version of GPT-4 that was almost the same, he approached a member of OpenAI's board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.
In today's episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board's reservations about Sam Altman, which to this day have not been laid out in any detail.
But while he feared throughout 2022 that OpenAI and Sam Altman didn't understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.
Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they're playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI's decision to release GPT-4 when it did was for the best.
On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They've also invested major resources into new 'Superalignment' and 'Preparedness' teams, while avoiding using competition with China as an excuse for recklessness.
At the same time, it's very hard to know whether it's all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity.
Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we're confident we want, which we can prove will remain safe as its capabilities get ever greater.
By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI's research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they're also better placed than maybe anyone in the world to assess if the company's strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.
In today's extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:
Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan's interactions with the board when he raised concerns from his red teaming efforts.
Which AI applications we should be urgently rolling out, with less worry about safety.
Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
Whether AI capabilities are advancing faster than safety efforts and controls.
The costs and benefits of releasing powerful models like GPT-4.
Nathan's view on the game theory of AI arms races and China.
Whether it's worth taking some risk with AI for huge potential upside.
The need for more "AI scouts" to understand and communicate AI progress.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

The JoyPowered Workspace Podcast
Writing with ChatGPT

The JoyPowered Workspace Podcast

Play Episode Listen Later Nov 20, 2023 31:56


In this episode, JoDee and Susan discuss using AI as a writing tool with Linda Comerford. Topics include:
Whether AI will ever replace the need for us to participate in the process
Pros and cons of ChatGPT
Writing tips to help ChatGPT work to your advantage
In this episode's listener question, we're asked whether an org can refuse to rehire an employee who is terminated for taking unapproved leave or not providing medical documentation. In the news, remote workers are taking less vacation than their in-office counterparts.
Full show notes and links are available here: https://getjoypowered.com/show-notes-episode-182-writing-with-chatgpt/
A transcript of the episode can be found here: https://getjoypowered.com/transcript-episode-182-writing-with-chatgpt/
To get 0.50 hour of SHRM recertification credit, fill out the evaluation here: https://getjoypowered.com/shrm/
Connect with us:
@JoyPowered on Instagram: https://instagram.com/joypowered
@JoyPowered on Twitter: https://twitter.com/joypowered
@JoyPowered on Facebook: https://facebook.com/joypowered
@JoyPowered on LinkedIn: https://linkedin.com/company/joypowered
Sign up for our email newsletter: https://getjoypowered.com/newsletter/

80k After Hours
Highlights: #161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

80k After Hours

Play Episode Listen Later Oct 19, 2023 31:39


This is a selection of highlights from episode #161 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite.
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour and Milo McGuire

80,000 Hours Podcast with Rob Wiblin
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 23, 2023 210:32


"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980.So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael WebbIn today's episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.Links to learn more, summary and full transcript.They cover:The jobs most and least exposed to AIWhether we'll we see mass unemployment in the short term How long it took other technologies like electricity and computers to have economy-wide effectsWhether AI will increase or decrease inequalityWhether AI will lead to explosive economic growthWhat we can we learn from history, and reasons to think this time is differentCareer advice for a world of LLMsWhy Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involvedMichael's take as a musician on AI-generated musicAnd plenty moreIf you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

Amanpour
Focus on AI

Amanpour

Play Episode Listen Later Jul 27, 2023 54:54


Whether AI "makes our societies more or less equitable, unlocks breakthroughs or becomes a tool of authoritarians — is up to us." That is the warning, and the call to arms, from the Biden administration this week. In just a few short months, the power and the peril of AI have become the focus of huge public debate. And the conversation could not be more relevant -- as the atomic bomb biopic "Oppenheimer" reminds us all of the danger of unleashing unbelievably powerful technology on the world. To assess all this, Christiane hosts a panel of leaders in the field of artificial technology.  Also on today's show: In a world where it's increasingly hard to discern fact from fiction, Hari Sreenivasan and Christiane Amanpour discuss the ethical dilemmas of A.I., and why it's more important than ever to keep real journalists in the game.    To learn more about how CNN protects listener privacy, visit cnn.com/privacy

Big Technology Podcast
Meta's New AI Model, AppleGPT's Potential, Is ChatGPT Getting Dumber — With Aaron Levie

Big Technology Podcast

Play Episode Listen Later Jul 21, 2023 58:10


Aaron Levie is the CEO of Box. He joins us for a special Friday episode to break down a major week of AI news. We cover: 1) Meta's incentives to open source its Llama 2 AI model. 2) Whether people actually want to interact with chatbots, no matter how well they perform. 3) Why enterprise might be the clearest use case for AI. 4) Why Apple is developing LLMs and where the project might go. 5) Whether AI companies can actually build moats around their products. 6) Is ChatGPT getting dumber? 7) Levie's view on AI and jobs. 8) AI's influence on creativity. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

We Study Billionaires - The Investor’s Podcast Network
TIP560: Richer, Wiser, Happier Q2 2023 w/ Stig Brodersen & William Green

We Study Billionaires - The Investor’s Podcast Network

Play Episode Listen Later Jun 18, 2023 105:57


On today's show, Stig Brodersen talks with co-host William Green, the author of "Richer, Wiser, Happier." With a strong focus on books, they discuss what has made them Richer, Wiser, or Happier in the past quarter.
IN THIS EPISODE YOU'LL LEARN:
00:00 - Intro
01:27 - How to curate a book list
12:53 - How can you find books the same way as you pick stocks
23:32 - Which books have made us Wiser, Richer, and Happier
30:14 - How the master appears when the student is ready
44:13 - Whether AI changes how books are written
1:25:46 - How to encourage your peers to read
1:42:16 - Why you should give books away as your hobby
1:45:03 - Which two books have William recently read that he would recommend
Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.
BOOKS AND RESOURCES
Listen to Stig Brodersen and William Green's episode on being Richer, Wiser, and Happier, Q1 2023 or watch the video.
Listen to Stig Brodersen and William Green's episode on Money and Happiness or watch the video.
Tune in to William Green's episode with Mohnish Pabrai on Playing to Win or watch the video.
Tune in to William Green's episode with Jason Karp on Wealth and Health or watch the video.
Listen to Clay Finck's episode with Scott Patterson about the book Chaos Kings or watch the video.
William Green's book Richer, Wiser, Happier – read reviews of this book.
William Green's book, The Great Minds of Investing – read reviews of this book.
Scott Patterson's book, Chaos Kings – read reviews of this book.
Peter Matthiessen's book, Snow Leopard – read reviews of this book.
Benjamin Labatut's book, When We Cease to Understand the World – read reviews of this book.
Jared Diamond's book, Guns, Germs, and Steel – read reviews of this book.
Yuval Harari's book, Sapiens – read reviews of this book.
Michael Greger's book, How Not to Die – read reviews of this book.
Mark Hyman's book, Forever Young – read reviews of this book.
Steven Kotler's book, The Art of the Impossible – read reviews of this book.
Dean and Anne Ornish's book, Undo It! – read reviews of this book.
Robert Pirsig's book, Zen and the Art of Motorcycle Maintenance – read reviews of this book.
Robert Pirsig's book, On Quality – read reviews of this book.
Alice Schroeder's book, The Snowball – read reviews of this book.
Warren Buffett's book, The Essays of Warren Buffett – read reviews of this book.
Ray Dalio's book, The Changing World Order – read reviews of this book.
NEW TO THE SHOW?
Check out our We Study Billionaires Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets.
P.S. The Investor's Podcast Network is excited to launch a subreddit devoted to our fans in discussing financial markets, stock picks, questions for our hosts, and much more! Join our subreddit r/TheInvestorsPodcast today!
SPONSORS
Invest in Bitcoin with confidence on River. It's the most secure way to buy Bitcoin with 100% full reserve custody and zero fees on recurring orders.
Easily diversify beyond stocks and bonds, and build wealth through streamlined CRE investing with EquityMultiple.
Join over 5k investors in the data security revolution with Atakama.
Make connections, gain knowledge, and uplift your governance CV by becoming a member of the AICD today.
Have the visibility and control you need to make better decisions faster with NetSuite's cloud financial system. Plus, take advantage of their unprecedented financing offer today - defer payments of a full NetSuite implementation. That's no payment and no interest for six months!
Enjoy flexibility and support with free cancellation, payment options, and 24/7 service when booking travel experiences with Viator. Download the Viator app NOW and use code VIATOR10 for 10% off your first booking.
Send, spend, and receive money around the world easily with Wise.
Having physical gold can help if you have an IRA or 401(k)! Call Augusta Precious Metals today to get their free "Ultimate Guide to Gold IRAs" at 855-44-GOLD-IRA.
Choose Toyota for your next vehicle - SUVs that are known for their reliability and longevity, making them a great investment. Plus, Toyotas now have more advanced technology than ever before, maximizing that investment with a comfortable and connected drive.
Support our free podcast by supporting our sponsors.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Apple Podcasts! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Srsly Wrong
287. Artificial Intelligence Pt. 1

Srsly Wrong

Play Episode Listen Later Jun 14, 2023 119:26


The Wrong Boys are talking AI, ALI, AGI, and ASI: ChatGPT, Midjourney, For-Profit Hype Cycles, Self Driving Cars, Roko's Basilisk, Whether AI can save us, and Whether AI might doom us. Whether...

The Journal.
Elon Musk on 2024 Politics, Succession Plans and Whether AI Will Annihilate Humanity

The Journal.

Play Episode Listen Later May 24, 2023 20:40


In an interview at WSJ's CEO Council Summit with editor Thorold Barker, Elon Musk talked about whether he regrets buying Twitter, who might eventually take the helm of the three companies he runs and how AI will change our future. Further Reading: - Ron DeSantis to Launch 2024 Presidential Run in Twitter Talk With Elon Musk  - Elon Musk Wants to Challenge Google and Microsoft in AI  - The Elon Musk Doctrine: How the Billionaire Navigates the World Stage  Further Listening: - Twitter's New CEO: The Velvet Hammer  Learn more about your ad choices. Visit megaphone.fm/adchoices

Science Salon
338. AI SciFi — Physicist, Science Fiction Author, and AI Expert David Brin on ChatGPT and Whether AI Poses an Existential Threat

Science Salon

Play Episode Listen Later Apr 8, 2023 96:48


Shermer and Brin discuss: AI and AGI • are they existential threats? • the alignment problem • Large Language Models • ChatGPT, GPT-4, GPT-5, and beyond • the Future of Life Institute's Open Letter calling for a pause on “giant AI experiments” • Asilomar AI principles • Eliezer Yudkowsky's Time OpEd: “Shut it All Down” • laws and ethics. David Brin earned a Bachelor's degree in astronomy from Caltech, a Master's in electrical engineering from UC San Diego, and a PhD in astronomy from UC San Diego. From 1983 to 1986 he was a postdoc research fellow at the California Space Institute at UC San Diego, where he also helped establish the Arthur C. Clarke Center for Human Imagination. An advisor to NASA's Innovative & Advanced Concepts program, David appears frequently on shows such as Nova, The Universe and Life After People, speaking about science and future trends. His first non-fiction book, The Transparent Society: Will Technology Make Us Choose Between Freedom and Privacy?, won the Freedom of Speech Award of the American Library Association. His second nonfiction book is Vivid Tomorrows: Science Fiction and Hollywood. He is best known for his science fiction, for which he has won numerous major awards, including the Hugo, Locus, Campbell, and Nebula Awards. His novel The Postman was adapted into a feature film starring Kevin Costner. He even has a minor planet named after him: 5748 Davebrin. He has written a number of articles on Artificial Intelligence, most recently in response to the call for a moratorium on AI research by many leading AI researchers and scientists, which he titled “The Only Way Out of the AI Dilemma.” His website is davidbrin.com.