Podcasts about Daniel So

  • 26 PODCASTS
  • 115 EPISODES
  • 20m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Mar 6, 2026 LATEST

POPULARITY

[Popularity chart, 2019–2026]


Best podcasts about Daniel So

Latest podcast episodes about Daniel So

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (23) Daniel So / 1 Chronicles Chapter 23 (2026-3-5)
Mar 6, 2026 · 6:58

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (23) Daniel So / 1 Chronicles Chapter 23 (2026-3-5)
Mar 5, 2026 · 7:18

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (22) Daniel So / 1 Chronicles Chapter 22 (2026-3-4)
Mar 4, 2026 · 6:32

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (21) Daniel So / 1 Chronicles Chapter 21 (2026-3-3)
Mar 3, 2026 · 9:31

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (20) Daniel So / 1 Chronicles Chapter 20 (2026-2-6)
Feb 6, 2026 · 7:21

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (19) Daniel So / 1 Chronicles Chapter 19 (2026-2-5)
Feb 5, 2026 · 7:34

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (18) Daniel So / 1 Chronicles Chapter 18 (2026-2-4)
Feb 4, 2026 · 9:04

벤쿠버한소망교회팟케스트(KC) - One Hope Church
KC [Morning By Morning] 1 Chronicles (17) Daniel So / 1 Chronicles Chapter 17 (2026-2-3)
Feb 3, 2026 · 6:41

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (16) Daniel So / 1 Chronicles Chapter 16 (2026-1-23)
Jan 23, 2026 · 12:02

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (15) Daniel So / 1 Chronicles Chapter 15 (2026-1-22)
Jan 22, 2026 · 10:40

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (14) Daniel So / 1 Chronicles Chapter 14 (2026-1-21)
Jan 21, 2026 · 9:32

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (13) Daniel So / 1 Chronicles Chapter 13 (2026-1-20)
Jan 20, 2026 · 7:44

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (12) Daniel So / 1 Chronicles Chapter 12 (2025-12-19)
Dec 19, 2025 · 10:25

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (11) Daniel So / 1 Chronicles Chapter 11 (2025-12-18)
Dec 18, 2025 · 8:28

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (9) Daniel So / 1 Chronicles Chapter 9 (2025-12-16)
Dec 17, 2025 · 7:07

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (9) Daniel So / 1 Chronicles Chapter 9 (2025-12-16)
Dec 16, 2025 · 8:43
Martial Arts Media Business Podcast
164 - How 1 Pricing Shift Let Daniel Quit His Job & Build a Profitable 500-Student Martial Arts School
Dec 15, 2025 · 24:18

Lifelong martial artist Daniel Jancek shares how fixing pricing and surrounding himself with growth-driven school owners helped him step away from his job and go all-in on his martial arts business.

IN THIS EPISODE:
  • How a simple pricing mistake kept a 500-student school stuck.
  • Why shifting from pay-as-you-go to real memberships created instant stability.
  • The fear every school owner faces when raising tuition, and what actually happened.
  • How to identify the families who truly value your program.
  • Why premium pricing increases commitment and reduces afterthought attendance.
  • The power of making decisions inside a room of growth-driven school owners.

TRANSCRIPTION

George: Hey, it's George Fourie. Welcome to the Martial Arts Media™ Business Podcast. Today, I am with Daniel Jancek. How are you, Daniel?

Daniel: Good, George. How are you?

George: Good. Did I say Jancek? Did I say it in the proper accent? All right, cool. So I'm going to give a brief intro, but I'm going to let Daniel tell the story. This is sort of a cool part where I get to interview people that are in the Partners group. We tell a bit of a case story, but I also get to chat about things that probably don't just come up in conversation. So it's a great opportunity for me to get to know Daniel better and just talk about his journey in martial arts. He's had great success over the last year, especially going full-time, so we'll dive a bit deeper into that. But yeah, welcome to the call, Daniel. I appreciate it.

Daniel: Thanks for having me.

George: Cool. So I guess just start right at the beginning, for those of you that don't know who you are. Just give us a bit of background on you, your journey, martial arts, and where you got started.

Daniel: Yeah, love to. So I was a bit of an energetic kid, I suppose you could say. When I was really young, I had a lot of energy that I needed to release, and I was a little bit aggressive at times. I was not the most well-behaved kid. So I got into football at a really young age, when I was four years old. And when I was about five and a half, my mom and dad thought it'd be really good to get me into martial arts once a week. I brought a newsletter home from school, a pamphlet drop that I got in school, handed that to Mom, and we went for our first lesson. And yeah, I really never looked back. I'm 35 now, going on 36, so it's been a good 30 years that I've been doing martial arts nonstop. I started as a five-and-a-half-year-old kid that thought it looked pretty cool. I liked the logo; that's what got my attention. It was a boxing kangaroo. The rest is history.
벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (7) Daniel So / 1 Chronicles Chapter 7 (2025-12-11)
Dec 12, 2025 · 13:03

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (7) Daniel So / 1 Chronicles Chapter 7 (2025-12-11)
Dec 11, 2025 · 9:23

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (6) Daniel So / 1 Chronicles Chapter 6 (2025-12-10)
Dec 10, 2025 · 12:45

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (5) Daniel So / 1 Chronicles Chapter 5 (2025-12-09)
Dec 9, 2025 · 10:37

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (4) Daniel So / 1 Chronicles Chapter 4 (2025-12-05)
Dec 5, 2025 · 12:29

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (3) Daniel So / 1 Chronicles Chapter 3 (2025-12-04)
Dec 4, 2025 · 14:06

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (2) Daniel So / 1 Chronicles Chapter 2 (2025-12-03)
Dec 3, 2025 · 11:50

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Chronicles (1) Daniel So / 1 Chronicles Chapter 1 (2025-12-02)
Dec 2, 2025 · 16:17
Cafe AZS
#107 CAFE AZS - DANIEL SOŁTYSIAK
Oct 27, 2025 · 19:37

Welcome to the 107th episode of CAFE AZS! Our guest this time is Daniel Sołtysiak, one of the most talented Polish track-and-field athletes of the young generation. Bartek Wasilewski invites you to listen!

Cafe AZS
#101 CAFE AZS - MAKSYMILIAN SZWED
Aug 2, 2025 · 18:18

At this year's Summer Universiade in Germany, Maksymilian Szwed won a gold medal in the men's 4x400 m relay. The Polish team of Daniel Sołtysiak, Marcin Karolewski, Wiktor Wróbel, and Maksymilian Szwed won the final with a time of 3:03.64, beating, among others, the teams from the USA and Turkey. That is not Maksymilian Szwed's only recent success. This conversation was recorded before his departure for the Universiade. Bartek Wasilewski invites you to listen!
벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (12) Daniel So / 2 Timothy 4:9-22 (2025-05-16)
May 16, 2025 · 14:28

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (11) Daniel So / 2 Timothy 4:1-8 (2025-05-15)
May 15, 2025 · 14:56

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (10) Daniel So / 2 Timothy 3:14-17 (2025-05-14)
May 14, 2025 · 6:53

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (9) Daniel So / 2 Timothy 2:22-26 (2025-05-13)
May 13, 2025 · 13:24

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (8) Daniel So / 2 Timothy 2:20-21 (2025-05-09)
May 9, 2025 · 13:23

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (7) Daniel So / 2 Timothy 2:14-19 (2025-05-08)
May 8, 2025 · 12:34

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (6) Daniel So / 2 Timothy 2:6 (2025-05-07)
May 7, 2025 · 16:15

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (5) Daniel So / 2 Timothy 2:5 (2025-05-06)
May 6, 2025 · 12:31

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (4) Daniel So / 2 Timothy 2:3-4 (2025-05-02)
May 2, 2025 · 10:22

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (3) Daniel So / 2 Timothy 2:1-2 (2025-05-01)
May 1, 2025 · 14:24

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (2) Daniel So / 2 Timothy 1:8-18 (2025-04-30)
Apr 30, 2025 · 12:38

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 2 Timothy (1) Daniel So / 2 Timothy 1:1-7 (2025-04-29)
Apr 29, 2025 · 14:52

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (8) Daniel So / 1 Timothy Chapter 6 (2025-04-11)
Apr 11, 2025 · 14:06

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (7) Daniel So / 1 Timothy Chapter 5 (2025-04-10)
Apr 10, 2025 · 10:24

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (6) Daniel So / 1 Timothy Chapter 4 (2025-04-09)
Apr 9, 2025 · 14:22

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (5) Daniel So / 1 Timothy Chapter 3 (2025-04-08)
Apr 8, 2025 · 13:59

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (4) Daniel So / 1 Timothy Chapter 2 (2025-04-04)
Apr 4, 2025 · 10:51

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (3) Daniel So / 1 Timothy 1:12-20 (2025-04-03)
Apr 3, 2025 · 10:24

벤쿠버한소망교회팟케스트(KC) - One Hope Church
EC [Morning By Morning] 1 Timothy (2) Daniel So / 1 Timothy 1:3-11 (2025-04-02)
Apr 2, 2025 · 11:53

벤쿠버한소망교회팟케스트(KC) - One Hope Church
KC [Morning By Morning] 1 Timothy (1) Daniel So / 1 Timothy 1:1-2 (2025-04-01)
Apr 1, 2025 · 18:08
Lutheran Answers
Christian Finance, with Daniel Catone
Jan 30, 2025 · 79:36

Catholic financial advisor Daniel Catone joins Lutheran Answers to explore how Christians can steward their finances in ways that honor their faith. From ethical investing to the virtues needed for financial success, this episode offers practical and theological wisdom for navigating the world of money.

In this episode of Lutheran Answers, Daniel Catone, a Catholic financial advisor and founder of Arimathea Investing, discusses the intersection of Christian values and financial stewardship. The conversation explores how Christians can align their investments with their faith, focusing on avoiding support for industries that conflict with Christian ethics, such as abortion, pornography, and anti-family policies. Daniel shares his expertise in helping religious organizations and individuals manage their assets conscientiously, offering insights into Catholic investment guidelines and the moral implications of financial decisions.

The discussion also covers broader financial topics, including the pitfalls of speculative investments like cryptocurrency, the behavioral aspect of financial success, and critiques of mainstream financial advice figures like Dave Ramsey. Daniel emphasizes the importance of virtues like prudence and courage in navigating financial decisions and stresses that all wealth ultimately belongs to God and should be stewarded accordingly.

Things We Discussed:
  • Follow Daniel on X
  • Arimathea Investing
  • The Psychology of Money
  • Fooled by Randomness
  • RCC 2021 Guidelines to Ethical Investing

Parting Thought: Every dollar we manage is ultimately God's treasure, entrusted to us for His purposes. As Christians, our financial decisions should reflect our faith, guided by virtues like courage and prudence, ensuring that we honor God not just with our words but also with our wallets.

Transcript

Remy: Should be it. Yeah. This is live. So anyway, the way my apartment is set up, behind me over here is the kitchen, and then that's my front door.

Daniel: Oh, funny.

Remy: So this is the only place I can put my computer, because it's like an open floor plan.

Daniel: Yeah.

Remy: So I just have the old shower curtain. It's funny, sometimes I put it upside down or sideways, so the books are wrong, and so far no one has noticed.

Daniel: I mean, I would have noticed, brother. Backgrounds are kind of fun, man. I spent a lot of time on my background, actually. You can actually see some of my artifacts. I have quite a collection of artifacts. We can talk about that too.

Remy: Yeah, yeah. Well, I mean, so far the only thing that I'm getting out of this is that you're a Catholic billionaire. Once again, that's what I'm getting.

Daniel: I don't think that's a good idea.

Remy: I don't have artifacts. No, no.

Daniel: You talk about that a little bit. It's funny. Every year I try to find one or two topics that I both know nothing...
Pigeon Hour
Best of Pigeon Hour

Pigeon Hour

Play Episode Listen Later Jan 24, 2024 107:33


Table of contentsNote: links take you to the corresponding section below; links to the original episode can be found there.* Laura Duffy solves housing, ethics, and more [00:01:16]* Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]* Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]* Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]* Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]* Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]* Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]* Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]* Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56] * Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]Intro [00:00:00]To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough.It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. 
I promise I'm not just saying that.So without further ado, please enjoy.#1: Laura Duffy solves housing, ethics, and more [00:01:16]In this first segment, Laura, Duffy, and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could aristotle it's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that?LAURA: Nicomachian ethics guide to how you should live your life. About ethics as applied to your life because you can't be perfect. Utilitarians. There's no way to be that.AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption.LAURA: Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.AARON: Yeah, descriptively that's probably true. But I don't know, it seems like all the whole body of philosophers have they've already done the work of, like, compressing the good stuff. Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. 
And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense.LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect and just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a boar and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people.AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there was like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted or like his ideas or virtue ethics or anything like that shouldn't be discounted because they were canonical texts or something were written a long time ago. 
I guess it's just like a presumption that I have a pretty strong presumption that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts and then in fact, you probably shouldn't because the good stuff will be explained better and not in weird it looks like weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you.LAURA: Agree with this because we have certain assumptions about what words like purpose mean now that we're probably a bit richer in the old conception of them like telos or happiness. Right. Udaimnia is much better concept and to read the original text and see how those different concepts work together is actually quite enriching compared to how do people use these words now. And it would take like I don't know, I think there just is a lot of value of looking at how these were originally conceived because popularizers of the works now or people who are seriously doing philosophy using these concepts. You just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text.AARON: Yeah, I think that would be true. If you are a native speaker. Do you know Greek? If you know Greek, this is like dumb because then you're just right.LAURA: I did take a quarter of it.AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were like living in the temper shuttle, I would say. Yeah, that's true actually. That's a lot of nuanced, connotation and context that definitely gets lost with translation. 
But once you take the jump of reading English translations of the texts, not you may as well but there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation in the context via English, I guess, via the mind of literally the translators who have like.LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle.AARON: Oh God.LAURA: I took a course specifically on Plato and Aristotle.AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.LAURA: But keep going. Fair. But she had us read his physics before we read Nicomachi.AARON: Think he was wrong about all that.LAURA: Stuff, but it made you understand what he meant by his teleology theory so much better in a way that I could not get if I was reading some modern thing.AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that.LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? 
I don't even know.

LAURA: I don't know either.

#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]

All right. Next, Arjun Panickssery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long-form content and the impact of great literary works on understanding and shaping personal worldviews.

ARJUN: Oh, you were in the book chat, though. The book rant group chat, right?

AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?

ARJUN: Yeah, it's a group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book, and then a few months later, if you ask them about it, they'll just say one page's worth of information, or maybe a few paragraphs. The other is, what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then forget the information, so it's actually fine. Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that.

AARON: Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. I don't fully remember exactly what it says, but at least one of the things it said, and that I still basically stand by, is that reading is basically increasing the salience of a set of ideas more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of the intellectual core of it. It would be really hard... okay, so this is not a book, but okay.
I've talked about how much I love the 80,000 Hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. Definitely I've forgotten most of, almost all of, the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting my model of the world or stuff that I know or whatever. I mean, there are facts that I could list. I think maybe I should try.

ARJUN: Sure.

AARON: Yeah. So what's your take on books, other long-form content?

ARJUN: Oh, I don't know. I'm still quite confused. I think the impetus for the group chat's creation was actually Hanania's post where he wrote "The Case Against (Most) Books," or "most" was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are not closely related. This is not the exact take he gives, but it's something like: the books that are considered great are considered great literature for some sort of contingent reason, not because they're the best at getting you information that you want.

AARON: This is, like, another topic, but I'm anti great books. In fact, "great" usually just means old and famous, so insofar as that's what we mean by it, I think this is a bad thing. Or, I don't know, Aristotle is basically wrong about everything, and stuff like that.

ARJUN: Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into, with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that.

AARON: Yeah.
Is there one that comes to mind as, like, an example? I mean, I think of Malcolm Gladwell as the kind of... I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction with any sort of intent. I read a bunch of his stuff or whatever and vaguely remember that this is basically what he does, for better or worse.

ARJUN: Um, yeah, I guess so. But he's almost trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like Taleb, right? It's just him ranting. I'm thinking, I guess, of books that are about something. So if you have a book that's about negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant. Maybe I can think of an example.

AARON: Yeah. So the last thing I tried to read, and maybe I still am but haven't in a couple of weeks or whatever, is the Derek Parfit biography. And part of this is motivated because... I don't even like biographies in general, for some reason, I don't know. But he's, like, an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.

ARJUN: Sure. I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right? It's the claim that he read a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or, I assume not.

AARON: I also saw that tweet, and this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess.
I don't think that's necessarily false or whatever, but this is probably not the most important thing.

ARJUN: Maybe it just takes him a long time to brush his teeth.

#3: Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]

In this next segment, Nathan Barnard and I dive into the complexities of AI regulation, including potential challenges and outcomes of governing AI in relation to economic growth and existential security. And we compare it to banking regulation as well.

AARON: Yeah, I don't know. I just get gloomy, for I think justified reasons, when people talk about, oh yeah, here's the nine-step process that has to take place, and then maybe there's a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.

NATHAN: I think this is pretty radically different to my mainline expectation.

AARON: What's your mainline expectation?

NATHAN: I suppose I expect AI to become an increasingly important part of the economy, and to come up to really a very large fraction of the economy, before really crazy stuff starts happening. And in this world, it'd be very unusual if this extremely large sector of the economy, which impacted a very large number of people's lives, remained broadly unregulated.

AARON: It'll be regulated, but just maybe in a stupid way.

NATHAN: Sure, yes, maybe in a stupid way. I suppose, critically, do you expect the stupid way to be too conservative or too lenient? On the specific question of AI, is it basically too conservative or too lenient, or will it just fail to engage with this at all?

AARON: I guess generally too lenient, but also mostly on a different axis, where I just don't actually know enough. I don't feel like I've read or learned enough about various governance proposals to have a good object-level take on this.
But my broad prior is that, for anything, there are a lot of ways to regulate it poorly. And insofar as anything isn't regulated poorly, it's because of a lot of trial and error.

NATHAN: Maybe.

AARON: I mean, there are probably exceptions, right? I don't know. Pax Americana: maybe we just kept winning wars starting with World War II. I guess that's maybe a counterexample or something like that.

NATHAN: Yeah, I think I still mostly disagree with this. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree the space of optimal regulation is very small, but I don't think we have to hit that space for regulation to be helpful. Especially if you buy the AI existential safety risk: then it's not this fine balancing act between consumer protection on one hand and stifling competition and stifling innovation on the other. It's trying to avert this quite specific, very bad outcome, which is much worse than somewhat slowing economic growth. Particularly if we think we're going to get very explosive rates of economic growth really quite soon, the cost of slowing down economic growth, even by quite a large percentage, is very small compared to the cost of an accidental catastrophe. I sort of think of slowing economic growth as the main way regulation goes wrong currently.

AARON: I think in an actual sense that is correct. Then there's the question of, okay, Congress in the States: it's better than nothing. I'm glad it's not anarchy, in the sense that I'm glad we have a legislature.

NATHAN: I'm also glad about the United States.

AARON: How reasons-responsive is Congress?
I don't think it's reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every axis, maybe not every axis, but at least some relevant axes.

NATHAN: Yeah, that doesn't seem obviously true to me. I think Dodd-Frank was quite a good law.

AARON: That came after 2008, right?

NATHAN: Yeah, correct.

AARON: Yeah, there you go.

NATHAN: No, I agree. I'm not especially confident about doing regulation before there's a quite bad warning shot. And yes, if we're in a world where we have no warning shots and we're just blindsided by everyone getting their atoms stripped within three seconds, this is not good. But in worlds where we do have one of those shots, I think Glass-Steagall is good law. No, "good law" is a technical term. I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Asset Relief Program was a good piece of legislation.

AARON: I recognize these terms, and I know some of them, and others I do not know the contents of.

NATHAN: Yeah. So Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Asset Relief Program was passed in, I think, 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the landmark post-financial-crisis piece of legislation, passed in 2010. I think these are all good pieces of legislation. I think financial regulation is probably unusually good amongst US legislation. This is a quite weak take, I guess. It's unusually good.
AARON: So I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st-century era is: what was the regulatory regime in 1925 or something? I just don't know.

NATHAN: Yeah, I know a bit. I haven't read this stuff especially deeply, so I don't want to be overconfident here. But the core piece which was important in the Great Depression going very badly was that there was no distinction between commercial banks and investment banks. Such a bank could do much riskier things with customer deposits than banks could from 1933 until the repeal of Glass-Steagall. And combine that with no deposit insurance: if you have the combination of banks being able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous.

AARON: I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there was a bunch of things, basically. There's the First Bank of the United States. There's the Second Bank of the United States. There's the free banking era. There was the era of national banks. Yada, yada, yada. It looks like in 1907 there was some panic. I vaguely remember this from AP US History, like seven years ago.

NATHAN: Yes. I suppose, in short, I sort of agree that the record of non-post-crisis legislation is not very good, but I think the record of post-crisis legislation, at least in the financial sector, really is quite good. I'm sure lots of people disagree with this, but this is my take.

#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]

Up next, Winston Oswald-Drummond and I talk about the effectiveness and impact of donating to various research organizations, such as suffering-focused s-risk organizations. We discuss tractability, expected value, and essentially where we should give our money.

AARON: Okay, nice. Yeah.
Where to go from here? I feel like largely we're on the same page, I feel like.

WINSTON: Yeah. Is your disagreement mostly tractability, then? Maybe we should get into the disagreement.

AARON: Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. This is the reason why I haven't donated very much to anywhere, for money reasons. But insofar as I have, I have not donated to CLR or CRS, because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they've thought about this. And maybe the answer is that it's very general, and the outcome is just so big in magnitude that anything kind of...

WINSTON: That is part of it, I think. Yeah, part of it is an expected value thing, and also it's just very neglected. So you want some people working on this, I think, at least, even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there are people at CRS and CLR talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations, and obviously people have to also be willing to listen to you, but I think there is some work being done on that. And research is partially just a community-building thing as well. It's a credible signal that you're smart and have thought about this, so it gives people reason to listen to you, and maybe that mostly pays off later on in the future.

AARON: Yeah, that all sounds reasonable. And I guess one thing is... there's definitely things... I mean, first of all, I haven't really stayed up to date on what's going on, so I've done zero research for this podcast episode, for example.
Very responsible of me. And insofar as I know things about these orgs, it's just based on what's on their websites at some given time. So insofar as there's outreach going on, not behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes, I just don't have reason to know about that. And I guess, yeah, I'm pretty comfortable, and I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody, with just saying that a very small chance of a change of very large magnitude just checks out. You can just do expected value reasoning, and that's basically a correct way of thinking about ethics. But, and I don't know how much you know specifically or how much you're allowed to reveal: if there was a particular alignment agenda that you in a broad sense, like the suffering-focused research community, thought was particularly promising relative to other tractable, generic alignment recommendations, and you were doing research on that and trying to push that into the alignment mainstream, which is not very mainstream, with the hope that that then jumps into the AI mainstream, even if that's kind of a long chain of events, I think I would be a lot more enthusiastic about that type of agenda. Because it feels like there's a particular story you're telling where it cashes out in the end. You know what I mean?

WINSTON: Yeah, I'm not the expert on this stuff, but I do think there are some things about influencing alignment in powerful AI, for sure. Maybe not a full-on, like, this is our alignment proposal and it also handles s-risks.
But there are some things we could ask of AI labs that are already building AGI. We could say: can you also implement these sorts of safeguards, so that if you fail alignment, you fail sort of gracefully and don't cause lots of suffering?

AARON: Right.

WINSTON: Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, you can also get the issue that malevolent actors take control of the AI, and then what they want also causes lots of suffering, which that type of alignment wouldn't prevent. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of, it'll just figure out our values and do the right thing. So, yeah, there are some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.

AARON: No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true.

#5: Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]

Up next, Nathan Barnard is back for his second episode. And we talked about the nature of general intelligence, its relationship with language, and the implications of specialized brain functions on our understanding of human cognitive abilities.

NATHAN: Yes. This symbolic reasoning stuff. Yeah. So I think if I was making the case for general intelligence being real, I wouldn't use symbolic reasoning, but I would have language stuff. I'd have this hierarchical structure thing, which...

AARON: I would probably... so I think of at least most uses of language, in central examples, as a type of symbolic reasoning, because words mean things. They're, like, yeah,
pointers to objects or something like that.

NATHAN: Yeah, but I'm pretty confident this isn't a good enough description of general intelligence. So, for instance, and I'm using a checklist so I don't fuck this up, I'm not making this up: the ability to use words as pointers, as these arbitrary signs, happens mostly in an area of the brain called Wernicke's area. And very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension, and lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And conversely, people with Broca's aphasia, whose Broca's area is damaged, will not be able to form fluent sentences and will have problems with syntax, but will still have very good spatial reasoning. They could still, for instance, be good engineers and do many of the problems which engineering requires.

AARON: Yeah, I totally buy that. I don't think language is the central thing. I think it's an outgrowth of... I don't know, there's a simplified model I could make, which is that it's an outgrowth of whatever general intelligence really is, whatever the best spatial or graphical model is. I don't think language is cognition.

NATHAN: Yes. This is a really big debate in psycholinguistics, as to whether language is an outgrowth of other abilities the brain has, or whether there are very specialized language modules. Yeah, this is just a very live debate in psycholinguistics at the moment. I actually do lean towards... the reason I've been talking about this, actually, I'm just going to explain this hierarchical structure thing, since I keep talking about it.
So one theory for how you can comprehend new sentences, the dominant theory in linguistics, is that you break them up into chunks, and you form these chunks together into a tree structure. So if you hear a totally novel sentence like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is that the sentence can be broken up into a tree structure, where "the mastiff" would be one bit, and then you'd have another bit, like "flopped around," and then you'd have connectors joining them together.

AARON: Okay.

NATHAN: So one theory of the distinctive things that humans have is this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but I broadly buy it, because we do see harms to other areas of intelligence: you get much worse at Raven's Progressive Matrices, for instance, when you have an injury to Broca's area, but not worse at tests of spatial reasoning, for instance.

AARON: So is there a main alternative to this for how humans understand language?

NATHAN: As far as the specifics of how we parse completely novel sentences, this is just the academic consensus.

AARON: Okay. I mean, it sounds right, I don't know.

NATHAN: Yeah. But yeah, I suppose going back to the question: how far is language an outgrowth of general intelligence, versus there being much more specialized language modules?
Yeah, I lean towards the latter, but I still don't want to give too strong of a personal opinion here, because I'm not a linguist.

AARON: This is a podcast. You're allowed to give takes. No one's going to say, "this is, like, the academic..." We want takes.

NATHAN: We want takes. Well, the take that's gone to my head is: I think language is not an outgrowth of other abilities. I think the main justification for this is the loss of abilities we see when you have damage to Broca's area and Wernicke's area.

AARON: Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is that outgrowth can mean a couple of different things. I definitely think it's plausible... I haven't read about this; I think I did at some point, but not in a while. But outgrowth could mean temporally, or whatever. I'm kind of inclined to think it's not that straightforward. You could have coevolution, where language per se encourages both its own development and the development of some general underlying trait or something.

NATHAN: Yeah. Which seems likely.

AARON: Okay, cool. So why don't humans have general intelligence?

NATHAN: Right. Yeah. As I was sort of talking about previously...

AARON: Okay.

NATHAN: I'd like to go back to a high-level argument: there appear to be much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. A famous example is face blindness, actually.
You can lose the ability to specifically recognize things which you're an expert in.

AARON: Who loses this ability?

NATHAN: If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.

AARON: Okay.

NATHAN: And there's this general pattern that you can lose much more specific abilities than you'd expect. So, for instance, if you have damage to your ventromedial prefrontal cortex, you can state the reasoning for why you shouldn't compulsively gamble, but still compulsively gamble.

AARON: Okay, I understand this, not gambling per se, but executive function stuff, at a visceral level. Okay, keep going.

NATHAN: Yeah. Some other nice examples of this: I think memory is quite intuitive. So there's a very famous patient called Patient H.M. who had his hippocampus removed, and as a result lost all declarative memory, all memory of specific facts and things which happened in his life. He just couldn't remember any of these things, but was still perfectly functioning otherwise. At a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. Fundamentally, the general intelligence hypothesis implies that if you harm a piece of your brain, if you have some brain injury, you'd generically get worse at all tasks that use general intelligence. But instead, people can lose specific abilities that seem like they should involve general intelligence: the ability to write, the ability to speak, maybe not speak, the ability to do math.

AARON: You do have this, it's just not as easy to analyze in a cog-sci paper as IQ or whatever.
So there is something where, if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does or whatever, but non-spatial modification is just harder to study and analyze. I guess you could give people drugs, right? Suppose, setting aside the psychometric stuff, that general intelligence is mostly a thing or whatever, and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of various drugs. I don't know, like laughing gas, probably weed, I don't know.

NATHAN: So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory can generically make you much better at lots of tasks.

AARON: Yeah.

NATHAN: Sorry, but this is just a specific ability. It's specifically your working memory which is improved if you take a drug that improves working memory. I think there are a few things, like memory, attention, maybe something like decision-making, which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention abilities and your working memory, but you had some brain injury which meant you'd lost the ability to parse syntax, you would not get better at parsing syntax. And you can also improve things separately: you can improve attention and improve working memory separately. It's not just this one dial which you can turn up.

AARON: There's good reason to expect that we can't turn it up, because evolution is already sort of maximizing, given the relevant constraints. Right? So you would need to be looking at injuries.
Maybe there are studies where they try to add a cubic centimeter to someone's brain, but normally it's the opposite: you start from some high baseline and then see what faculties you lose. Just to clarify, I guess.

NATHAN: Yeah, sorry, I think I've lost the thread. You still think there probably is some general intelligence ability to turn up?

AARON: Honestly, I think I haven't thought about this nearly as much as you. I kind of don't know what I think at some level. If I could just write down all of the different components, and there are like 74 of them, and that's what I think general intelligence consists of, does that make it... I guess in some sense, yeah, that does make it less of an ontologically legit thing or something. The motivating thing here is that with humans, we know humans range in IQ, and, setting aside a very tiny subset of people with severe brain injuries or developmental disorders or whatever, almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas, and maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?

NATHAN: Yeah, I know what you're gesturing at. So I think there are two interesting things here. One is: could a squirrel do this? I'm guessing a squirrel couldn't do this, but a dog probably can.
A chimpanzee definitely can.

AARON: Do what?

NATHAN: Chimpanzees can definitely learn to associate things in the world with arbitrary signs.

AARON: Yes, but, and maybe I'm just adding epicycles here, but correct me if I'm wrong, I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've been trained in. Right? So a dog will know that a certain sign means sit or whatever, but maybe that's not a good example.

NATHAN: I don't know, I think this is basically not true.

AARON: Okay.

NATHAN: And we sort of know this from teaching. There's, famously, Koko the gorilla, and also a bonobo whose name I can't remember, who were taught sign language. And the thing they were consistently bad at was putting together sentences. They could learn quite large vocabularies, and by large I mean in the low hundreds of words, which they could consistently use correctly.

AARON: What do you mean? In what sense? What is a bonobo using a word for?

NATHAN: A very famous and quite controversial example is that Koko the gorilla saw a swan outside and signed "water bird." That's a controversial example. But the controversial part is the syntax, putting "water" and "bird" together. It's not controversial that she could see a swan and call that a bird.

AARON: Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for g is just at the chimp level or something. Sure, if a species really can generate, from a prefix and a suffix or whatever, a concept that they hadn't learned before.

NATHAN: Yeah, and this is a controversial example of that; the addition is the controversial part.
Yeah. I suppose this brings us back to why I think this matters: will there be this threshold which AIs cross, such that their reasoning after this is qualitatively different to their reasoning previously? That means two things: one, a much faster increase in AI capabilities, and two, alignment techniques which worked on systems which didn't have g no longer working on systems which do have g. That's why I think this actually matters. But if we're accepting that g is at the level of chimpanzees, then I think elephants probably also have it. Chimpanzees just don't look that qualitatively different to other animals. Lots of other animals live in similarly complex social groups. Lots of other animals use tools.

AARON: Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point between humans' evolution from the first prokaryotic cells, or eukaryotic, one of those two, or both, I guess. My train of thought: yes, I know it's controversial, but let's just suppose that the sign language thing was legit with the water bird, and that's not a random one-off fluke or something. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees and they just stumbled their way into, and then it enabled evolution to bootstrap chimp genomes into human genomes. Because at some point the smartest... or whatever. Actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But, yeah, it could just be not that helpful for chimps, and helpful for an extremely smart chimp that looks kind of different, or something like that.

NATHAN: Yeah.
So I suppose the other thing going on here, and I don't want to keep banging on about this, is that you can lose linguistic ability. This happens in stroke victims, for instance; it's not that rare. You lose linguistic ability but still have all the other abilities we think of as general intelligence, which I think counts against the general intelligence hypothesis.

AARON: I agree that's evidence against it. I just don't think it's very strong evidence, partially because there is a real school of thought that says language is fundamental: language drives thought, language is primary to thought, or something. I don't buy that, but if you did, I think this would be more damning evidence.

#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]

[Note: I forgot to record an intro segment here. Sorry!]

AARON: Yeah. Yes. I'm also anti scam. Right, thank you. Okay, so the thing we were talking about last time, which is the thing I think we actually both know stuff about instead of just repeating New York Times articles, is my nuanced ethics takes and why you think I'm wrong. We can talk about that and then branch off from there.

DANIEL: Yeah, we can talk about that.

AARON: Maybe see where that goes. Luckily I have a split screen up, so I can pull things up. Maybe it's kind of egotistical to center my particular view, but you've definitely given me some of the better pushback. I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position, or the thing that I think is true, which I think is not the default view.
I think most people think this is wrong. The position is that total utilitarianism does not imply that, for any amount of suffering that could be created, there exists some other extremely large, arbitrarily large, amount of happiness that could also be created which would morally justify the former. Basically.

DANIEL: So you think that even under total utilitarianism there can be amounts of suffering so big that there's no way to tip the moral calculus. However much pleasure you create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.

AARON: Yeah, and I'd highlight the word "inflicted." If something's already there and you can't do anything about it, that's neither here nor there as it pertains to your actions. So it's really about you creating suffering that wouldn't otherwise have been created. It's also been a couple of months since I've thought about this in extreme detail, although I have thought about it quite a bit.

DANIEL: Maybe I should state my contrary view. When you say total utilitarianism does or doesn't imply something, I'm like, well, presumably it depends on what we mean by total utilitarianism. Setting that aside, I think that thesis is probably false. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.

AARON: Okay. I do think that position is the much more common and, I'd say, default view. Do you agree? It's sort of the implicit position of self-described total utilitarians who haven't thought a ton about this particular question.

DANIEL: Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something.
I think that in practice, when you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, there is some instinct to ask, is there any way we can get around this?

AARON: Yeah, for sure. And to be clear, I don't think this would only look like a thought experiment. I'll throw in caveats as I see necessary, but I think what it looks like in practice is spreading either wild animals or humans or even sentient digital life through the universe, in a non-risky way. Say, making multiple copies of humanity or something like that. That would be an example. An example of creating suffering would be, for instance, just creating another duplicate of Earth.

DANIEL: Okay. Would that be so much suffering that not even the pleasures of Earth outweigh it?

AARON: Not necessarily, which is kind of a cop out. But my inclination is that if you include wild animals, the answer is yes, especially for creating another Earth. Though I'm much more committed to the claim that some such amount of suffering exists than to this particular time and place in human history being like that.

DANIEL: Okay, can I get a feel for some other concrete cases?

AARON: Yeah.

DANIEL: So one example that's on my mind is the atomic bombing of Hiroshima and Nagasaki, right? The standard case for this is, what, a hundred-odd thousand people died? Quite terrible, quite awful. Some people were sort of instantly vaporized, but a lot of people died in extremely painful ways.
But the countercase is that the alternative would have been an incredibly grueling land invasion of Japan, where many more people would have died. Regardless of what the actual alternatives were: if you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just no offsetting amount of pleasure that could make it okay?

AARON: My intuition is no, it is offsettable. But I would also emphasize that, given the actual historical contingencies, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to pick between, and my claim doesn't really pertain to that, at least not directly.

#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]

Up next, Holly Elmore and I discuss the complexities and implications of AI development and open sourcing. We talk about protests and ethical considerations around her campaign to pause the development of frontier AI systems until we can tell that they're safe.

AARON: So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.

HOLLY: Well, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi-city protest on the 21st of October to encourage the UK AI Safety Summit to focus on safety first and to have, as a topic, arranging a pause, or at least negotiations toward one. There's a little bit of upsetting advertising for that thing that's like, "we need to keep up capabilities too," and I just think that's really a secondary objective; I want it to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought.
Already the UN Secretary General has been talking about this, and there have been meetings about it. It's happened so much faster than expected; at the beginning of this year, nobody was thinking we'd be talking about this as a mainstream topic, and actually governments have been very receptive. So right now, other than just influencing opinion, the targets I'm focused on are things like encouraging these international meetings. I have a protest on Friday, my first protest that I'm leading, and I'm kind of nervous. It's against Meta, at the Meta building in San Francisco, about their sharing of model weights. They call it open source. It's not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret, and it conveys what we want very quickly. You don't have a lot of bandwidth in advocacy; you write a lot of materials for a protest, but mostly what people see is the title.

AARON: That's interesting, because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, "pause AI" is simpler, but in some sense it's not nearly as obvious why it's good. Maybe I'm more of a tech-brain person or whatever. But compare it to "don't give an extremely powerful thing to the worst people in the world." That's, like, a longer message.

HOLLY: Maybe I'm just weird. The feedback I've gotten from open source ML people, the number one thing is, "it's too late, there are already super powerful models, there's nothing you can do to stop us," which sounds so villainous that I don't know if that's what they mean. Well, actually the number one message is "you're stupid, you're not an ML engineer." Number two is "it's too late, there's nothing you can do."
There are all of these others, and "Meta isn't even the most powerful sharer of open source models." I was like, okay, fine. And I don't think that protesting too much is really the best move in these situations; I mostly just let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco nearby. Meta is the biggest company that is doing this, and I think there should be a norm against model weight sharing. I was hoping it would be something that employees of other labs would be comfortable attending, since this is a policy that is not shared across the labs; obviously the biggest labs don't do it. OpenAI is called OpenAI but very quickly decided not to do that. I kind of wanted to start in a way that made it more clear than "pause AI" does that anybody's welcome. I thought a one-off issue like this, that a lot of people could agree on and form a coalition around, would be good. A lot of the open source ML people think that what I'm saying is secretly an argument for tyranny: that I just want centralization of power, that I just think there are elites better qualified to run everything. I didn't mention China, and it was even suggested that I was racist because I supposedly didn't think that foreign people could make better AIs than Meta.

AARON: I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term. Good on you for standing up to some pretty bad arguments.

HOLLY: Yeah, it's worth it. I'm lucky that I'm truly curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt-right,
for instance. I'm pretty sure I'm on a list somewhere because of the forums I was on, just because I was interested. And it's something that serves me well with my adversaries. I've enjoyed some conversations with people, because my position on all this is: look, I need to be convinced, and the public needs to be convinced, that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I like being able to say, explain it like I'm five. I still don't get it. How does this work?

AARON: Yeah. I was actually thinking not long ago about open source. The phrase has such a positive connotation, and in a lot of contexts it really is good. I'm glad that random tech things from 2004 or whatever, like the Reddit source code, are open source; I don't actually know if that's right. But I feel like it's worth breaking down where the positive connotation comes from, and why it's in people's self-interest to open source things they made. That might break apart the allure, or sort of ethical halo, that it has around it. And I was thinking it probably has something to do with this: if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy, and can definitely be hired in the future.
And if you're not wealthy yet... I don't mean to put things in just materialist terms. Yeah, I'll probably take that bit out, because I didn't mean to put it in strictly monetary terms. Basically, it just seems pretty plausibly in an arbitrary tech person's self-interest, broadly construed, to open source their thing, which is totally fine and normal.

HOLLY: I think that's, like, 99% of it. It's a way of showing magnanimity.

AARON: I don't mean to make this sound sinister; I think 99.9% of human behavior is like this. I'm not saying it's some secret, terrible, self-interested thing, just making it more mechanistic. It's a status thing, it's an advertising thing. You're not really in need of direct economic rewards, so it makes sense to play the long game in some sense. This is totally normal and fine, but at the end of the day, there are reasons why it's in people's self-interest to open source.

HOLLY: Literally, the culture of open source has been able to bully people into, like, "oh, it's immoral to keep it for yourself, you have to release it." It's set the norms in a lot of ways. "Bully" sounds bad, but there is a lot of pressure; it looks bad if something is closed source.

AARON: Yeah. It's kind of weird that Meta... does Meta really think it's in their... the most economic take on this would be, oh, they somehow think it's in their shareholders' interest to open source.

HOLLY: There are a lot of speculations on why they're doing this. One is that their models aren't as good as the top labs', but if it's open source, quote unquote, then people will integrate Llama Two into their apps.
Or people will use it and become... I don't know, it's a little weird, because I don't know why using Llama Two commits you to using Llama Three or something, but it's a way for their models to get into places where, if you had to pay for them, people would go for better ones. That's one thing. Other speculations are, I guess, too speculative; I don't want to be seen repeating them since I'm about to do this protest. But there's speculation that it's in their best interests in various ways. It's possible also that... what happened with the release of Llama One is they were going to allow approved people to download the weights, but within four days somebody had leaked Llama One on 4chan, and then they were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. It's not 100% clear that they ever intended to do full open source, or what they call open source. I keep saying it's not open source because this is a little bit of a tricky point to make, so I'm not emphasizing it too much. They say they're open source, but they're not: the algorithms are not open source. There are open source ML models that have everything open sourced, and I don't think that's good; I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source, when actually what they're doing is releasing the product for free. Or trade secrets, even; you could say things that should be trade secrets. They're telling people how to make it themselves.
So they're intentionally using a label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which maintains the open source licenses, it should be called something else, or there should just be a new category for LLMs. But I don't want things to be more open, and making that point could easily sound like a rebuke that it should be more open. I also don't want to call it open source, because I think open source software probably does deserve a lot of its positive connotation, and they're not releasing the software part, because that would cut into their business. I think it would be much worse; I think they shouldn't do it. But I'm also not clear on this, because the open source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens, and that's more than GPT Four. I don't understand all of the details here; it's possible that the tokenization process was different or something, and that's why there were more. But Meta didn't say what was in the Llama Two data set, and usually there's some description given of what's in the data set. That led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be in there. It's not just the Common Crawl backup of the Internet; everybody's basing their training on that, plus maybe some works of literature they're not supposed to use. There's a data set there that is in question. But Meta's is bigger than... sorry, I don't have a list in front of me, and I'm not going to get stuff wrong, but it's bigger than similar models', and I thought they have access to extra stuff that's not public. It seems like people are asking if maybe that's part of the training set.
But yeah, the open source ML people that I've been talking to would have believed that anybody who's decent can just access all of the training sets they've all used.

AARON: As an aside, I tried to download it. I'm guessing, I don't know, it depends how many people listen to this, but in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I knew a little bit of R, I think. I feel like I caught the very last boat where I could know just barely enough programming to try to learn more, coming out of college a couple of months ago. I tried to do the thing where you download Llama Two, and it didn't work. I have a bunch of empty folders, and I forget, I got some error message or whatever. Then I tried to train my own model on my MacBook. It just printed the same thing over and over, because that was the most common token in the training set. So anyway, sorry, this is not important whatsoever.

HOLLY: Yeah, I feel torn about this, because I used to be a genomicist. I did computational biology, and it was not machine learning, but I used a highly parallel GPU cluster. So I know some stuff about it, and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. From the beginning, it was people who were interested in the singularity and then realized there was this problem. So it's always been people really interested in tech and wanting to be close to it, and I think our direction has been really influenced by wanting to be where the action is with AI development.
And I don't know that that was right.

AARON: On a personal, individual level, I'm not super worried about people like you and me losing the plot by learning more about ML on their personal machines.

HOLLY: You know what I mean? This is maybe more of a confession than a point, but it does feel a little bit like it's hard for me to enjoy the cool stuff in good conscience.

AARON: Okay. Yeah.

HOLLY: I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech, because this is kind of their whole thing. What would they do if we weren't working toward AI? This is a big fear that people express to me. They don't say it in so many words, usually, but when talking about a pause they say things like, "well, I don't want AI to never get built." Which, by the way, just to clear up: my assumption is that a pause would eventually be lifted, unless society ends for some other reason. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so Luddite in their insides that we would never pick it up again, or something like that. And there's some identity stuff that's been expressed, again not in so many words, about who will we be if we're just activists instead of working on it.

AARON: Maybe one thing we might actually disagree on, and it's kind of important: I think we both agree that an AI pause is better than the status quo, at least broadly, whatever that can mean. But maybe I'm not super convinced that... what am I trying to say? Maybe, at least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't k
