Podcasts about LSP

  • 251 podcasts
  • 641 episodes
  • 57m avg duration
  • 1 episode every other week
  • Latest: May 27, 2025

Popularity chart: 2017–2024


Best podcasts about LSP


Latest podcast episodes about LSP

On The Brink
Episode 418: Jolynn Ledgerwood

May 27, 2025 · 62:27


Jolynn D. Ledgerwood has over 25 years of experience in Learning and Development. Her experience spans hospitality, consumer goods, professional services, IT, and cybersecurity. She has worked with several large companies, including PepsiCo, Brinker International, Frito-Lay, Critical Start, and Toyota Motors. While she enjoyed her work in large corporate settings, she was discouraged by the prevailing methodologies for team building and for giving ALL members a voice. When she found LEGO® Serious Play®, she was drawn to its familiarity and plentiful application opportunities. (LSP has over 15,000 facilitators in Europe, but only 100+ in the US.) She added it to her list of certifications, which includes Bob Goff's Dream Big, The Primal Question, and Gallup StrengthsFinder.

SlatorPod
#249 How to Expand in AI Data Services with DATAmundi CEO Véronique Özkaya

May 6, 2025 · 37:12


Véronique Özkaya, Co-CEO of DATAmundi, returns to SlatorPod for round 2 to talk about the company's strategic rebrand and how it is positioning itself as a key player in the data-for-AI space. Véronique details her journey to leading DATAmundi, formerly known as Summa Linguae, where she now drives a strategic shift from traditional language services to AI-focused data enablement.

The Co-CEO explains that their LSP background makes them well-suited to offer fine-tuning services for AI, especially in multilingual and domain-specific contexts. However, she cautions that language expertise alone isn't enough; deep tech infrastructure, data science capabilities, and the ability to quickly build custom workflows are also essential. While many companies still rely on crowd-sourced, basic annotation, DATAmundi targets higher-complexity projects requiring domain experts and linguists. Véronique notes the market for data-for-AI is growing significantly faster than traditional LSP work and sees a second wave of demand from enterprises needing to adapt pre-trained models.

Véronique highlights data scarcity, hallucination, and bias as core AI challenges that DATAmundi tackles through technical solutions and expert guidance, helping enterprises as they face pressure to implement AI despite legacy systems and unclear strategies. Looking ahead, DATAmundi plans to expand its consultative services through further acquisitions, focusing not on tech per se, but on organizations that deepen its expertise in data application and AI deployment.

SlatorPod
#248 DeepL Plants Flag on iPhone, RWS Stock Puzzle

May 2, 2025 · 29:38


Florian and Esther discuss the language industry news of the week, with DeepL becoming the first third-party translation app users can set as default on the iPhone, a position gained by navigating Apple's developer requirements that others like Google Translate have yet to meet. Florian and Esther also examine RWS's mid-year trading update, which triggered a steep 40% share price drop despite stable revenue, healthy profits, and manageable debt.

On the partnerships front, the duo covers multiple collaborations: Acclaro and Phrase co-funded a new Solutions Architect role, Unbabel entered a strategic partnership with Acclaro, and Phrase partnered with Clearly Local in Shanghai. Also, KUDO expanded its network with new partners, while Deepdub was featured in an AWS case study for its work with Paramount. Wistia partnered with HeyGen to launch translation and AI-dubbing features, and Synthesia joined forces with DeepL, further cementing the trend of avatar-based multilingual video content.

In Esther's M&A corner, MotionPoint acquired GetGloby to enhance multilingual marketing capabilities, while OXO and Powerling merged to form a transatlantic LSP leader. TransPerfect deepened its media footprint with two studio acquisitions from Technicolor, and Magna Legal Services continued its acquisition spree with Basye Santiago Reporting. Meanwhile, in funding, Linguana, an AI dubbing startup targeted at YouTube creators, raised USD 8.5m, and pyannoteAI secured EUR 8m to enhance multilingual voice tech using speaker diarization. The episode concluded with speculation about DeepL's rumored IPO, which could have broader implications for capital markets.

The Art of SBA Lending
The Model Has Changed: Rethinking SBA Operations ft. Mike Breckheimer, Brian Carlson & Chris Kwiatkowski | Ep. 178

May 1, 2025 · 50:00


This week on The Art of SBA Lending, we're confronting the hard truth: the traditional SBA lending model no longer works. Rising overhead, margin compression, and tighter audits have flipped the economics, and now, shops across the country are closing or scrambling to restructure. So what's the new model? And how do you scale without losing control? Ray Drew is joined by three SBA leaders navigating this shift in real time. Mike Breckheimer, Brian Carlson, and Chris Kwiatkowski are coming together to unpack the numbers, the pitfalls, and the path forward.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 369: Emerging Agrarians

Apr 11, 2025 · 37:52


Ka Zoua Berry says supporting a future generation of farmers who don’t fit the traditional Midwestern stereotype isn’t just about building a resilient farm and food system. It’s also about building resilient communities. More Information • Big River Farms • Emerging Farmers Conference • Farmland Access Hub • LSP Farmland Clearinghouse You can find LSP…

CanadianSME Small Business Podcast
AI for Small Business

Mar 26, 2025 · 24:00


In this exciting episode of the CanadianSME Small Business Podcast, host Maheen chats with Chandrashekar (LSP), Managing Director of Zoho Canada, to explore the latest advancements in Zoho One and the game-changing potential of Zoho AI for small businesses. With 98% of Canadian businesses classified as SMEs, technology plays a critical role in their growth and efficiency. LSP shares insights into Zoho's unified user interface, customizable dashboards, and the launch of Zia Agents, an AI-driven platform designed to streamline business operations, enhance security, and improve customer experiences.

We also discuss Zoho's vision for empowering small businesses, how they stay ahead in an evolving tech landscape, and what the future holds for AI-driven business tools. If you're a business owner looking to leverage AI, automation, and business software for productivity and growth, this episode is a must-listen!

Key Highlights:
  • Latest Zoho One Updates: The new unified UI and custom dashboards that enhance usability and efficiency.
  • AI-Powered Business Transformation: How Zia Agents improve network management, security, and customer engagement for SMEs.
  • Leveraging AI for Productivity: Practical applications of Zia across Zoho's ecosystem to streamline operations.
  • Zoho's Vision for SMEs: How Zoho tailors its products to the unique challenges of small businesses.
  • Staying Ahead in Tech: Zoho's approach to continuous innovation and adapting to market trends.
  • Final Takeaway from LSP: Why investing in AI and automation is crucial for future-proofing your business.

Special Thanks to Our Partners:
  • RBC: https://www.rbcroyalbank.com/dms/business/accounts/beyond-banking/index.html
  • UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
  • IHG Hotels and Resorts: https://businessedge.ihg.com/s/registration?language=en_US&CanSME
  • Google: https://www.google.ca/

For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!

Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.

Stephan Livera Podcast
How Lightning Builders Can Improve Bitcoin Wallets with Nick Slaney | SLP640

Mar 3, 2025 · 60:43


In this episode, Stephan speaks with Nick Slaney about the current state and future of the Lightning Network. They discuss the misconceptions surrounding Lightning adoption, the legal challenges faced by developers, and the opportunities for Lightning Service Providers (LSPs). Nick shares insights on hosted channels, liquidity management, and the user experience of Lightning, emphasizing the importance of understanding the costs associated with using the network, and highlighting the potential for growth and innovation in the Lightning ecosystem as it continues to evolve. Stephan and Nick also delve into the intricacies of the Lightning Network, Bitcoin fees, and the role of stablecoins in the crypto ecosystem. They discuss the real-world user experience with Bitcoin and Lightning, emphasizing the importance of understanding user needs and the misconceptions prevalent in online discussions. The conversation also touches on the implications of Taproot assets for the Lightning Network and the future of Bitcoin development, highlighting the need for better user experiences and broader adoption.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 365: Perennial Pivot

Feb 13, 2025 · 21:08


When Sogn Valley Farm transitioned out of intensive production of vegetable crops, it opened up opportunities to utilize a unique cousin of wheat as a way to steward the land. More Information • Sogn Valley Farm • Forever Green Initiative • Ear to the Ground 229: Kernza’s Continuous Cover • Wrap-Up of LSP’s 2025 Small…

Bitcoin Takeover Podcast
S16 E6: Super Testnet on Monero vs Lightning Network Privacy

Feb 4, 2025 · 149:39


Bitcoin developer Super Testnet argues that the Lightning Network is more private – and therefore better suited for darknet markets than Monero. In this episode, he breaks down all the nuances involved and defines good financial privacy. Time stamps: Introducing Super Testnet (00:00:48) Lightning on Dark Web Markets (00:01:08) Lightning Network Privacy Features (00:01:40) Analysis of Sender and Receiver Privacy (00:02:02) Onion Routing Explanation (00:03:07) Invoice Privacy Comparison (00:04:36) Transaction Visibility in Monero? (00:06:08) Information Storage in Lightning (00:07:12) Liquidity and Large Transactions (00:08:10) Amount Privacy in Lightning (00:09:34) Private Channels in Lightning (00:11:25) Routing Nodes and Privacy (00:13:59) How Monero Transactions Work (00:15:08) Encryption Standards in Monero (00:16:01) Recipient Privacy in Monero (00:17:54) Privacy Tech (00:18:52) Network Level Privacy (00:19:02) Tor Usage in Lightning Network (00:19:44) Routing Node Configuration (00:20:07) Dandelion++ (00:21:00) IP Address Association in Lightning (00:21:22) Encryption in Lightning Transactions (00:22:50) Monero's Network Privacy by Default (00:23:18) Chainalysis Video Reference (00:23:40) Remote Procedure Call Limitations (00:24:38) Custodial Solutions and Privacy (00:26:31) Privacy Advantages of Mints (00:28:08) Full Chain Membership Proofs (00:29:53) Encrypted Senders in Lightning (00:31:52) Comparison with Zcash (00:32:30) Barriers for Lightning Network Adoption (00:34:05) Exploring XMR Bazaar (00:35:02) SideShift (00:36:03) Paul Sztorc's Core Untouched Soft Work (00:37:14) Drivechains Activation (00:38:27) Ossification of Bitcoin (00:41:09) Concerns About Ossification (00:41:51) ZK Rollups Discussion (00:42:35) Citrea's Zero Knowledge Proof Rollup (00:45:05) Community Concerns on Lightning Network (00:48:34) Chainalysis and Dandelion Protocol (00:50:23) LSP and KYC Privacy Issues (00:52:39) Receiver Privacy in Lightning Network (00:53:28) Phoenix Wallet 
Setup (00:54:15) Sender Privacy Concerns (00:55:30) View Key and Monero (00:57:10) Chainalysis and Lightning Network (01:02:08) Monero Tracing Capabilities (01:06:01) User Input Error in Privacy (01:07:02) The Lightning Network vs. Monero Privacy (01:10:56) Conference Plans in Romania (01:12:00) Monero Payment Channel Network (01:14:36) Full Chain Membership Proofs (01:15:21) Lightning Network and Sender Encryption (01:15:32) Stablecoins and Lightning Network (01:16:22) Monero Transaction Validation (01:18:05) Zero Knowledge Proofs in Monero (01:18:56) Bitcoin's Zero Knowledge Rollups (01:20:31) Rollups and Bitcoin Scalability (01:21:04) Trojan Horse Concept in Bitcoin (01:23:46) Tornado Cash vs. Coinjoin (01:25:36) Coin Pool on Bitcoin (01:27:34) Darknet Market Listings (01:29:13) Nostr and Classified Ads (01:29:34) Privacy in Darknet Transactions (01:30:50) Risks of Direct Payments (01:31:54) Exploring Shopstr Listings (01:32:43) Comparing Shopstr and XMR Bazaar (01:35:00) Privacy Improvements in Shopstr (01:37:13) Lightning Network Developments (01:44:42) KYC and Banking Issues (01:48:24) Introduction to Bank Privacy Issues (01:48:59) Financial Regulations in Romania (01:49:48) Advice on Relocation for Financial Privacy (01:50:12) Intrusiveness of Banking Regulations (01:51:03) Personal Experience with Banking Scrutiny (01:51:34) Living Arrangements (01:52:13) Lightning Network Implementations Privacy (01:53:16) Privacy Implications of Lightning Wallets (01:54:03) User-Friendliness of Lightning Wallets (01:54:57) BOLT 12 and Privacy Claims (01:56:29) Improvements in BOLT 12 (01:57:09) Critique of BOLT 12's Privacy Features (01:59:30) Super Testnet's Current Projects and Work Focus (02:00:15) Development of Mint Market Cap Tool (02:01:30) Title Transfer App and State Chains (02:02:30) Ensuring Security in State Chains (02:03:32) Nostr Wallet Connect Protocol (02:04:26) Creation of Faucet Generator (02:05:40) Creating a Testnet (02:06:36) State Chains Discussion 
(02:07:12) Prediction Market Concept (02:08:14) Project Backlog Overview (02:10:14) Super Testnet's Music Career (02:12:29) Upcoming Conferences (02:14:39) Coin Pools Advantages (02:15:41) Planning Conference Attendance (02:17:25) Workshops and Commitments (02:17:56) Health and Fitness Journey (02:19:08) Should Bitcoin Increase the Block Size? (02:20:13) Soft Fork Proposal (02:21:04) Market Value of Transactions (02:22:47) Workshop Availability (02:24:02) Social Media Presence (02:25:14) Scams and Fake Accounts (02:26:02) Social Engineering Tactics (02:26:39) Money Requests Clarification (02:27:34) Social Links and Resources (02:27:58) Audience Engagement (02:28:28) Closing Remarks (02:29:01)

FreightCasts
WHAT THE TRUCK?!? EP799 Mexico tariffs see their shadow: delayed; are load boards listening to truckers?

Feb 3, 2025 · 45:35


On episode 799 of WHAT THE TRUCK?!? Dooner is joined by GenLogs CEO Ryan Joyce to talk about the company's $14.6M Series A. We'll find out how this will help them in their fight against freight theft and fraud. Owner-operator Jayme Anderson says that load boards like DAT aren't doing enough to prevent scammers. He'll talk about his heated debates with DAT and tell us why he doesn't think they're listening to owner-operators. It's also A1 vs. Heinz 57 as Jayme and I chug our favorite steak sauces. Counteract's Simon Martin and Daniel LeBlanc are all about tire maxing; they say that when it comes to tires, it's all about balance. Blue Yonder's Ann Marie Jonkman shares LSP strategy. Over the weekend the world was flipped on its head with tariffs against Canada, Mexico, and China. Where do we go from here? Catch new shows live at noon EDT Mondays, Wednesdays, and Fridays on FreightWaves LinkedIn, Facebook, X, or YouTube, or on demand by looking up WHAT THE TRUCK?!? on your favorite podcast player, and at 5 p.m. Eastern on SiriusXM's Road Dog Trucking Channel 146. Watch on YouTube | Check out the WTT merch store | Subscribe to the WTT newsletter | Apple Podcasts | Spotify | More FreightWaves Podcasts #WHATTHETRUCK #FreightNews #supplychain Learn more about your ad choices. Visit megaphone.fm/adchoices

memoQ talks
Helping Companies to Grow and Scale, with Paul Barlow

Jan 27, 2025 · 39:01


This episode of memoQ talks features a conversation between host Mark Shriner and industry veteran Paul Barlow. Paul shares his extensive background in the localization industry, starting in Dublin in the early days of the field and transitioning to the LSP side in the late 1990s. He discusses his experience helping small and medium-sized LSPs scale their operations and sales, emphasizing the importance of challenging legacy processes and having a clear growth strategy.

The discussion delves into the unique challenges faced by smaller LSPs compared to their larger counterparts. Paul provides examples of LSPs that successfully repositioned themselves by focusing on their top clients, automating lower-value work, and having difficult but necessary pricing conversations. The conversation also covers the role of sales processes, CRM systems, and the emerging applications of AI in localization, such as automated transcription and project management tools.

Throughout the discussion, Paul and Mark share insights on building authentic relationships, attending industry events, and leveraging social selling. Paul also shares his upcoming plans, including potential speaking engagements and consulting work with various LSPs and organizations. The podcast provides valuable perspectives for localization professionals, particularly those at small and medium-sized LSPs, on driving growth and navigating the evolving industry landscape.

Acorn West Growth Strategies: https://awgs.ai/

WWL First News with Tommy Tucker
Driving can still be treacherous right now

Jan 24, 2025 · 3:44


Tommy talks with Jacob Pucheu of the LSP about driving safely.

Atareao con Linux
ATA 664 Vi, Vim o Neovim ¿Cual es el mejor?

Jan 23, 2025 · 22:43


#vi #vim #neovim Which of the three is the best #linux editor? Which one should you choose? What are the differences between them? Where should you use each one? Lately, a recurring question has come up both in the Telegram group and on the YouTube channel: What are the differences between Vim and Neovim? Which one should you pick in each situation? That gave me the idea for an episode, and of course it required some research. I also wanted to include the venerable Vi, so that the comparison would be as exhaustive as possible and you can tell which option is best for you in each case. In my own case, when I decided to enter the world of Vi, I went straight to Vim, and I have to confess that it took me a while to decide to make the jump from Vim to Neovim. I finally made that jump for two reasons that mattered to me: first, LSP, the Language Server Protocol, and second, Neovim's plugins, which are much easier to build because they use Lua as the scripting language. So in this episode I will try to clarify the differences between Vi, Vim, and Neovim, when to choose one or another, and why. More information and links in the episode notes.

Reversim Podcast
487 Bumpers 85

Dec 31, 2024


Episode 487 of Reversim Platform, Bumpers 85: Ran, Dotan, and Alon in the virtual studio with a series of short items that caught our attention recently: interesting blog posts, things from GitHub, and all sorts of interesting projects or nice things we saw on the internet and thought we'd collect and bring to you. And, as has become tradition lately, also quite a bit of AI, because that's what the young folks are talking about these days.

Bitcoin Magazine
The Bitcoin Treasury Wave w/ LQWD Tech and Shane Stuart

Dec 22, 2024 · 48:49


Step into the future of corporate Bitcoin adoption with LQWD Technologies' groundbreaking approach to integrating Bitcoin and Lightning Network infrastructure. In this exclusive interview, CEO Shane Stuart shares insights into how LQWD became one of Canada's top five publicly traded companies for Bitcoin per share, while pioneering Lightning Network innovation as a leading Lightning Service Provider (LSP). Host: Allen Helm Guest: @LQWDTech & Shane Stuart Lower your time preference and lock in your Bitcoin 2025 conference tickets today! Use promo code BM10 for 10% off your tickets! Click Here: http://b.tc/conference/2025 #Bitcoin #LightningNetwork #CorporateBitcoin #BitcoinStrategy #CryptoAdoption #BitcoinTreasury #BitcoinBusiness #BitcoinInnovation #CorporateStrategy #BitcoinInfrastructure #LightningInnovation #BitcoinPayments #CorporateCrypto #BitcoinTechnology #BitcoinDevelopment #BlockchainTechnology #FinTech #PaymentInnovation #BitcoinFuture #CryptoInfrastructure

The Art of SBA Lending
Largest SBA Loan Service Provider Gets Acquired (again) feat. Mike Breckheimer | Ep. 169

Dec 12, 2024 · 51:19


In this episode of The Art of SBA Lending, Ray sits down with Mike Breckheimer at NAGGL 2024 to discuss the ins and outs of starting a lender service provider (LSP) business and navigating the SBA ecosystem. Mike shares his extensive experience building turnkey solutions for banks, credit unions, and other institutions looking to outsource SBA loan operations. He dives into the intricate balance of compliance, relationship building, and patience required to thrive in this niche industry. Key Highlights: Starting an LSP: Learn the essential steps to establish yourself as a trusted SBA lender service provider, including the necessary regulatory steps with OCRM. Sales Strategies for LSPs: Mike explains the long sales cycle when working with banks and credit unions and how patience was the key during the process. Navigating SBA Oversight: Discover the role of OCRM in reviewing lender service provider agreements and maintaining oversight in the SBA ecosystem. Tech & Innovation in SBA Lending: Explore how emerging technology and data analytics are reshaping the SBA lending landscape. Whether you're an aspiring LSP entrepreneur or a seasoned professional curious about industry trends, this episode is packed with valuable insights on the current state of SBA lending. Don't miss out on our exclusive NAGGL interview series—subscribe now to catch every episode! This episode is sponsored by: Lumos Data Lumos empowers your small business lending growth with cutting-edge analytics and streamlined applications that optimize your performance. If you're ready to take your small business lending to the next level with cutting edge analytics visit lumosdata.com.   Rapid Business Plans Rapid Business Plans is the go-to provider of business plans and feasibility studies for government guaranteed small business lenders. For more information, or to set up a Get Acquainted call go to http://www.rapidbusinessplans.com/art-of-sba   SBA Jobs Board Hiring for your SBA department? 
We've got you covered! SBA Jobs Board is here to bridge the gap between you and top SBA talent. Our Art of SBA Lending audience is packed with experts ready for their next career move. List your openings with us to connect with the best in the industry and find the right fit for your team. Live now on our new Art of SBA website | https://www.artofsba.com/job-board   BDOs, let's start your week strong! Sign up for our weekly sales advice series, Sales Ammo. Every Monday morning, wake up to a piece of Ray's sales advice in your inbox to help you rise to the top. Subscribe here: https://www.artofsba.com/army-of-bdos   Loving The Art of SBA Lending episodes? Make sure to follow along with our sister shows, The BDO Show and SBA Today, each week with the links below! https://www.youtube.com/@TheBDOShow http://www.youtube.com/@SBAToday   Head to http://www.artofsba.com for more information and to sign up for our must-read monthly newsletter to stay up to date with The Art of SBA Lending.

Stephan Livera Podcast
The Evolution of Alby with Michael Bumann | SLP622

Dec 5, 2024 · 60:22


Bumi & Stephan explore the evolution of Alby from a browser extension to a self-custodial Lightning wallet, Alby Hub. The conversation delves into the integration of Nostr for self-sovereign digital identity, security considerations for browser extensions, and the role of LSPs in channel management. Bumi explains the architecture of Alby Hub, its user experience, and pricing models, emphasizing the importance of integrating Bitcoin into various applications. They also discuss the cost structures associated with Bitcoin services, the optimization of Lightning channels, and the challenges of on-chain payments. The conversation highlights the importance of merchant adoption and the innovative Nostr Wallet Connect (NWC) protocol, which decouples wallets from applications, making it easier for developers. They introduce Alby Go, a mobile application designed for seamless payments, and explore the future of self-custodial solutions in the cryptocurrency space.

Pi Tech
News: why neural networks aren't replacing doctors, how JetBrains is pushing back against VSCode, and why Mykhailo likes Anthropic more than OpenAI

Nov 29, 2024 · 53:13


The Living Strong Podcast with Kym Sellers
Dealing with inflammation

Nov 19, 2024 · 14:44


On this episode of the LSP, Kym discusses solutions for dealing with inflammation.

Thinking Elixir Podcast
227: Oban Web Goes Open Source?

Nov 5, 2024 · 29:35


News includes Oban Web going open source, making it more accessible for startups, a new community resource featuring over 80 Phoenix LiveView components, interesting insights from a frontend technology survey highlighting Phoenix's potential, the introduction of Klife, a high-performance Elixir + Kafka client, and more! Show Notes online - http://podcast.thinkingelixir.com/227

Elixir Community News:
  • https://www.youtube.com/shorts/mKp30PNM_Q4 – Parker Selbert announced that the Oban Web dashboard will be open sourced.
  • https://github.com/rails/solid_queue/ – The Rails community is working on a database-backed job queue called "Solid Queue". Mark shares a personal story about the significance of Oban Web being open sourced for startups.
  • https://x.com/shahryar_tbiz/status/1850844469307785274 – An announcement of an open source project with more than 80 Phoenix LiveView components.
  • https://github.com/mishka-group/mishka_chelekom – GitHub repository for the open source project with Phoenix LiveView components.
  • https://mishka.tools/chelekom/docs/ – Documentation and interactive examples for the Phoenix LiveView components.
  • https://x.com/ZachSDaniel1/status/1850882330249875883 – Zach Daniel mentions that Igniter is effectively used for installing components.
  • https://www.youtube.com/live/bHoCMMk2ksc – Dave Lucia will live-stream coding an Igniter installer for OpenTelemetry.
  • https://fluxonui.com/getting-started/introduction – Introduction to Fluxon UI, a paid resource with Phoenix LiveView components.
  • https://tsh.io/state-of-frontend/#frameworks – Results of a frontend technology survey where Phoenix is mentioned.
  • https://www.youtube.com/playlist?list=PLSk21zn8fFZAa5UdY76ASWAwyu_xWFR6u – YouTube playlist of Elixir Stream Week presentations.
  • https://elixirforum.com/t/2024-10-21-elixir-stream-week-five-days-five-streams-five-elixir-experts-online/66482/17 – Forum post about Elixir Stream Week featuring presentations and streams.
  • https://elixirforum.com/t/klife-a-kafka-client-with-performance-gains-over-10x/67040 – Introduction of Klife, a new Elixir + Kafka client with improved performance.
  • https://github.com/oliveigah/klife – GitHub repository for the Klife Kafka client in Elixir.
  • https://github.com/BeaconCMS/beacon/blob/main/ROADMAP.md – Roadmap for the BeaconCMS project.
  • https://x.com/josevalim/status/1850106541887689133?s=12&t=ZvCKMAXrZFtDX8pfjW14Lw – José Valim clarifies that Elixir and LSP remain separate projects with independent release schedules.
  • https://flutterfoundation.dev/blog/posts/we-are-forking-flutter-this-is-why/ – Blog post about Flutter forking into Flock to promote open-source community development.

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email the show at show@thinkingelixir.com.

Find us online:
  • Message the show: @ThinkingElixir (https://twitter.com/ThinkingElixir)
  • Message the show on Fediverse: @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
  • Email the show: show@thinkingelixir.com
  • Mark Ericksen: @brainlid (https://twitter.com/brainlid) / @brainlid@genserver.social (https://genserver.social/brainlid)
  • David Bernheisel: @bernheisel (https://twitter.com/bernheisel) / @dbern@genserver.social (https://genserver.social/dbern)

Land Stewardship Project's Ear to the Ground
Ear to the Ground 356: First Things First

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Nov 4, 2024 27:50


Thinking of applying for NRCS funds? First, advises vegetable and livestock farmer Klaus Zimmermann-Mayo, figure out what kind of farming you want to do and how you want to do it. More Information • Whetstone Farm • Go Farm Connect • NRCS Environmental Quality Incentives Program • NRCS Service Center Locator You can find LSP…  Read More → Source

Land Stewardship Project's Ear to the Ground
Ear to the Ground 355: Silver Buckshot

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Oct 31, 2024 26:54


Father-son team Joe and Matthew Fitzgerald are quite willing to share their insights with other farmers on how to get started in organic crop production. First piece of advice: sell your fishing boat. More Information • Fitzgerald Organic Mad Agriculture Video • Organic Agronomy Training Service • LSP Soil Health Web Page You can find LSP…  Read More → Source

Bitcoin Optech Podcast
Bitcoin Optech: SuperScalar Deep Dive Podcast

Bitcoin Optech Podcast

Play Episode Listen Later Oct 31, 2024 53:03


Dave Harding and Mike Schmidt are joined by ZmnSCPxj to discuss his SuperScalar proposal. Why a deep dive? (0:40) Proposal overview (1:58) Importance of reallocating liquidity (4:13) What about overloading channels with liquidity from the start? (9:42) Discussion of multi-LSP vs single LSP approaches (13:05) Ensuring unilateral exit is possible (15:22) Malicious users forcing unilateral closes (20:21) Decker–Wattenhofer channels vs John Law's tunable penalties (27:11) Decker–Wattenhofer relative lock times impact on users (38:44) Discussion of trustless non-P2P protocol structure (40:01) Contrasting SuperScalar with Ark (44:08) Implementation discussion (48:44)

The Translation Company Talk
S05E11: Building & Scaling a Global LSP Team

The Translation Company Talk

Play Episode Listen Later Oct 28, 2024 51:53


In this episode of The Translation Company Talk, we welcome back Jordan Evans, CEO of Language Network and Managing Partner of HireGlobo, an industry expert with deep insights into scaling Language Service Providers (LSPs) and building global teams. We dive into how LSPs are scaling up today, touching on virtual teams' structure, business enablement, and the challenges faced with managing hybrid teams across global locations. Jordan also discusses the technological hurdles that come with operating a global, hybrid team and shares strategies for overcoming them. As we explore the intricacies of global expansion, Jordan provides valuable advice on how LSPs can structure their businesses to fully benefit from a global presence, while carefully balancing the risks of scaling too quickly. He shares insights on how to manage cultural shifts within a growing team and explains how scaling impacts not just operations, but also the supply chain, including the linguist community. We also delve into the financial aspects of scaling, such as preparing for investment costs, evaluating different geographical jurisdictions, and the pros and cons of mergers and acquisitions as a growth strategy. Whether you're part of an LSP or another industry, this episode is full of practical advice on building and sustaining a thriving global team. Subscribe to the Translation Company Talk podcast on Apple Podcasts, iTunes, Spotify, Audible or your platform of choice. This episode of the Translation Company Talk podcast is brought to you by Hybrid Lynx.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 353: 7 Years Later

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Oct 27, 2024 20:49


Jon and Carin Stevens farm unforgiving land that leaves little room for mistakes. But thanks to a system based on no-till, cover cropping, and reintegrating livestock, a “victory year” has finally emerged from the ashes of failure. More Information • LSP Soil Health Web Page • Maple Grove Farms YouTube Page You can find LSP…  Read More → Source

Thinking Elixir Podcast
225: A BeaconCMS of Hope

Thinking Elixir Podcast

Play Episode Listen Later Oct 22, 2024 21:28


News includes upcoming info on new features in Elixir v1.18, the release of Beacon CMS v0.1 with new tools for developers, German Velasco's insightful video on the origins of Phoenix contexts, Alex Koutmos sharing his sql_fmt tool for cleaner SQL code in Ecto, an exciting new tool for the Mastodon community called MastodonBotEx, and more!
Show Notes online - http://podcast.thinkingelixir.com/225
Elixir Community News
- https://x.com/josevalim/status/1846109246116536567 – José Valim updated his Elixir Stream Week presentation to talk about Elixir v1.18.
- https://x.com/NickGnd/status/1846103330352697455 – Discussion about the new LSP server for Elixir v1.18.
- https://github.com/elixir-webrtc/ex_webrtc – ExWebRTC library for Elixir mentioned in the context of Elixir Stream Week.
- https://x.com/BeaconCMS/status/1844089765572026611 – Announcement of the Beacon CMS v0.1 release.
- https://www.youtube.com/watch?v=JBLOd9Oxwpc – Hype video for the new Beacon CMS release.
- https://github.com/BeaconCMS/beacon – The GitHub repository for Beacon CMS, an open-source CMS built with Phoenix LiveView.
- https://www.youtube.com/live/c2TLDiFv8ZI – Zach Daniel and Leandro's paired programming session on the Beacon CMS Igniter task.
- https://github.com/BeaconCMS/beacon_demo – The beacon_demo project helps users try Beacon CMS locally.
- https://www.youtube.com/watch?v=5jk0fIJOFuc – ElixirConf video related to Beacon CMS development.
Hexdeck.pm is a new community tool for browsing multiple HexDocs pages at once.
- https://hexdeck.pm/ – Website for hexdeck.pm, a documentation aggregator.
- https://github.com/hayleigh-dot-dev/hexdeck – GitHub repository for hexdeck.pm, created by Hayleigh from the Gleam team.
- https://github.com/elixir-lsp/elixir-ls/releases/tag/v0.24.1 – Update to ElixirLS, fixing several crashes.
German Velasco created a stream video explaining the origins of Phoenix "contexts".
- https://x.com/germsvel/status/1846137519508787644 – Tweet about German Velasco's stream video on Phoenix contexts.
- https://www.elixirstreams.com/tips/why-phoenix-contexts – German explains the history of Phoenix Contexts.
- https://www.youtube.com/watch?v=tMO28ar0lW8 – Chris McCord's keynote on Phoenix 1.3 at Lonestar ElixirConf 2017.
- https://phoenixframework.org/blog/phoenix-1-3-0-released – Blog post on the Phoenix 1.3 release.
- https://x.com/akoutmos/status/1843706957267656969 – Alex Koutmos' announcement of sql_fmt version 0.2.0 with support for the ~SQL sigil and a Mix Formatter plugin.
- https://github.com/akoutmos/sql_fmt – GitHub repository for sql_fmt, a SQL formatting tool.
- https://github.com/akoutmos/ecto_dbg – GitHub page for ecto_dbg, which uses sql_fmt for debugging Ecto SQL queries.
- https://mastodon.kaiman.uk/@neojet/113284100323613786 – MastodonBotEx simplifies interacting with the Mastodon API.
- https://github.com/kaimanhub/MastodonBot.ex – GitHub repository for MastodonBotEx, designed for Mastodon API interactions.
- https://codebeamnyc.com/#schedule – Details about the schedule for CodeBEAM NYC Lite on November 15, 2024.
- https://elixirfriends.transistor.fm/episodes/friend-3-tyler-young – Elixir Friends podcast episode with Tyler Young discussing marketing and technology topics.
- https://elixirfriends.transistor.fm/episodes/friend-2-david-bernheisel – Previous Elixir Friends podcast episode with David Bernheisel.
Do you have some Elixir news to share?
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)

Never Ending Adventure: An Adventure Time Podcast
#148 - Not so Sweet on the Candy Streets

Never Ending Adventure: An Adventure Time Podcast

Play Episode Listen Later Sep 24, 2024 52:05


S5E25 - Finn and Jake play detective, tracking down what is expected to be the masked villain of an LSP disaster... turns out they ain't as good as Joshua and Margaret!

The Deep End with Joey Mudd
The Deep End with Joey Mudd

The Deep End with Joey Mudd

Play Episode Listen Later Sep 24, 2024 115:58


The Deep End with Joey Mudd, with guests Darcy Thompson of the Louisville Story Program and Jeremy Wright, previewing the LSP's latest project, Glad About It: The Legacy of Gospel Music in Louisville.
Show #527. Originally aired September 18, 2024.
Artist - Song
The Guiding Stars - Been Dipped In The Water
The Solomonaires - Come Out of the Wilderness
The Butlerairs - He's So Good To Me
Joe Thomas - I Feel Like Pressing My Way
Joe Thomas & Cliff Butler - This Ole World
The Sensational Bells Of Joy - Come On Sinner
Rev. Eddie James & Family - Been To The Water
The Religious Five Quartet - Let Me Lean On You
Jimmy Ellis & Riverview Spiritual Singers - I've Come A Long, Long Way
Rev. Charles E. Kirby - Lord You Been Good To Me
University Of Louisville Black Diamond Choir - Feast of the Lord
The Deep River Song Birds - Gates To The City
Melvin Cuff & The Gospel Voices of Soul - Woke Up This Morning
The Junior Dynamics - God Is Using Me
The Gospel Motivators - Trust Him
The Lee Brothers - Keep On Trying To Make In
The Webster Singers - Stay By Our Side
Cleo Joyner III and the Metropolitan Community Choir - Spirit of Living God
Hosted on Acast. See acast.com/privacy for more information.

PodCannstatt by MeinVfB
Where does the VfB focus lie ahead of Madrid? [feat. Carlos Ubina] | Episode 302

PodCannstatt by MeinVfB

Play Episode Listen Later Sep 12, 2024 59:03


The topics of this episode:
• Welcome & topic overview 00:00:00
• Recap #LSP 00:02:00
• Youth academy newsflash 00:16:36
• Main focus: preview #RMAVfB 00:23:42
• Preview #BMGVfB 00:43:28
────────────────────────────
Our prediction game: https://www.kicktipp.de/meinvfb/
Our new subscription (with home jersey): https://produkte.stuttgarter-nachrichten.de/vfb/?wt=EPRBR
Madrid expects an invasion in red: https://stn.de/n26
The Champions League jersey: https://stn.de/n23
────────────────────────────
The MeinVfB PodCannstatt is presented by the Stuttgarter Nachrichten and the Stuttgarter Zeitung.
Hosts: Philipp Maisel, Felix Mahler
Production: Marian Hepp
You can find everything about MeinVfB here: https://linktr.ee/meinvfb
────────────────────────────
Legal notice (Impressum): https://www.meinvfb.de/impressum.html

Land Stewardship Project's Ear to the Ground
Ear to the Ground 347: Bite-by-Bite

Land Stewardship Project's Ear to the Ground

Play Episode Listen Later Sep 5, 2024 35:49


Mapping a rural region’s “community food assets” reveals isolated islands of opportunity in a sea of corn and soybeans. LSP’s Scott DeMuth says now is the time to connect the dots and create a new relationship between farmers, eaters, and the places they live in. More Information • LSP's Community-Based Food Systems Web Page • Report:…  Read More → Source

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Betteridge's law says no: with seemingly infinite flavors of RAG, and >2 million token context + prompt caching from Anthropic/DeepMind/DeepSeek, it's reasonable to believe that "in-context learning is all you need". But then there's Cosine Genie, the first to make a huge bet using OpenAI's new GPT-4o fine-tuning for code at the largest scale it has ever been used externally, resulting in what is now the #1 coding agent in the world according to SWE-Bench Full, Lite, and Verified.

SWE-Bench has been the most successful agent benchmark of the year, receiving honors at ICLR (our interview here) and recently being verified by OpenAI. Cognition (Devin) was valued at $2b after reaching 14% on it. So it is very, very big news when a new agent appears to beat all other solutions, by a lot. While this number is self-reported, it seems to be corroborated by OpenAI, who also award it the clear highest marks on SWE-Bench Verified.

The secret is GPT-4o finetuning on billions of tokens of synthetic data.

* Finetuning: As OpenAI says: Genie is powered by a fine-tuned GPT-4o model trained on examples of real software engineers at work, enabling the model to learn to respond in a specific way. The model was also trained to be able to output in specific formats, such as patches that could be committed easily to codebases. Due to the scale of Cosine's finetuning, OpenAI worked closely with them to figure out the size of the LoRA: “They have to decide how big your LoRA adapter is going to be… because if you had a really sparse, large adapter, you're not going to get any signal in that at all. So they have to dynamically size these things.”

* Synthetic data: we need to finetune on the process of making code work instead of only training on working code. “…we synthetically generated runtime errors.
Where we would intentionally mess with the AST to make stuff not work, or index out of bounds, or refer to a variable that doesn't exist, or errors that the foundational models just make sometimes that you can't really avoid, you can't expect it to be perfect.”

Genie also has a 4-stage workflow with the standard LLM OS tooling stack that lets it solve problems iteratively.

Full Video Pod
like and subscribe etc!

Show Notes
* Alistair Pullen - Twitter, Linkedin
* Cosine Genie launch, technical report
* OpenAI GPT-4o finetuning GA
* Llama 3 backtranslation
* Cursor episode and Aman + SWEBench at ICLR episode

Timestamps
* [00:00:00] Suno Intro
* [00:05:01] Alistair and Cosine intro
* [00:16:34] GPT4o finetuning
* [00:20:18] Genie Data Mix
* [00:23:09] Customizing for Customers
* [00:25:37] Genie Workflow
* [00:27:41] Code Retrieval
* [00:35:20] Planning
* [00:42:29] Language Mix
* [00:43:46] Running Code
* [00:46:19] Finetuning with OpenAI
* [00:49:32] Synthetic Code Data
* [00:51:54] SynData in Llama 3
* [00:52:33] SWE-Bench Submission Process
* [00:58:20] Future Plans
* [00:59:36] Ecosystem Trends
* [01:00:55] Founder Lessons
* [01:01:58] CTA: Hiring & Customers

Descript Transcript
[00:01:52] AI Charlie: Welcome back. This is Charlie, your AI cohost. As AI engineers, we have a special focus on coding agents, fine tuning, and synthetic data. And this week, it all comes together with the launch of Cosine's Genie, which reached 50 percent on SWE-Bench Lite, 30 percent on the full SWE-Bench, and 44 percent on OpenAI's new SWE-Bench Verified.
[00:02:17] All state-of-the-art results by the widest ever margin recorded compared to former leaders Amazon Q, AutoCodeRover, and Factory Code Droid. As a reminder, Cognition's Devin went viral with a 14 percent score just five months ago. Cosine did this by working closely with OpenAI to fine-tune GPT-4o, now generally available to you and me, on billions of tokens of code, much of which was synthetically generated.
[00:02:47] Alistair Pullen: Hi, I'm Ali.
Co-founder and CEO of Cosine, a human reasoning lab. And I'd like to show you Genie, our state-of-the-art, fully autonomous software engineering colleague. Genie has the highest score on SWE-Bench in the world. And the way we achieved this was by taking a completely different approach. We believe that if you want a model to behave like a software engineer, it has to be shown how a human software engineer works.[00:03:15] We've designed new techniques to derive human reasoning from real examples of software engineers doing their jobs. Our data represents perfect information lineage, incremental knowledge discovery, and step by step decision making. Representing everything a human engineer does logically. By actually training Genie on this unique dataset, rather than simply prompting base models, which is what everyone else is doing, we've seen that we're no longer simply generating random code until some works.[00:03:46] It's tackling problems like[00:03:48] AI Charlie: a human. Alistair Pullen is CEO and co-founder of Cosine, and we managed to snag him on a brief trip stateside for a special conversation on building the world's current number one coding agent. Watch out and take care.[00:04:07] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.[00:04:16] swyx: Hey, and today we're back in the studio. In person, after about three to four months in visa jail and travels and all other fun stuff that we talked about in the previous episode.[00:04:27] But today we have a special guest, Ali Pullen from Cosine. Welcome. Hi, thanks for having me. We're very lucky to have you because you're on a two-day trip to San Francisco. Yeah, I wouldn't recommend it. I would not
Don't fly from London to San Francisco for two days.[00:04:40] swyx: And you launched Genie on a plane.[00:04:42] On plane Wi-Fi, um, claiming state of the art in SWE-Bench, which we're all going to talk about. I'm excited to dive into your whole journey, because it has been a journey. I've been lucky to be a small angel in part of that journey. And it's exciting to see that you're launching to such acclaim and, you know, such results.[00:05:01] Alistair and Cosine intro[00:05:01] swyx: Um, so I'll go over your brief background, and then you can sort of fill in the blanks on what else people should know about you. You did your bachelor's in computer science at Exeter.[00:05:10] Speaker 6: Yep.[00:05:10] swyx: And then you worked at a startup that got acquired into GoPuff and round about 2022, you started working on a stealth startup that became a YC startup.[00:05:19] What's that? Yeah. So[00:05:21] Alistair Pullen: basically when I left university, I, I met my now co-founder, Sam. At the time we were both mobile devs. He was an Android developer; I was an iOS developer. And whilst at university, we built this sort of small consultancy, sort of, we'd um, be approached to build projects for people and we would just take them up and start with, they were student projects.[00:05:41] They weren't, they weren't anything crazy or anything big. We started with those and over time we started doing larger and larger projects, more interesting things. And then actually, when we left university, we just kept doing that. We didn't really get jobs, traditional jobs. It was also like in the middle of COVID, middle of lockdown.[00:05:57] So we were like, this is a pretty good gig. We'll just keep like writing code in our bedrooms. And yeah, that's it. We did that for a while. And then a friend of ours that we went to Exeter with started a YC startup during COVID. And it was one of these fast grocery delivery companies.
At the time I was living in the deepest, darkest countryside in England, where fast grocery companies are still not a thing.[00:06:20] So he, he sort of pitched me this idea and was like, listen, like I need an iOS dev, do you fancy coming along? And I thought, absolutely. It was a chance to get out of my parents' house, chance to move to London, you know, do interesting things. And at the time, truthfully, I had no idea what YC was. I had no idea.[00:06:34] I wasn't in the startup space. I knew I liked coding and building apps and stuff, but I'd never, never really done anything in that area. So I said, yes, absolutely. I moved to London just sort of as COVID was ending and yeah, worked at what was Fancy for about a year and a half. Then we brought Sam along as well.[00:06:52] So we, Sam and I, were the two engineers at Fancy for basically its entire life, and we built literally everything. So like the, the front, the client mobile apps, the, the backends, the internal like stock management system, the driver routing, algorithms, all those things. Literally like everything. It was my first.
And there was obviously a transitionary period, an integration period, like with all acquisitions, and we did that, and as soon as we'd vested what we wanted to vest, and as soon as we thought, okay, this chapter is sort of done, uh, in about 2022, We left and we knew that we wanted to go alone and try something like we'd had this taste.[00:07:54] Now we knew we'd seen how a like a YC startup was managed like up close and we knew that we wanted to do something similar ourselves. We had no idea what it was at the time. We just knew we wanted to do something. So we, we tried a small, um, some small projects in various different areas, but then GPT 3.[00:08:12] He'd seen it on Reddit and I'm his source of all knowledge. Yeah, Sam loves Reddit. I'd actually heard of GPT 2. And obviously had like loosely followed what OpenAI had done with, what was the game they trained a model to play? Dota. Was it Dota? Yeah. So I'd followed that and, I knew loosely what GPT 2 was, I knew what BERT was, so I was like, Okay, this GPT 3 thing sounds interesting.[00:08:35] And he just mentioned it to me on a walk. And I then went home and, like, googled GPT was the playground. And the model was DaVinci 2 at the time. And it was just the old school playground, completions, nothing crazy, no chat, no nothing. I miss completions though. Yeah. Oh, completion. Honestly, I had this conversation in open hours office yesterday.[00:08:54] I was like, I just went. I know. But yeah, so we, we, um, I started playing around with the, the playground and the first thing I ever wrote into it was like, hello world, and it gave me some sort of like, fairly generic response back. I was like, okay, that looks pretty cool. The next thing was. 
I looked through the docs, um, also they had a lot of example prompts because I had no idea.[00:09:14] I didn't know if the, if you could put anything in, I didn't know if you had to structure in a certain way or whatever, and I, and I saw that it could start writing like tables and JSON and stuff like that. So I was like, okay, can you write me something in JSON? And it did. And I was like, Oh, wow, this is, this is pretty cool.[00:09:28] Um, can it, can it just write arbitrary JSON for me? And, um, immediately as soon as I realized that my mind was racing and I like got Sam in and we just started messing around in the playground, like fairly innocently to start with. And then, of course, both being mobile devs and also seeing, at that point, we learned about what the Codex model was.[00:09:48] It was like, this thing's trained to write code, sounds awesome. And Copilot was start, I think, I can't actually remember if Copilot had come out yet, it might have done. It's round about the same time as Codex. Round about the same time, yeah. And we were like, okay, as mobile devs, let's see what we can do.[00:10:02] So the initial thing was like, okay, let's see if we can get this AI to build us a mobile app from scratch. We eventually built the world's most flimsy system, which was back in the day with like 4, 000 token context windows, like chaining prompts, trying to keep as much context from one to the other, all these different things, where basically, Essentially, you'd put an app idea in a box, and then we'd do, like, very high level stuff, figuring out what the stack should be, figuring out what the frontend should be written in, backend should be written in, all these different things, and then we'd go through, like, for each thing, more and more levels of detail, until the point that you're You actually got Codex to write the code for each thing.[00:10:41] And we didn't do any templating or anything. 
We were like, no, we're going to write all the code from scratch every time, which is basically why it barely worked. But there were like occasions where you could put in something and it would build something that did actually run. The backend would run, the database would work.[00:10:54] And we were like, Oh my God, this is insane. This is so cool. And that's what we showed to our co founder Yang. I met my co founder Yang through, through fancy because his wife was their first employee. And, um, we showed him and he was like, You've discovered fire. What is this? This is insane. He has a lot more startup experience.[00:11:12] Historically, he's had a few exits in the past and has been through all different industries. He's like our dad. He's a bit older. He hates me saying that. He's your COO now? He's our COO. Yeah. And, uh, we showed him and he was like, this is absolutely amazing. Let's just do something. Cause he, he, at the time, um, was just about to have a child, so he didn't have anything going on either.[00:11:29] So we, we applied to YC, got an interview. The interview was. As most YC interviews are short, curt, and pretty brutal. They told us they hated the idea. They didn't think it would work. And that's when we started brainstorming. It was almost like the interview was like an office hours kind of thing. And we were like, okay, given what you know about the space now and how to build things with these LLMs, like what can you bring out of what you've learned in building that thing into Something that might be a bit more useful to people on the daily, and also YC obviously likes B2B startups a little bit more, at least at the time they did, back then.[00:12:01] So we were like, okay, maybe we could build something that helps you with existing codebases, like can sort of automate development stuff with existing codebases, not knowing at all what that would look like, or how you would build it, or any of these things. 
And they were like, yeah, that sounds interesting.[00:12:15] You should probably go ahead and do that. You're in, you've got two weeks to build us an MVP. And we were like, okay, okay. We did our best. The MVP was absolutely horrendous. It was a CLI tool. It sucked. And, um, at the time we were like, we, we don't even know how to build what we want to build. And we didn't really know what we wanted to build, to be honest.[00:12:33] Like, we knew we wanted to try to help automate dev work, but back then we just didn't know enough about how LLM apps were built, the intricacies and all those things. And also, like, the LLMs themselves, like 4,000 tokens, you're not going very far, they're extremely expensive. So we ended up building a, uh, a code-base retrieval tool, originally.[00:12:51] Our thought process originally was, we want to build something that can do our jobs for us. That is like the gold star, we know that. We've seen like there are glimpses of it happening with our initial demo that we did. But we don't see the path of how to do that at the moment. Like the tech just wasn't there.[00:13:05] So we were like, well, there are going to be some things that you need to build this when the tech does catch up. So retrieval being one of the most important things, like the model is going to have to build like pull code out of a code base somehow. So we were like, well, let's just build the tooling around it.[00:13:17] And eventually when the tech comes, then we'll be able to just like plug it into our, our tooling and then it should work basically. And to be fair, that's basically what we've done. And that's basically what's happened, which is very fortunate. But in the meantime, whilst we were waiting for everything to sort of become available, we built this code-base retrieval tool.[00:13:34] That was the first thing we ever launched when we were in YC like that, and it didn't work.
It was really frustrating for us because it was just me and Sam like working like all hours trying to get this thing to work. It was quite a big task in and of itself, trying to get like a good semantic search engine working that could run locally on your machine.[00:13:51] We were trying to avoid sending code to the cloud as much as possible. And then for very large codebases, you're like, you know, millions of lines of code. You're trying to do some sort of like local HNSW thing that runs inside your VS Code instance that like eats all your RAM as you've seen in the past.[00:14:05] All those different things. Yep. Yeah.[00:14:07] swyx: My first call with[00:14:07] Alistair Pullen: you, I had trouble. You were like, yeah, it sucks, man. I know, I know. I know it sucks. I'm sorry. I'm sorry. But building all that stuff was essentially the first six to eight months of what at the time was Buildt. Which, by the way, build it. Build it. Yeah, it was a terrible, terrible name.[00:14:25] It was the worst,[00:14:27] swyx: like, part of trying to think about whether I would invest is whether or not people could pronounce it.[00:14:32] Alistair Pullen: No, when we, so when we went on our first ever YC, like, retreat, no one got the name right. They were like, build, build, well, um, and then we actually changed the name, Cosine, like, although some people would spell it as in like, as if you're cosigning for an apartment or something like that's like, can't win.[00:14:49] Yeah. That was what Buildt was back then. But the ambition, and I did a talk on this back in the end of 2022, the ambition to like build something that essentially automated our jobs was still very much like core to what we were doing. But for a very long time, it was just never apparent to us. Like. How would you go about doing these things?[00:15:06] Even when, like, you had 3.5 16K, it suddenly felt huge, because you've gone from 4 to 16, but even then 16k is like, a lot of Python files are longer than 16k.
So you can't, you know, before you even start doing a completion, even then we were like, eh, Yeah, it looks like we're still waiting. And then, like, towards the end of last year, you then start, you see 32k.[00:15:28] 32k was really smart. It was really expensive, but also, like, you could fit a decent amount of stuff in it. 32k felt enormous. And then, finally, 128k came along, and we were like, right, this is, like, this is what we can actually deal with. Because, fundamentally, to build a product like this, you need to get as much information in front of the model as possible, and make sure that everything it ever writes in output can be read.[00:15:49] traced back to something in the context window, so it's not hallucinating it. As soon as that model existed, I was like, okay, I know that this is now going to be feasible in some way. We'd done early sort of dev work on Genie using 3. 5 16k. And that was a very, very like crude way of proving that this loop that we were after and the way we were generating the data actually had signal and worked and could do something.[00:16:16] But the model itself was not useful because you couldn't ever fit enough information into it for it to be able to do the task competently and also the base intelligence of the model. I mean, 3. 5, anyone who's used 3. 5 knows the base intelligence of the model is. is lacking, especially when you're asking it to like do software engineering, this is quite quite involved.[00:16:34] GPT4o finetuning[00:16:34] Alistair Pullen: So, we saw the 128k context model and um, at that point we'd been in touch with OpenAI about our ambitions and like how we wanted to build it. We essentially are, I just took a punt, I was like, I'm just going to ask to see, can we like train this thing? 
Because at the time 4 Turbo had just come out and back then there was still a decent amount of lag time between like OpenAI releasing a model and then allowing you to fine tune it in some way.[00:16:59] They've gotten much better about that recently, like 4o fine tuning came out, I think, and 4o mini fine tuning came out like a day after the model did. And I know that's something they're definitely like, optimising for super heavily inside, which is great to see.[00:17:11] swyx: Which is a little bit, you know, for a year or so, YC companies had like a direct Slack channel to OpenAI.[00:17:17] We still do. Yeah. Yeah. So, it's a little bit of a diminishing of the YC advantage there. Yeah. If they're releasing this fine tuning[00:17:23] Alistair Pullen: ability like a day after. Yeah, no, no, absolutely. But like. You can't build a startup otherwise. The advantage is obviously nice and it makes you feel fuzzy inside. But like, at the end of the day, it's not that that's going to make you win.[00:17:34] But yeah, no, so like we'd spoken to Shyamal there, Devrel guy, I'm sure you know him. I think he's head of solutions or something. In their applied team, yeah, we'd been talking to him from the very beginning when we got into YC, and he's been absolutely fantastic throughout. I basically had pitched him this idea back when we were doing it on 3.5[00:17:53] 16k, and I was like, this is my, this is my crazy thesis. I want to see if this can work. And as soon as like that 128k model came out, I started like laying the groundwork. I was like, I know this definitely isn't possible because he released it like yesterday, but know that I want it. And in the interim, like, GPT 4, like, 8K fine tuning came out.[00:18:11] We tried that, it's obviously even fewer tokens, but the intelligence helped. And I was like, if we can marry the intelligence and the context window length, then we're going to have something special.
And eventually, we were able to get on the Experimental Access Program, and we got access to 4 Turbo fine tuning.[00:18:25] As soon as we did that, because in the entire run up to that we built the data pipeline, we already had all that set up, so we were like, right, we have the data, now we have the model, let's put it through and iterate, essentially, and that's, that's where, like, Genie as we know it today, really was born. I won't pretend like the first version of Genie that we trained was good.[00:18:45] It was a disaster. That's where you realize all the implicit biases in your data set. And you realize that, oh, actually this decision you made that was fairly arbitrary was the wrong one. You have to do it a different way. Other subtle things like, you know, how you write Git diffs in using LLMs and how you can best optimize that to make sure they actually apply and work and loads of different little edge cases.[00:19:03] But as soon as we had access to the underlying tool, we were like, we can actually do this. And I breathed a sigh of relief because I didn't know, it was like, it wasn't a done deal, but I knew that we could build something useful. I mean, I knew that we could build something that would be measurably good on whatever eval at the time that you wanted to use.[00:19:23] Like at the time, back then, we weren't actually that familiar with SWE-Bench. But once Devin came out and they announced their SWE-Bench score, I like, that's when my life took a turn. Challenge accepted. Yeah, challenge accepted. And that's where like, yes, that's where my friendships have gone. My sleep has gone. My weight.[00:19:40] Everything went into SWE-Bench and yeah, we, we, it was actually a very useful tool in building Genie beforehand. It was like, yes, vibe check this thing and see if it's useful. And then all of a sudden you have a, an actual measure to, to see like, could it do software engineering?
Not, not the best measure, obviously, but like it's a, it's the best that we've got now.[00:19:57] We, we just iterated and built and eventually we got it to the point where it is now. And a little bit beyond, since we actually, like, we actually got that score a couple of weeks ago, and yeah, it's been a hell of a journey from the beginning all the way now. That was a very rambling answer to your question about how we got here, but that's essentially the potted answer of how we got here.[00:20:16] Got the full[00:20:16] swyx: origin story[00:20:17] Alessio: out. Yeah, no, totally.[00:20:18] Genie Data Mix[00:20:18] Alessio: You mentioned bias in the data and some of these things. In your announcement video, you called Genie the world's first AI software engineering colleague. And you kind of highlighted how the data needed to train it needs to show how a human engineer works. I think maybe you're contrasting that to just putting code in it.[00:20:37] There's kind of like a lot more than code that goes into software engineering. How do you think about the data mixture, you know, and like, uh, there's this kind of known truth that code makes models better when you put in the pre training data, but since we put so much in the pre training data, what else do you add when you fine tune Genie?[00:20:54] Alistair Pullen: Yeah, I think, well, I think that sort of boils down fundamentally to the difference between a model writing code and a model doing software engineering, because the software engineering sort of discipline goes wider, because if you look at something like a PR, that is obviously an artifact of some thought and some work that has happened and has eventually been squashed into, you know, some diffs, right?[00:21:17] What the, very crudely, what the pre trained models are reading is they're reading those final diffs and they're emulating that and they're being able to output it, right? But of course, it's a super lossy thing, a PR.
You have no idea why or how, for the most part, unless there are some comments, which, you know, anyone who's worked in a company realizes PR reviews can be a bit dodgy at times, but you see that you lose so much information at the end, and that's perfectly fine, because PRs aren't designed to be something that perfectly preserves everything that happened, but what we realized was if you want something that's a software engineer, and very crudely, we started with like something that can do PRs for you, essentially, you need to be able to figure out why those things happened.[00:21:58] Otherwise, you're just going to rely, you essentially just have a code writing model, you have something that's good at HumanEval, but, but not very good at SWE-Bench. Essentially that realization was, was part of the, the kernel of the idea of, of the approach that we took to design the agent that, that is Genie. The way that we decided we want to try to extract what happened in the past, like as forensically as possible, has been and is currently like one of the, the main things that we focus all our time on, because doing that, getting as much signal out as possible, doing that as well as possible, is the biggest[00:22:31] thing that we've seen that determines how well we do on that benchmark at the end of the day. Once you've sorted things out, like output structure, how to get it consistently writing diffs and all the stuff that is sort of ancillary to the model actually figuring out how to solve a problem, the core bit of solving the problem is how did the human solve this problem and how can we best come up with how the human solved these problems.[00:22:54] So all the effort went in on that.
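For concreteness, a training record distilled from a PR might be laid out something like the sketch below. The `messages`/`role`/`content` schema is OpenAI's documented chat fine-tuning format, but everything inside the strings, the issue text, the plan, the patch, is invented for illustration; Cosine's actual derivation pipeline and data layout are not public.

```python
import json

# Hypothetical sketch: one chat-format fine-tuning record of the kind a
# PR-derivation pipeline might emit. The "messages" schema is OpenAI's
# real fine-tuning format; the contents are made up for this example.
record = {
    "messages": [
        {"role": "system", "content": "You are a software engineering agent."},
        {"role": "user", "content": "Issue: pagination returns 9 items per page instead of 10."},
        {
            "role": "assistant",
            "content": (
                "Plan: the slice bound in page() is off by one.\n"
                "Patch:\n"
                "-    return items[n : n + 9]\n"
                "+    return items[n : n + 10]\n"
            ),
        },
    ]
}

# JSONL is one JSON object per line, so a training file is just many of these.
line = json.dumps(record)
parsed = json.loads(line)
roles = [m["role"] for m in parsed["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```

The point of the "forensic" extraction described above is filling that assistant turn with the reasoning that led to the diff, not just the diff itself.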
And the mix that we ended up with was, as you've probably seen in the technical report and so on, all of those different languages and different combinations of different task types, all of that has run through that pipeline, and we've extracted all that information out.[00:23:09] Customizing for Customers[00:23:09] Alessio: How does that differ when you work with customers that have private workflows? Like, do you think, is there usually a big delta between what you get in open source and maybe public data versus like Yeah,[00:23:19] Alistair Pullen: yeah, yeah. When you scrape enough of it, most of open source is updating readmes and docs. It's hilarious, like we had to filter out so much of that stuff because when we first did the 16k model, like the amount of readme updating that went in, we did like no data cleaning, no real, like, we just sort of threw it in and saw what happened.[00:23:38] And it was just like, It was really good at updating readme, it was really good at writing some comments, really good at, um, complaining in Git reviews, in PR reviews, rather, and it would, again, like, we didn't clean the data, so you'd, like, give it some feedback, and it would just, like, reply, and, like, it would just be quite insubordinate when it was getting back to you, like, no, I don't think you're right, and it would just sort of argue with you, so The process of doing all that was super interesting because we realized from the beginning, okay, there's a huge amount of work that needs to go into like cleaning this, getting it aligned with what we want the model to do to be able to get the model to be useful in some way.[00:24:12] Alessio: I'm curious, like, how do you think about the customer willingness? To share all of this historical data, I've done a lot of developer tools investing in my career and getting access to the code base is always one of the hard things. Are people getting more cautious about sharing this information? 
In the past, it was maybe like, you know, you're using static analysis tool, like whatever else you need to plug into the code base, fine.[00:24:35] Now you're building a model based on it, like, uh, what's the discussion going into these companies? Are most people comfortable with, like, letting you see how they work and sharing everything?[00:24:44] Alistair Pullen: It depends on the sector, mostly. We've actually seen, I'd say, people becoming more amenable to the idea over time, actually, rather than more skeptical, because I think they can see the, the upside.[00:24:55] If this thing does what they say it does, it's going to be more help to us than it is a risk to our infosec. Um, and of course, like, companies building in this space, we're all going to end up, you know, complying with the same rules, and there are going to be new rules that come out to make sure that we're looking at your code, that everything is safe, and so on.[00:25:12] So from what we've seen so far, we've spoken to some very large companies that you've definitely heard of and all of them obviously have stipulations and many of them want it to be sandboxed to start with and all the like very obvious things that I, you know, I would say as well, but they're all super keen to have a go and see because like, despite all those things, if we can genuinely make them go faster, allow them to build more in a given time period and stuff.[00:25:35] It's super worth it to them.
There's four main parts of the workflow, which is finding files, planning action, writing code and running tests.[00:25:58] And controversially, you have set yourself apart from the Devins of the world by saying that things like having access to a browser is not that important for you. Is that an accurate reading of[00:26:09] Alistair Pullen: what you wrote? I don't remember saying that, but at least with what we've seen, the browser is helpful, but it's not as helpful as, like, RAGing the correct files, if that makes sense.[00:26:20] Like, it is still helpful, but obviously there are more fundamental things you have to get right before you get to, like, oh yeah, you can read some docs, or you can read a Stack Overflow article, and stuff like that.[00:26:30] swyx: Yeah, the phrase I was indexing on was, the other software tools are wrappers around foundational models with a few additional tools, such as a web browser or code interpreter.[00:26:38] Alistair Pullen: Oh, I see. No, I mean, no, I'm, I'm not, I'm not, I'm not deri, I'm deriding the, the, the approach, not the, not the tools. Yeah, exactly. So like, I would[00:26:44] swyx: say in my standard model of what a code agent should look like, uh, Devin has been very influential, obviously. Yeah. Yeah. Because you could just add the docs of something.[00:26:54] Mm-Hmm. . And like, you know, now I have, now when I'm installing a new library, I can just add docs. Yeah, yeah. Cursor also does this. Right. And then obviously having a code interpreter does help. I guess you have that in the form[00:27:03] Alistair Pullen: of running tests. I mean, uh, Genie has both of those tools available to it as well.[00:27:08] So, yeah, yeah, yeah. So, we have a tool where you can, like, put in URLs and it will just read the URLs. And you can also use the Perplexity API under the hood as well to be able to actually ask questions if it wants to. Okay. So, no, we use both of those tools as well.
Like, those tools are Super important and super key.[00:27:24] I think obviously the most important tools to these agents are like being able to retrieve code from a code base, being able to read Stack Overflow articles and what have you and just be able to essentially be able to Google like we do is definitely super useful.[00:27:38] swyx: Yeah, I thought maybe we could just kind of dive into each of those actions.[00:27:41] Code Retrieval[00:27:41] swyx: Code retrieval, one of the core indexer that Yes. You've worked on, uh, even as, as built, what makes it hard, what approach you thought would work, didn't work,[00:27:52] Alistair Pullen: anything like that. It's funny, I had a similar conversation to this when I was chatting to the guys from OpenAI yesterday. The thing is that searching for code, specifically semantically, at least to start with, I mean like keyword search and stuff like that is a, is a solved problem.[00:28:06] It's been around for ages, but at least being able to, the phrase we always used back in the day was searching for what code does rather than what code is. Like searching for functionality is really hard. Really hard. The way that we approached that problem was that obviously like a very basic and easy approach is right.[00:28:26] Let's just embed the code base. We'll chunk it up in some arbitrary way, maybe using an AST, maybe using number of lines, maybe using whatever, like some overlapping, just chunk it up and embed it. And once you've done that, I will write a query saying, like, find me some authentication code or something, embed it, and then do the cosine similarity and get the top of K, right?[00:28:43] That doesn't work. And I wish it did work, don't get me wrong. It doesn't work well at all, because fundamentally, if you think about, like, semantically, how code looks is very different to how English looks, and there's, like, not a huge amount of signal that's carried between the two. 
So what we ended up, the first approach we took, and that kind of did well enough for a long time, was: okay, let's train a model to be able to take in English code queries and then produce a hypothetical code snippet that might look like the answer, embed that, and then do the cosine similarity.[00:29:18] And that process, although very simple, gets you so much more performance out of the retrieval accuracy. And that was kind of like the start of our, of our engine, as we called it, which is essentially like the aggregation of all these different heuristics, like semantic, keyword, LSP, and so on. And then we essentially had like a model that would, given an input, choose which ones it thought were most appropriate, given the type of requests you had.[00:29:45] So the whole code search thing was a really hard problem. And actually what we ended up doing with Genie is we, um, let the model, through self play, figure out how to retrieve code. So actually we don't use our engine for Genie. So instead of like a request coming in and then like say GPT 4 with some JSON output being like, well, I think here we should use a keyword with these inputs and then we should use semantic.[00:30:09] And then we should like pick these results. It's actually like, a question comes in and Genie has self played in its training data to be able to be like, okay, this is how I'm going to approach finding this information. Much more akin to how a developer would do it. Because if I was like, Shawn, go into this new code base you've never seen before.[00:30:26] And find me the code that does this.
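The hypothetical-snippet trick described here (often called HyDE-style retrieval) can be sketched as follows. A real system would use a learned embedding model over AST-aware chunks; in this sketch a bag-of-words vector and two toy files stand in so the mechanics are runnable.

```python
import math
from collections import Counter

# Toy sketch of hypothetical-snippet retrieval: instead of embedding the
# English query directly, embed a code-shaped guess at the answer and
# compare that against the code chunks. A real system would use a learned
# embedding model; a bag-of-words vector stands in here.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two invented code chunks standing in for an indexed codebase.
chunks = {
    "auth.py": "def check_password(user, password): return hash(password) == user.pw_hash",
    "render.py": "def render_template(name, ctx): return TEMPLATES[name].format(**ctx)",
}

# The English query alone shares almost no tokens with the code...
query = "find me some authentication code"
# ...so we search with a hypothetical snippet a model might generate instead.
hypothetical = "def authenticate(user, password): check_password(user, password)"

scores = {path: cosine(embed(hypothetical), embed(src)) for path, src in chunks.items()}
best = max(scores, key=scores.get)
print(best)  # auth.py
```

Searching with `query` directly would score both files near zero, which is the "not much signal between English and code" problem described above; the code-shaped guess is what recovers the overlap.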
You're gonna probably, you might do some keywords, you're gonna look over the file system, you're gonna try to figure out from the directories and the file names where it might be, you're gonna like jump in one, and then once you're in there, you're probably gonna be doing the, you know, go to definition stuff to like jump from file to file and try to use the graph to like get closer and closer.[00:30:46] And that is exactly what Genie does. Starts on the file system, looks at the file system, picks some candidate files, is this what I'm looking for, yes or no, and If there's something that's interesting, like an import or something, it can, it can command click on that thing, go to definition, go to references, and so on.[00:31:00] And it can traverse the codebase that way.[00:31:02] swyx: Are you using the VS Code, uh, LSP, or? No,[00:31:05] Alistair Pullen: that's not, we're not like, we're not doing this in VS Code, we're just using the language servers running. But, we really wanted to try to mimic the way we do it as best as possible. 
And we did that during the self play process when we were generating the dataset, so.[00:31:18] Although we did all that work originally, and although, like, Genie still has access to these tools, so it can do keyword searches, and it can do, you know, basic semantic searches, and it can use the graph, it uses them through this process and figures out, okay, I've learned from data how to find stuff in codebases, and I think in our technical report, I can't remember the exact number, but I think it was around 65 or 66 percent retrieval accuracy overall, measured on, we know what lines we need for these tasks to find, for the task to actually be able to be completed, and we found about 66 percent of all those lines, which is one of the biggest areas of free performance that we can get a hold of, because when we were building Genie, truthfully, like, a lot more focus went on assuming you found the right information, you've been able to reproduce the issue, assuming that's true, how do you then go about solving it?[00:32:08] And the bulk of the work we did was on the solving. But when you go higher up the funnel, obviously, like, the funnel looks like, have you found everything you need for the task? Are you able to reproduce the problem that's seen in the issue? Are you then able to solve it? And the funnel gets narrower as you go down.[00:32:22] And at the top of the funnel, of course, is RAG. So I'm actually quite happy with that score. I think it's still pretty impressive considering the size of some of the codebases we're doing, we're using for this. But as soon as that, if that number becomes 80, think how many more tasks we get right. That's one of the key areas we're going to focus on when we continue working on Genie.[00:32:37] It'd be interesting to break out a benchmark just for that.[00:32:41] swyx: Yeah, I mean, it's super easy.
Because I don't know what state of the art is.[00:32:43] Alistair Pullen: Yeah, I mean, like, for a, um, it's super easy because, like, for a given PR, you know what lines were edited. Oh, okay. Yeah, you know what lines were[00:32:50] swyx: you can[00:32:51] Alistair Pullen: source it from SWE-Bench, actually.[00:32:52] Yeah, you can do it, you can do it super easily. And that's how we got that figure out at the other end. Um, for us being able to see it against, um, our historic models were super useful. So we could see if we were, you know, actually helping ourselves or not. And initially, one of the biggest performance gains that we saw when we did work on the RAG a bit was giving it the ability to use the LSP to like go to definition and really try to get it to emulate how we do that, because I'm sure when you go into an editor with that, where like the LSP is not working or whatever, you suddenly feel really like disarmed and naked.[00:33:20] You're like, oh my god, I didn't realize how much I actually used this to get about rather than just find stuff. So we really tried to get it to do that and that gave us a big jump in performance. So we went from like 54 percent up to like the 60s, but just by adding, focusing on that.[00:33:34] swyx: One weird trick. Yes.[00:33:37] I'll briefly comment here. So this is the standard approach I would say most, uh, code tooling startups are pursuing. The one company that's not doing this is magic.dev. So would you do things differently if you have a 10 million[00:33:51] Alistair Pullen: token context window? If I had a 10 million context window and hundreds of millions of dollars, I wouldn't have gone and built, uh, it's an LTM, it's not a transformer, right, that they're using, right?[00:34:03] If I'm not mistaken, I believe it's not a transformer. Yeah, Eric's going to come on at some point. Listen, they obviously know a lot more about their product than I do.
I don't know a great deal about how magic works. I don't think he knows anything yet. I'm not going to speculate. Would I do it the same way as them?[00:34:17] I like the way we've done it because fundamentally like we focus on the Active software engineering and what that looks like and showing models how to do that. Fundamentally, the underlying model that we use is kind of null to us, like, so long as it's the best one, I don't mind. And the context windows, we've already seen, like, you can get transformers to have, like, million, one and a half million token context windows.[00:34:43] And that works perfectly well, so like, as soon as you can fine tune Gemini 1. 5, then you best be sure that Genie will run on Gemini 1. 5, and like, we'll probably get very good performance out of that. I like our approach because we can be super agile and be like, Oh, well, Anthropic have just released whatever, uh, you know, and it might have half a million tokens and it might be really smart.[00:35:01] And I can just immediately take my JSONL file and just dump it in there and suddenly Genie works on there and it can do all the new things. Does[00:35:07] swyx: Anthropic have the same fine tuning support as OpenAI? I[00:35:11] Alistair Pullen: actually haven't heard any, anyone do it because they're working on it. They are partner, they're partnered with AWS and it's gonna be in Bedrock.[00:35:16] Okay. As far as, as far as I know, I think I'm, I think, I think that's true. Um, cool. Yeah.[00:35:20] Planning[00:35:20] swyx: We have to keep moving on to, uh, the other segments. Sure. Uh, planning the second piece of your four step grand master plan, that is the frontier right now. You know, a lot of people are talking about strawberry Q Star, whatever that is.[00:35:32] Monte Carlo Tree Search. Is current state of the art planning good enough? What prompts have worked? I don't even know what questions to ask. 
Like, what is the state of planning?[00:35:41] Alistair Pullen: I think it's fairly obvious that with the foundational models, like, you can ask them to think step by step and ask them to plan and stuff, but that isn't enough, because if you look at how those models score on these benchmarks, then they're not even close to state of the art.[00:35:52] Which ones are[00:35:52] swyx: you referencing? Benchmarks? So, like,[00:35:53] Alistair Pullen: just, uh, like, SWE-Bench and so on, right? And, like, even the things that get really good scores on HumanEval, or agents as well, because they have these loops, right? Yeah. Obviously these things can reason, quote unquote, but the reasoning is the model, like, it's constrained by the model's intelligence, I'd say, very crudely.[00:36:10] And what we essentially wanted to do was we still thought that, obviously, reasoning is super important, we need it to get the performance we have. But we wanted the reasoning to emulate how we think about problems when we're solving them as opposed to how a model thinks about a problem when we're solving it.[00:36:23] And that was, that's obviously part of, like, the derivation pipeline that we have when we design our data, but the reasoning that the models do right now, and who knows what Q Star, whatever it ends up being called, looks like, but certainly, on a small tangent to that, like, what I'm really excited about is when models like that come out, obviously, the signal in my data, when I regenerate, it goes up.[00:36:44] And then I can then train that model. It's already better at reasoning with the improved reasoning data, and just like, I can keep bootstrapping and keep leapfrogging every single time.
And that is like super exciting to me because I don't, I welcome like new models so much because immediately it just floats me up without having to do much work, which is always nice.[00:37:02] But at the state of reasoning generally, I don't see it going away anytime soon. I mean, that's like an autoregressive model doesn't think per se. And in the absence of having any thought, maybe, uh, an energy based model or something like that, maybe that's what Q Star is. Who knows? Some sort of, like, high level, abstract space where thought happens before tokens get produced.[00:37:22] In the absence of that for the moment, I think it's all we have and it's going to have to be the way it works. For what happens in the future, we'll have to see, but I think certainly it's never going to hinder performance to do it. And certainly, the reasoning that we see Genie do, when you compare it to like, if you ask GPT 4 to break down step by step an approach for the same problem, at least just on a vibe check alone, looks far better.[00:37:46] swyx: Two elements that I like, that I didn't see in your initial video, we'll see when, you know, this, um, Genie launches, is a planner chat, which is, I can modify the plan while it's executing, and then the other thing is playbooks, which is also from Devin, where, here's how I like to do a thing, and I'll use Markdown to specify how I do it.[00:38:06] I'm just curious if, if like, you know,[00:38:07] Alistair Pullen: those things help. Yeah, no, absolutely. We're a hundred percent. We want everything to be editable. Not least because it's really frustrating when it's not. Like if you're ever, if you're ever in a situation where like this is the one thing I just wish I could, and you'd be right if that one thing was right and you can't change it.[00:38:21] So we're going to make everything editable as well, including the code it writes.
Like you can, if it makes a small error in a patch, you can just change it yourself and let it continue and it will be fine. Yeah. So yeah, like those things are super important. We'll be doing those too.[00:38:31] Alessio: I'm curious, once you get to writing code, is most of the job done?[00:38:35] I feel like the models are so good at writing code when they're like, in small chunks that are like very well instructed. What's kind of the drop off in the funnel? Like once you get to like, you got the right files and you got the right plan. That's a great question[00:38:47] Alistair Pullen: because by the time this is out, there'll be another blog, there'll be another blog post, which contains all the information, all the learnings that I delivered to OpenAI's fine tuning team when we finally got the score.[00:38:59] Oh, that's good. Um, go for it. It's already up. And, um, yeah, yeah. I don't have it on my phone, but basically I, um, broke down the log probs. I basically got the average log prob for a token at every token position in the context window. So imagine an x axis from 0 to 128k and then the average log prob for each index in there.[00:39:19] As we discussed, like, the way Genie works normally is, you know, at the beginning you do your RAG, and then you do your planning, and then you do your coding, and that sort of cycle continues. The certainty of code writing is so much more certain than every other aspect of Genie's loop. So whatever's going on under the hood, the model is really comfortable with writing code.[00:39:35] There is no doubt, and it's like in the token probabilities. One slightly different thing, I think, to how most of these models work is, at least for the most part, if you ask GPT4 in ChatGPT to edit some code for you, it's going to rewrite the entire snippet for you with the changes in place.
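The log-prob breakdown described a moment ago can be sketched as below: bucket each token's log prob by its absolute position in the context window and average per bucket. The sample data here is synthetic (a made-up degradation curve); a real analysis would read the per-token `logprobs` that the completion API can return.

```python
import statistics

# Sketch of averaging per-token log probs by position in the context
# window, to see where model confidence drops off. Samples are synthetic;
# a real run would collect (position, logprob) pairs from API responses.

def avg_logprob_by_bucket(samples, bucket_size=1000):
    """samples: iterable of (token_index, logprob) pairs across many completions."""
    buckets = {}
    for idx, lp in samples:
        buckets.setdefault(idx // bucket_size, []).append(lp)
    return {b * bucket_size: statistics.mean(lps) for b, lps in sorted(buckets.items())}

# Synthetic samples: confidence degrading roughly linearly with position.
samples = [(i, -0.1 - i * 1e-5) for i in range(0, 120_000, 500)]
curve = avg_logprob_by_bucket(samples, bucket_size=60_000)
early, late = curve[0], curve[60_000]
print(early > late)  # True
```

With real data, plotting this curve by task outcome is what surfaces findings like a usable-context cliff well short of the advertised window.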
We train Genie to write diffs and, you know, essentially patches, right?[00:39:55] Because it's more token efficient and that is also fundamental. We don't write patches as humans, but it's like, the result of what we do is a patch, right? When Genie writes code, I don't know how much it's leaning on the pre training, like, code writing corpus, because obviously it's just read code files there.[00:40:14] It's obviously probably read a lot of patches, but I would wager it's probably read more code files than it has patches. So it's probably leaning on a different part of its brain, is my speculation. I have no proof for this. So I think the discipline of writing code is slightly different, but certainly is its most comfortable state when it's writing code.[00:40:29] So once you get to that point, so long as you're not too deep into the context window, another thing that I'll bring up in that blog post is, um, performance of Genie over the length of the context window degrades fairly linearly. So actually, I actually broke it down by probability of solving a SWE bench issue, given the number of tokens of the context window.[00:40:49] At 60k, it's basically 0.5. So if you go over 60k in context length, you are more likely to fail than you are to succeed just based on the amount of tokens you have on the context window. And when I presented that to the fine tuning team at OpenAI, that was super interesting to them as well. And that is more of a foundational model attribute than it is an us attribute.[00:41:10] However the attention mechanism works in, in GPT 4, however, you know, they deal with the context window at that point is, you know, influencing how Genie is able to perform, even though obviously all our, all our training data is perfect, right?
So even if, like, stuff is being solved in 110,000 tokens, sort of that area,[00:41:28] the training data still shows it being solved there, but it's just in practice, the model is finding it much harder to solve stuff down that end of the context window.[00:41:35] Alessio: Does that scale with the context? So for a 200k context size, is 100k tokens like the 0.5 point? I don't know. Yeah, but I,[00:41:43] Alistair Pullen: I, um, hope not. I hope you don't just take the context length and halve it and then say, oh, this is the usable context length.[00:41:50] But what's been interesting is that actually really digging into the data, looking at the log probs, looking at how it performs over the entire window, has influenced the short-term improvements we've made to Genie since we got that score. So we actually made some small optimizations to try to make sure, as best we can without, like, overdoing it, that stuff sits within that sort of range, because we know that's our sort of battle zone.[00:42:17] And if we go outside of that, we're starting to push the limits, we're more likely to fail. So just doing that sort of analysis has been super useful without actually messing with anything, um, like, more structural in getting more performance out of it.[00:42:29] Language Mix[00:42:29] Alessio: What about, um, different languages? So, in your technical report, the data makes sense.[00:42:34] 21 percent JavaScript, 21 percent Python, 14 percent TypeScript, 14 percent TSX, um, which is JavaScript, JavaScript.[00:42:42] Alistair Pullen: Yeah,[00:42:42] swyx: yeah, yeah. Yes,[00:42:43] Alistair Pullen: yeah, yeah. It's like 49 percent JavaScript. That's true, although TypeScript is so much superior, but anyway.[00:42:46] Alessio: Do you see, how good is it at just like generalizing?
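The solve-rate-versus-context-length breakdown works out to simple bucketing; a sketch over made-up (token count, solved) runs, with the 60k crossover from the conversation in mind:

```python
def solve_rate_by_bucket(runs, bucket=20_000):
    """runs: (context_tokens, solved) pairs, one per SWE-bench attempt.
    Returns {bucket_start: fraction solved}, making it easy to spot where
    the rate drops below 0.5, the 'more likely to fail than succeed' point."""
    stats = {}
    for tokens, solved in runs:
        key = (tokens // bucket) * bucket
        won, total = stats.get(key, (0, 0))
        stats[key] = (won + int(solved), total + 1)
    return {k: won / total for k, (won, total) in stats.items()}

# Made-up attempts: short contexts mostly succeed, long ones mostly fail.
runs = [
    (10_000, True), (15_000, True),
    (65_000, False), (70_000, True), (75_000, False), (78_000, False),
]
rates = solve_rate_by_bucket(runs)
```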
You know, if you're writing Rust or C or whatever else, it's quite different.[00:42:55] Alistair Pullen: It's pretty good at generalizing. Um, obviously, though, I think there's 15 languages in that technical report that we've covered. The ones that we picked in the highest mix were, uh, the ones that, selfishly, we internally use the most, and also that are, I'd argue, some of the most popular ones.[00:43:11] When we have more resources as a company, and more time, and, you know, once all the craziness that has just happened sort of dies down a bit, we are going to, you know, work on that mix. I'd love to see everything ideally be represented at a similar level. If you took GitHub as a data set, if you took how the languages are broken down in terms of popularity, that would be my ideal data mix to start.[00:43:34] It's just that it's not cheap. So, um, yeah, trying to have an equal amount of Ruby and Rust and all these different things is just, at our current state, not really what we're looking for.[00:43:46] Running Code[00:43:46] Alessio: There's a lot of good Ruby in my GitHub profile. You can have it all. Well, okay, we'll just train on that. For running tests, it sounds easy, but it isn't, especially when you're working in enterprise codebases that are, like, very hard to spin up.[00:43:58] Yes. How do you set that up? Like, how do you make a model actually understand how to run a codebase, which is different than writing code for a codebase?[00:44:07] Alistair Pullen: The model itself is not in charge of, like, setting up the codebase and running it.
So Genie sits on top of GitHub, and if you have CI running on GitHub, like GitHub Actions and stuff like that, then Genie essentially makes a call out to that, runs your CI, sees the outputs and then, like, moves on.[00:44:23] Making the model itself set up a repo wasn't scoped in what we wanted Genie to be able to do because, for the most part, at least most enterprises have some sort of CI pipeline running, and even a lot of hobbyist software development has some sort of basic CI running as well.[00:44:40] And that was, like, the lowest hanging fruit approach that we took. So when Genie ships, the way it will run its own code is it will basically run your CI and it will, like, take the, um, I'm not in charge of writing this, the rest of the team is, but I think it's the checks API on GitHub that allows you to grab that information and throw it in the context window.[00:44:56] Alessio: What's the handoff like with the person? So, Genie, you give it a task, and then how long are you supposed to supervise it for? Or are you just waiting for, like, the checks to eventually run, and then you see how it goes? Like, uh, what does it feel like?[00:45:11] Alistair Pullen: There are a couple of modes that it can run in, essentially.[00:45:14] It can run in, like, fully headless autonomous mode, so say you assign it a ticket in Linear or something. Then it won't ask you for anything. It will just go ahead and try. Or if you're in, like, the GUI on the website and you're using it, then you can give it a task and it might choose to ask you a clarifying question.[00:45:30] So like if you ask it something super broad, it might just come back to you and say, what does that actually mean? Or can you point me in the right direction for this?
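Pulling CI results back via the GitHub Checks API amounts to one documented REST call, GET /repos/{owner}/{repo}/commits/{ref}/check-runs; this sketch only builds the request (the repo, ref, and token below are placeholders), leaving the actual HTTP call and context-window injection to the caller.

```python
def check_runs_request(owner, repo, ref, token):
    """Build the GitHub REST call for CI results on a commit.

    GET /repos/{owner}/{repo}/commits/{ref}/check-runs is the Checks API
    endpoint; the response's check run names, conclusions, and output are
    what would get summarized into the model's context window.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/commits/{ref}/check-runs"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    }
    return url, headers

# Placeholder repo and token, for illustration only.
url, headers = check_runs_request("acme", "webapp", "main", "ghp_placeholder")
```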
Because, like, our decision internally was, it's going to piss people off way more if it just goes off and makes a completely, like,[00:45:45] ruined attempt at it because it just, like, from day one got the wrong idea. So it can ask you a lot of questions. And once it's going, much like a regular PR, you can leave review comments, issue comments, all these different things. And because, you know, it's been trained to be a software engineering colleague, it responds in actually a better way than a real colleague, because it's less snarky and less high and mighty.[00:46:08] And also the amount of filtering it has to do. When you train a model to, like, be a software engineer, essentially, it's like it can just do anything. It's like, yeah, it looks good to me, bro.[00:46:17] swyx: Let's[00:46:17] Alistair Pullen: ship it.[00:46:19] Finetuning with OpenAI[00:46:19] swyx: I just wanted to dive in a little bit more on your experience with the fine tuning team. John Allard was publicly very supportive in his commentary and, you know, was part of it.[00:46:27] Like, what's it like working with them? I also picked up that you initially started to fine tune what was publicly available, the 16 to 32k range. You got access to do more than that. Yeah. You've also trained on billions of tokens instead of the usual millions range. Just, like, take us through that fine tuning journey and any advice that you might have.[00:46:47] Alistair Pullen: It's been so cool, and this will be public by the time this goes out: OpenAI themselves have said we are pushing the boundaries of what is possible with fine tuning. Like, we are right on the edge, and we are genuinely working with them in figuring out how stuff works, what works, what doesn't work, because no one else is doing what we're doing.[00:47:06] They have found what we've been working on super interesting, which is why they've allowed us to do so much, like, interesting stuff.
Working with John, I mean, I had a really good conversation with John yesterday. We had a little brainstorm after the video we shot. And one of the things you mentioned, the billions of tokens: one of the things we've noticed, and it's actually a very interesting problem for them as well, is when you're[00:47:28] figuring out how big your PEFT adapter, your LoRA adapter, is going to be. Figuring that out is actually a really interesting problem, because they support data sets that are so small, you can put like 20 examples through it or something like that, and if you had a really sparse, large adapter, you're not going to get any signal in that at all.[00:47:44] So they have to dynamically size these things, and there is an upper bound, and actually we use models that are larger than what's publicly available. It's not publicly available yet, but when this goes out, it will be. But we have larger LoRA adapters available to us, just because of the amount of data that we're pumping through it.[00:48:01] And at that point, you start seeing really interesting other things, like you have to change your learning rate schedule and do all these different things that you don't have to do when you're on the smaller end of things. So working with that team is such a privilege, because obviously they're, like, at the top of their field in, you know, in the fine tuning space.[00:48:18] So as we learn stuff, they're learning stuff. And one of the things that I think really catalyzed this relationship is when we first started working on Genie, I delivered them a presentation, which will eventually become the blog post that you'll love to read soon. The information I gave them there I think is what showed them, like, oh wow, okay, these guys are really pushing the boundaries of what we can do here.[00:48:38] And truthfully, our data set, we view our data set right now as very small.
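The adapter-sizing problem is concrete: a rank-r LoRA adapter over a d_out x d_in weight adds r * (d_out + d_in) parameters, so sparse data plus a large rank means almost no signal per parameter. A sketch with made-up model dimensions:

```python
def lora_params(shapes, rank):
    """Parameters added by rank-`rank` LoRA adapters over weight matrices
    of the given (d_out, d_in) shapes: each matrix gains B (d_out x r)
    plus A (r x d_in), i.e. rank * (d_out + d_in) parameters."""
    return sum(rank * (d_out + d_in) for d_out, d_in in shapes)

# Made-up example: 32 transformer layers, one 4096 x 4096 projection each.
shapes = [(4096, 4096)] * 32
small = lora_params(shapes, rank=8)    # ~2.1M adapter parameters
large = lora_params(shapes, rank=256)  # 32x more, lots of capacity to
# leave under-trained if the fine-tuning set only has a handful of examples
```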
It's like the minimum that we're able to afford, literally afford, right now to be able to produce a product like this. And it's only going to get bigger. So yesterday while I was in their offices, we were planning, we were like, okay, this is where we're going in the next six to 12 months.[00:48:57] Like, we're putting our foot on the gas here, because this clearly works. Like, I've demonstrated this is a good, you know, the best approach so far. And I want to see where it can go. I want to see what the scaling laws are like for the data. And at the moment, like, it's hard to figure that out, because you don't know when you're running into, like, saturating a PEFT adapter, as opposed to actually, like, is this the model's limit?[00:49:15] Like, where is that? So finding all that stuff out is the work we're actively doing with them. And yeah, it's going to get more and more collaborative over the next few weeks as we explore larger adapters, pre-training extension, different things like that.[00:49:27] swyx: Awesome. I also wanted to talk briefly about the synthetic data process.[00:49:32] Synthetic Code Data[00:49:32] swyx: One of your core insights was that the vast majority of the time, the code that is published by a human is in a working state. And actually you need to fine tune on non-working code. So just, yeah, take us through that inspiration. How many rounds, uh, did you do? Yeah, I mean, uh,[00:49:47] Alistair Pullen: it might be generous to say that the vast majority of code is in a working state.[00:49:51] I don't know if I believe that. I was like, that's very nice of you to say that my code works. Certainly, it's not true for me. No, but you're right. It's an interesting problem.
And what we saw was, when we didn't do that, the model basically has to, like, one-shot the answer.[00:50:07] Because after that, it's like, well, I've never seen iteration before. How am I supposed to figure out how this works? So what you're alluding to there is, like, the self-improvement loop that we started working on. And that was in sort of two parts. We synthetically generated runtime errors, where we would intentionally mess with the AST to make stuff not work, or index out of bounds, or refer to a variable that doesn't exist, or errors that the foundational models just make sometimes that you can't really avoid, you can't expect it to be perfect.[00:50:39] So we threw some of those in with a probability of happening, and on the self-improvement side, I spoke about this in the blog post: essentially the idea is that you generate your data in sort of batches. First batch is, like, perfect, like one example, like here's the problem, here's the answer, go, train the model on it.[00:50:57] And then for the second batch, you then take the model that you trained before, that can look like one commit into the future, and then you let it have the first attempt at solving the problem. And hopefully it gets it wrong, and if it gets it wrong, then you have, like, okay, now the codebase is in this incorrect state, but I know what the correct state is, so I can do some diffing, essentially, to figure out how do I get from the state that it's in now to the state that I want it in, and then you can train the model to produce that diff next, and so on, and so on, so the model can then learn, and also reason as to why it needs to make these changes, to be able to solve problems iteratively and learn from its mistakes and stuff like that.[00:51:35] Alessio: And you picked the size of the data set just based on how much money you could spend generating it.
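The "intentionally mess with the AST" step can be sketched with Python's ast module: renaming a single variable read to an undefined name produces exactly the refer-to-a-variable-that-doesn't-exist class of runtime error described (the mutation target here is hand-picked for illustration):

```python
import ast

class BreakOneName(ast.NodeTransformer):
    """Rewrite the first *read* of `target` to an undefined name, so the
    mutated program raises NameError at runtime: a synthetic fault the
    model can later be trained to diagnose and repair."""

    def __init__(self, target):
        self.target = target
        self.done = False

    def visit_Name(self, node):
        if not self.done and isinstance(node.ctx, ast.Load) and node.id == self.target:
            self.done = True
            return ast.copy_location(
                ast.Name(id=self.target + "_missing", ctx=ast.Load()), node
            )
        return node

source = (
    "def total(xs):\n"
    "    acc = 0\n"
    "    for x in xs:\n"
    "        acc += x\n"
    "    return acc\n"
)
tree = BreakOneName("acc").visit(ast.parse(source))
ast.fix_missing_locations(tree)
broken = ast.unparse(tree)  # the now-broken source, for the training pair

namespace = {}
exec(compile(tree, "<synthetic>", "exec"), namespace)
try:
    namespace["total"]([1, 2, 3])
    raised = False
except NameError:  # the injected fault fires at runtime
    raised = True
```

Pairing `broken` with the original `source` then gives one (incorrect state, fix) training example of the kind the self-improvement loop consumes.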
Maybe you think you could just make more and get better results. How, what[00:51:42] Alistair Pullen: multiple of my monthly burn do I spend doing this? Yeah. Basically it was very much related to, yeah, just, like, capital, and, um, yes, with any luck that will be alleviated[00:51:53] swyx: very soon.[00:51:54] Alistair Pullen: Yeah.[00:51:54] SynData in Llama 3[00:51:54] swyx: Yeah. I like drawing references to other things that are happening in the wild. So, 'cause we only get to release this podcast once a week, Mm-Hmm, the Llama 3 paper also had some really interesting thoughts on synthetic data for code. I don't know if you have reviewed that. I'll highlight the back translation section.[00:52:11] Because one of your dataset focuses is updating documentation. I think that translation between natural language, English versus code, and

Oh My Glob! An Adventure Time Podcast
Season 6 - Episodes 13, 14 (Thanks for the Crabapples, Giuseppe!, Princess Day)

Play Episode Listen Later Aug 19, 2024 58:05


Amy and Matt discuss fan-favorite Adventure Time episode, "Thanks for the Crabapples, Giuseppe!" and then get into the Marceline and LSP-centric "Princess Day". It's a pretty dang swell time. A pretty dang swell time indeed. For Amy's episode predictions, we present... Caroline's Handy Dandy Grading Rubric: -Does the prediction contain the same characters as the actual episode? -If I worked at A.T. corp. would I produce this episode idea? -How much creative effort was put forth while coming up w/ this prediction? -Do the prediction and the actual ep. follow the same archetype (i.e. love & loss, heroic adventure, self-discovery, etc.)? -Would this story aid in the development of the overall plot and/or character development? -Do the events of the story seem plausible in regard to character traits (i.e. It would not be plausible for Finn to do something evil)? -Does a similar story line occur at some later point in the show? -Has a similar story line already occurred in a previously reviewed episode? Rate us on Apple Podcasts! itunes.apple.com/us/podcast/oh-my-glob-an-adventure-time-podcast/id1434343477?mt=2 Facebook: facebook.com/ohmyglobpodcast Contact us: ohmyglobpodcast@gmail.com And that Twitter thing: https://twitter.com/ohmyglobpodcast Amy: https://twitter.com/moxiespeaks Trivia Theme by Adrian C.

Land Stewardship Project's Ear to the Ground
Ear to the Ground 346: Pasture Pixie Dust

Play Episode Listen Later Aug 15, 2024 23:15


The NRCS's Jeff Duchene has set up grazing plans in 50 Minnesota counties, and has yet to find that proverbial "magic grass." But he's more convinced than ever that good management and good planning are their own kind of silver bullet. (Fifth and last episode in a series on LSP's 2024 Grazing School.) More Information • 4th in…

Land Stewardship Project's Ear to the Ground
Ear to the Ground 345: Grazing's Generational Jump

Play Episode Listen Later Aug 12, 2024 27:15


Rick Matt was flat on his back when it became evident how he and his son, Damien, could build an intergenerational farming operation based on soil health, diversity, and grazing. (Fourth episode in a series on LSP's 2024 Grazing School.) More Information • 3rd in the Grazing School Podcast Series: "Flerd is the Word" • 2nd in…

Land Stewardship Project's Ear to the Ground
Ear to the Ground 344: Flerd is the Word

Play Episode Listen Later Aug 5, 2024 20:20


Poor soil, short growing seasons, and little infrastructure: beginning farmer George Heller is proving that a successful grazing operation doesn't require optimal conditions. (Third episode in a series on LSP's 2024 Grazing School.) More Information • 2nd in the Grazing School Podcast Series: "Healthy Soil Vs. Plastic Worms" • 1st in the Grazing School Podcast Series:…

Land Stewardship Project's Ear to the Ground
Ear to the Ground 343: Healthy Soil Vs. Plastic Worms

Play Episode Listen Later Jul 30, 2024 17:04


Clifford Johnson calls himself an "honest regenerative hypocrite," which says a lot about his approach to building soil health on his family's crop and livestock farm. (Second episode in a series on LSP's 2024 Grazing School.) More Information • LSP's "Building Soil Health Profitably" Web Page • LSP's "Grazing & Soil Health" Web Page You…

Land Stewardship Project's Ear to the Ground
Ear to the Ground 342: Ignoring the Red Dress

Play Episode Listen Later Jul 22, 2024 24:33


Wholesome Family Farms is known for juggling enough enterprises to overwhelm even the most ambitious farmer. But Rachelle Meyer says a "three-legged stool" strategy keeps them balanced. (First episode in a series on LSP's 2024 Grazing School.) More Information • Wholesome Family Farms • Drone Footage of Rachelle & Jordan Meyer Moving Sheep • LSP's "Building Soil…

Thinking Elixir Podcast
211: A Passion for Testing

Play Episode Listen Later Jul 16, 2024 49:26


News includes the recent release of Elixir 1.17.2, updates to Livebook v0.13 making clustering in Kubernetes easier and introducing a proxy API for webhooks, and exciting developments in the Explorer library with remote dataframes. We also discuss handy Phoenix tips from Andrew Stewart and the new Gleam 1.3.0 features. In our interview, German Velasco shares his journey from Rails to Elixir, his contributions like Elixir Streams and the Phoenix Test library, and his philosophy on TDD. German also speaks about his upcoming talk at ElixirConf and his desire for integrating JavaScript testing capabilities. Tune in to hear all this and more! Show Notes online - http://podcast.thinkingelixir.com/211 (http://podcast.thinkingelixir.com/211) Elixir Community News - https://github.com/elixir-lang/elixir/releases/tag/v1.17.2 (https://github.com/elixir-lang/elixir/releases/tag/v1.17.2?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir 1.17.2 was released, which includes a Logger fix and some Mix-related bugfixes. - Livebook updates - follow-up - https://x.com/miruoss/status/1809633392088027193 (https://x.com/miruoss/status/1809633392088027193?utm_source=thinkingelixir&utm_medium=shownotes) – Michael Ruoss notes that Livebook v0.13 works well for clustering on Kubernetes. - https://github.com/mruoss/livebook-helm (https://github.com/mruoss/livebook-helm?utm_source=thinkingelixir&utm_medium=shownotes) – Michael Ruoss created a Livebook Helm chart for easier deployment in Kubernetes clusters. - https://artifacthub.io/packages/helm/livebook/livebook (https://artifacthub.io/packages/helm/livebook/livebook?utm_source=thinkingelixir&utm_medium=shownotes) – Helm chart for Livebook on Artifact Hub. 
- https://news.livebook.dev/livebook-0.13-expose-an-http-api-from-your-notebook-2wE6GY (https://news.livebook.dev/livebook-0.13-expose-an-http-api-from-your-notebook-2wE6GY?utm_source=thinkingelixir&utm_medium=shownotes) – Livebook gains a proxy API to allow it to receive webhooks, useful for publishing Livebook as an app. - https://x.com/livebookdev/status/1809203084154843279 (https://x.com/livebookdev/status/1809203084154843279?utm_source=thinkingelixir&utm_medium=shownotes) – Details on the new proxy API feature in Livebook. - https://x.com/hugobarauna/status/1809203637022863784 (https://x.com/hugobarauna/status/1809203637022863784?utm_source=thinkingelixir&utm_medium=shownotes) – Use Plug.Router and Kino.Proxy.listen for sending webhooks or events to your Livebook. - https://www.elixirstreams.com/tips/liveview-used-input (https://www.elixirstreams.com/tips/liveview-used-input?utm_source=thinkingelixir&utm_medium=shownotes) - LiveView 1.0 removes the phx-feedback-for annotation for showing and hiding input feedback. The update introduces the used_input?/2 helper on the server-side. - https://github.com/phoenixframework/phoenixliveview/blob/main/CHANGELOG.md#backwards-incompatible-changes-for-10 (https://github.com/phoenixframework/phoenix_live_view/blob/main/CHANGELOG.md#backwards-incompatible-changes-for-10?utm_source=thinkingelixir&utm_medium=shownotes) – LiveView 1.0 Upgrade instructions, including a JavaScript shim for backwards compatibility. - https://x.com/josevalim/status/1808560304172761191 (https://x.com/josevalim/status/1808560304172761191?utm_source=thinkingelixir&utm_medium=shownotes) – Explorer gets remote dataframes support. - https://github.com/elixir-explorer/explorer/pull/932 (https://github.com/elixir-explorer/explorer/pull/932?utm_source=thinkingelixir&utm_medium=shownotes) – A PR was merged into Explorer to support remote dataframes, enabling transparent proxy operations in a cluster. 
- Explorer is part of the Nx project for data analysis and machine learning, supporting one and two-dimensional data structures. The new feature also performs distributed garbage collection. - https://x.com/src_rip/status/1810360113343115521 (https://x.com/src_rip/status/1810360113343115521?utm_source=thinkingelixir&utm_medium=shownotes) – Andrew Stewart shares a Phoenix tip on creating a link button to submit a post action without a form. - https://hexdocs.pm/phoenixliveview/Phoenix.Component.html#link/1 (https://hexdocs.pm/phoenix_live_view/Phoenix.Component.html#link/1?utm_source=thinkingelixir&utm_medium=shownotes) – More details on using Phoenix's link component. - https://github.com/phoenixframework/phoenixliveview/blob/f778e5bb1a4b0a29f8d688bbc6c0b7182dea51ca/lib/phoenix_component.ex#L2734-L2737 (https://github.com/phoenixframework/phoenix_live_view/blob/f778e5bb1a4b0a29f8d688bbc6c0b7182dea51ca/lib/phoenix_component.ex#L2734-L2737?utm_source=thinkingelixir&utm_medium=shownotes) – Underlying implementation details of Phoenix.HTML's data attributes. - https://gleam.run/news/auto-imports-and-tolerant-expressions/ (https://gleam.run/news/auto-imports-and-tolerant-expressions/?utm_source=thinkingelixir&utm_medium=shownotes) – Gleam 1.3.0 release features LSP improvements, CLI commands for adding/removing dependencies, and support for Erlang/OTP 27 keywords. - https://www.erlang-solutions.com/blog/let-your-database-update-you-with-ectowatch/ (https://www.erlang-solutions.com/blog/let-your-database-update-you-with-ectowatch/?utm_source=thinkingelixir&utm_medium=shownotes) – EctoWatch by Brian Underwood allows notifications about database changes directly from PostgreSQL. - https://github.com/cheerfulstoic/ecto_watch (https://github.com/cheerfulstoic/ecto_watch?utm_source=thinkingelixir&utm_medium=shownotes) – EctoWatch GitHub repository. 
- https://github.com/ityonemo/protoss (https://github.com/ityonemo/protoss?utm_source=thinkingelixir&utm_medium=shownotes) – Isaac Yonemoto's Protoss library update, improving ergonomics of setting up protocols. - https://www.youtube.com/watch?v=dCRGgFkCkmA (https://www.youtube.com/watch?v=dCRGgFkCkmA?utm_source=thinkingelixir&utm_medium=shownotes) – Watch a video explaining the Protoss library. - https://hexdocs.pm/protoss/Protoss.html (https://hexdocs.pm/protoss/Protoss.html?utm_source=thinkingelixir&utm_medium=shownotes) – Protoss documentation. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://www.elixirstreams.com/ (https://www.elixirstreams.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Short video tips that German creates and shares. - https://www.testingliveview.com/ (https://www.testingliveview.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Video course focused on testing LiveViews - https://github.com/germsvel/phoenix_test (https://github.com/germsvel/phoenix_test?utm_source=thinkingelixir&utm_medium=shownotes) – PhoenixTest provides a unified way of writing feature tests -- regardless of whether you're testing LiveView pages or static (non-LiveView) pages. 
- https://www.youtube.com/watch?v=JNWPsaO4PNM (https://www.youtube.com/watch?v=JNWPsaO4PNM?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf 2023 - German Velasco - Using DDD concepts to create better Phoenix Contexts - https://www.mechanical-orchard.com/ (https://www.mechanical-orchard.com/?utm_source=thinkingelixir&utm_medium=shownotes) - https://github.com/elixir-wallaby/wallaby (https://github.com/elixir-wallaby/wallaby?utm_source=thinkingelixir&utm_medium=shownotes) Guest Information - https://x.com/germsvel (https://x.com/germsvel?utm_source=thinkingelixir&utm_medium=shownotes) – on Twitter - https://github.com/germsvel (https://github.com/germsvel?utm_source=thinkingelixir&utm_medium=shownotes) – on Github - https://www.germanvelasco.com/ (https://www.germanvelasco.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog - https://www.testingliveview.com/ (https://www.testingliveview.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Testing LiveView course site - https://elixirstreams.com (https://elixirstreams.com?utm_source=thinkingelixir&utm_medium=shownotes) – Short video tips Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Land Stewardship Project's Ear to the Ground
Ear to the Ground 341: Seeds of Local Democracy

Play Episode Listen Later Jun 24, 2024 39:51


LSP's political action partner, the Land Stewardship Action Fund, is working to show that vibrant rural communities require local people participating in decision-making, one vote at a time. More Information • Land Stewardship Action Fund • Land Stewardship Letter Article: "Land Stewardship Action Fund's Local Impact" • LSP's Policy Campaigns You can find LSP Ear…

World Languages Collaborative Podcast
Episode 22 (Season 3: Ep. 3): Diana Ruggiero and the Importance of Cultural Fluency

Play Episode Listen Later May 28, 2024 50:13


Welcome to the World Languages Collaborative Podcast, where we delve into the art and science of language teaching and learning. Our guest, Dr. Diana Ruggiero, Full Professor of Spanish at the University of Memphis, joins us today. Diana is a renowned expert in world languages for specific purposes, particularly Spanish, and is celebrated globally for her contributions to both scholarship and teaching in this field. Her expertise extends to Spanish for healthcare, community service learning, and the Latino community in Memphis. Episode Highlights: the importance of LSP in today's globalized world; how to create relevant and engaging language courses tailored to specific professional contexts; the role of cultural competence in LSP; best practices for integrating service-learning into LSP courses. This is a great follow-up to our last episode with Dr. Darcy Lear. Be sure to check out that episode as well! Show notes: Dr. Ruggiero's article, "Hybrid Spanish: Succeeding in First-Year College Foreign Language Class through Metacognitive Awareness," in Currents in Teaching and Learning, Volume 5, Nos. 1 & 2, Fall 2012, pages 79-93; Canopy Learn; National Culturally and Linguistically Appropriate Services Standards; The Latino Patient.

The EPAM Continuum Podcast Network
The Resonance Test 91: Open Source with Christopher Spalding, Rachel Fadlon, and Chris Howard

Play Episode Listen Later May 21, 2024 30:54


“Open source” is, of course, a technology term. But, as it turns out, when you connect tech-minded people with those who don't necessarily think of themselves as IT nerds, something magical can happen. In this case, what works in the digital world—transparency, community, collaboration—has a funny way of spilling over into the analog world. Because, well, people are people. We're wired to connect. In today's episode of *The Resonance Test,* EPAM's open source sage Chris Howard chats up two open source experts from EBSCO Information Services: Christopher Spalding, Vice President of Product, and Rachel Fadlon, Vice President of SaaS Marketing and National Conferences & Events. EBSCO is a founding member of Folio, an open source library services platform (LSP), to which EPAM contributes. The Open Source Initiative (OSI) maintains a precise definition for the term, but in broad strokes, open source refers to software containing source code that can be edited and used by anyone. We all use it every day without realizing it. Indeed, open source powers the internet as we know it. Howard asked Spalding and Fadlon to reflect on what open source has been like at EBSCO, so other companies and industries can learn from an open source project that has achieved scale. Folio has allowed developers and librarians to work together in an unprecedented way. Being part of the Folio community, says Fadlon, has dramatically transformed the way EBSCO interacts with customers across the company. The relationships that develop organically in an open source community, which are less formal and more “person to person,” says Fadlon, have influenced EBSCO to be more community-oriented in all aspects of the business. “The way that you approach someone in the library as a community member *to* a community member is very different than the way we were approaching our customers before,” she says. “We've made a lot more things more transparent and open” since joining Folio. 
Spalding says even the language has changed around communications more broadly. “The focus is on, ‘Well, why would that be closed? Let's make that open. Why wouldn't we talk about that?' Let's put it all on the table because we get feedback instantly, and then we know the direction that we go as a partnership with the larger community.” Of course, the trio also talked about security and artificial intelligence, the latter playing out differently in different regions. Open source made headlines recently when Linux, one of the most well-known examples of open source, narrowly avoided a cybersecurity disaster thanks to an eagle-eyed engineer. Open source comes with risks, like anything online. Spalding says security concerns might have pushed libraries away from open source a few years ago, but now, increasingly, libraries are adopting the open source adage: “More eyes, fewer bugs. And definitely, more eyes, better security.” Howard agrees. “We shouldn't be afraid of having all of those eyes on us… One of my developers calls it kind of ‘battle testing' the software, throwing it out to the world and saying, ‘Does this do what you want it to do?' And if it doesn't, at least you can tell me … and I can go and fix it or you can even fix it for me if you want to. And I think we're now finding more and more organizations that actually find that more attractive than scary.” Open yourself up to a more flexible, transparent future by listening to this engaging conversation. Host/Producer: Lisa Kocian Engineer: Kyp Pilalas Executive Producer: Ken Gordon

Thinking Elixir Podcast
201: Thinking Elixir News

Thinking Elixir Podcast

Play Episode Listen Later May 7, 2024 18:30


This week's podcast dives into the latest tech updates, including the release of Lexical 0.6.0 with its impressive performance upgrades and new features for Phoenix controller completions. We'll also talk about building smarter Slack bots with Elixir, and the LiveView support enhancements that bolster security against spam connections. Plus, we celebrate the 5-year milestone of Saša Jurić's influential “Soul of Erlang and Elixir” talk. Of course, we have to touch on the FTC's impactful ruling that bans non-compete employment clauses, a significant shift that will likely shake up the tech industry and innovation landscape. Stay tuned for this and more!

Show Notes online - http://podcast.thinkingelixir.com/201 (http://podcast.thinkingelixir.com/201)

Elixir Community News
- https://github.com/lexical-lsp/lexical/releases/tag/v0.6.0 (https://github.com/lexical-lsp/lexical/releases/tag/v0.6.0?utm_source=thinkingelixir&utm_medium=shownotes) – The Lexical 0.6.0 release includes document and workspace symbols, improved Phoenix controller completions, and enhanced indexing performance.
- https://benreinhart.com/blog/verifying-slack-requests-elixir-phoenix/ (https://benreinhart.com/blog/verifying-slack-requests-elixir-phoenix/?utm_source=thinkingelixir&utm_medium=shownotes) – Ben Reinhart's blog post details how to cryptographically verify event notifications from Slack in Phoenix apps for Slack bots.
- https://twitter.com/PJUllrich/status/1784707877157970387 (https://twitter.com/PJUllrich/status/1784707877157970387?utm_source=thinkingelixir&utm_medium=shownotes) – Peter Ullrich has launched a LiveView-oriented course on building forms, as announced on his Twitter account.
- https://indiecourses.com/catalog/building-forms-with-phoenix-liveview-2OPYIqaekkZwrpgLUZOyZV (https://indiecourses.com/catalog/building-forms-with-phoenix-liveview-2OPYIqaekkZwrpgLUZOyZV?utm_source=thinkingelixir&utm_medium=shownotes) – The course covers building forms with Phoenix LiveView, including various types of schemas and dynamic fields.
- https://paraxial.io/blog/live-view-support (https://paraxial.io/blog/live-view-support?utm_source=thinkingelixir&utm_medium=shownotes) – Michael Lubas outlines security-focused support for LiveView on Paraxial.io, including protection against initial connection and websocket spam.
- https://github.com/nccgroup/sobelow/pull/123 (https://github.com/nccgroup/sobelow/pull/123?utm_source=thinkingelixir&utm_medium=shownotes) – Work on adding HEEx support to Sobelow.XSS.Raw, part of Sobelow's security-focused static analysis for the Phoenix Framework.
- https://twitter.com/sasajuric/status/1784958371998601526 (https://twitter.com/sasajuric/status/1784958371998601526?utm_source=thinkingelixir&utm_medium=shownotes) – It's the 5-year anniversary of Saša Jurić's “Soul of Erlang and Elixir” talk, recommended for its lasting relevance in the development community.
- https://www.youtube.com/watch?v=JvBT4XBdoUE (https://www.youtube.com/watch?v=JvBT4XBdoUE?utm_source=thinkingelixir&utm_medium=shownotes) – Saša Jurić's influential “Soul of Erlang and Elixir” talk is still very relevant and worth watching, even five years later.
- https://www.elixirconf.eu/ (https://www.elixirconf.eu/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf EU 2025 dates and location have been announced, with a waitlist available for those interested in attending.
- https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-announces-rule-banning-noncompetes (https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-announces-rule-banning-noncompetes?utm_source=thinkingelixir&utm_medium=shownotes) – The FTC rule banning non-compete clauses aims to increase wages, entrepreneurship, and overall economic dynamism in the US technology sector.
- While bans on non-compete clauses for technology workers are in effect, trade secret laws and NDAs continue to provide employers with protection against information leaks.

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)

Find us online
- Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com)
- Mark Ericksen - @brainlid (https://twitter.com/brainlid)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel - @bernheisel (https://twitter.com/bernheisel)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
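The Slack-verification item above rests on Slack's documented request-signing scheme: Slack sends an X-Slack-Request-Timestamp and an X-Slack-Signature header, where the signature is an HMAC-SHA256 of "v0:{timestamp}:{raw body}" keyed with the app's signing secret. Ben's post implements this in Elixir/Phoenix; here is a minimal sketch of the same idea in Python (function name and parameters are illustrative, not from the post):

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str, body: str,
                           signature: str, max_age_secs: int = 300) -> bool:
    """Verify a Slack request per Slack's 'v0' signing scheme."""
    # Reject stale timestamps to limit replay attacks.
    if abs(time.time() - int(timestamp)) > max_age_secs:
        return False
    # The signature base string is "v0:{timestamp}:{raw request body}".
    basestring = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```

The important details carry over to any language: sign the raw (unparsed) body, bound the timestamp's age, and compare digests in constant time.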

the progressive bitcoiner
TPB85 - Bitcoin Mining and the Future of Energy with Harry Sudock

the progressive bitcoiner

Play Episode Listen Later Apr 30, 2024 87:28


"Bitcoin is those last five puzzle pieces we're waiting for to achieve incredible quality of life." - Harry Sudock

My guest today is Harry Sudock. Harry is the Chief Strategy Officer at Griid, a purpose-built American infrastructure company that has operated bitcoin mining facilities on low-cost, low-carbon energy since 2019, and a partner at Bitcoin Park, a community-supported campus in Nashville focused on grassroots freedom tech adoption and a home for bitcoiners to work, learn, collaborate, and build. In this episode we discuss Bitcoin mining's role in expanding renewable energy, reducing emissions, and empowering communities. Harry explains Bitcoin's proof-of-work mechanism, addresses common misconceptions, and highlights how Bitcoin mining incentivizes clean energy solutions through free market dynamics.

Harry's book recommendation: Energy and Civilization: A History, by Vaclav Smil. https://www.amazon.com/Energy-Civilization-History-MIT-Press/dp/0262035774

Follow Harry on X and nostr
You can find Trey on nostr, X, and via the pod's social channels

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code field under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/tpbpod

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe
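The proof-of-work mechanism Harry explains can be illustrated with a toy sketch: repeatedly hash a candidate block with different nonces until the hash falls below a difficulty target. This is illustrative only, not real mining — Bitcoin hashes an 80-byte block header with double SHA-256 at vastly higher difficulty, and the names below are made up for the example:

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> tuple[int, str]:
    """Search for a nonce whose double-SHA-256 of header+nonce falls
    below the target -- the core proof-of-work loop, in miniature."""
    target = 1 << (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        data = f"{header}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# ~65,000 hashes on average at 16 bits of difficulty
nonce, digest = mine("block-data", 16)
```

The asymmetry is the point: finding a valid nonce takes many hash attempts, but anyone can verify the result with a single recomputation — which is what lets miners profitably soak up stranded or intermittent energy while the network stays cheap to audit.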

the progressive bitcoiner
TPB84 - Bitcoin Mining is Climate Action with Troy Cross and Margot Paez

the progressive bitcoiner

Play Episode Listen Later Apr 23, 2024 76:29


"We do these things not because we expect to win, but because it's the right thing to do ethically or morally." - Margot Paez

My guests today are Troy Cross and Margot Paez. Troy is an environmentalist and Professor of Philosophy at Reed College who has also written, researched, and consulted on Bitcoin mining and its intersection with energy, environmentalism, and climate change. Margot is a climate change physicist and PhD candidate at the Georgia Institute of Technology, focusing on research relating to climate change statistical modeling, bitcoin mining, and combating misinformation regarding bitcoin mining and the environment. They are both Fellows at the Bitcoin Policy Institute. In this episode we discuss key takeaways from the recent Bitcoin Policy Summit, debunk environmental FUD around Bitcoin mining, share data-driven insights on bitcoin mining's flexibility and potential climate benefits, and explore how Bitcoiners and environmentalists can find common ground. We also discuss finding hope when fighting potential climate catastrophe seems overwhelming, why market-based solutions are required if we want real climate action today, and the unfortunate authoritarian bent that some leftists have adopted from the very same figures they criticized on the far right.

Follow Troy and Margot on X
Support Margot's mission and research to fight Bitcoin mining FUD

You can find Trey on nostr, X, and via the pod's social channels

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code field under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/tpbpod

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe

the progressive bitcoiner
TPB83 - Bitcoin is a Paradigm Shift with CK

the progressive bitcoiner

Play Episode Listen Later Apr 16, 2024 80:08


"Bitcoin is way beyond any sort of central ethos. It's a force of nature." - CK

My guest today is Christian Keroles, aka CK. CK is the Director of Financial Freedom at the Human Rights Foundation, focusing on advancing open-source Bitcoin development and global Bitcoin adoption. In his prior position as Chief Operating Officer (COO) at BTC Inc, CK was instrumental in shaping Bitcoin Magazine into the foremost publication dedicated to all things Bitcoin. He also played a pivotal role in establishing one of the world's largest and most influential conferences in the Bitcoin and fintech industries: the Bitcoin Conference. In this episode we discuss Bitcoin's transformative potential as a global paradigm shift. We explore Bitcoin's anti-fragile nature, the need for solutions that work under realistic conditions, and the importance of adapting to the evolving landscape. CK emphasizes the limitless possibilities within Bitcoin's consensus rules and the significance of environmental solutions.

Follow CK on X and nostr
You can find Trey on nostr, X, and via the pod's social channels

To learn more about HRF's Financial Freedom programs, the Bitcoin Dev Fund, and to subscribe to the weekly Financial Freedom Report, head to https://hrf.org/programs/financial-freedom/

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code field under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/theprogressivebitcoiner

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe

the progressive bitcoiner
TPB82 - A Snapshot of Hyperinflation in Lebanon with Ahmed Klink

the progressive bitcoiner

Play Episode Listen Later Apr 9, 2024 82:07


"Every day, fiat currency buys you a little bit less Bitcoin." - Ahmed

My guest today is Ahmed Klink. Ahmed is Co-Founder and Partner at Sunday Afternoon, a Manhattan-based, BIPOC- and women-owned creative company that also supports Bitcoin brands like Zeus. Born in Lebanon and later moving to France, then the U.S., Ahmed discusses the currency crisis in Lebanon, his family's history and perspectives in Lebanon before and after hyperinflation and currency collapse, the role of Bitcoin in global sound money adoption, and the importance of creative design and branding in the Bitcoin space. We explore Bitcoin's potential as a store of value and better money, and its impact on the future of money.

Follow Ahmed on nostr
You can find Trey on nostr and via the pod's social channels

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/theprogressivebitcoiner

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe

the progressive bitcoiner
TPB81 - Universal Basic Income: Coercion or Freedom? with Scott Santens

the progressive bitcoiner

Play Episode Listen Later Apr 2, 2024 115:45


"Basic income is not left or right. It's forward." - Scott Santens

My guest today is Scott Santens, with special guest co-host and friend of the pod Margot Paez. Scott is a leading Universal Basic Income (UBI) advocate and has been researching the subject since 2013. He is also the President and Founder of the Income to Support All (ITSA) Foundation, which supports ambitious projects that help realize a foundation of unconditional universal basic income through research, storytelling, and implementation. In this episode we discuss Universal Basic Income's potential to address poverty, inequality, and the future of work. Santens explains how UBI can empower individuals, boost entrepreneurship, and create a more voluntary labor market, while addressing common misconceptions about what UBI is. We also explore the intersection of UBI and Bitcoin, with particular focus on renewable bitcoin mining projects and community benefits.

To learn more, including Scott's social links, head to Scott's website: https://www.scottsantens.com/

You can find Trey on nostr, X, and via the pod's social channels
You can find Margot on X: https://twitter.com/jyn_urso

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/theprogressivebitcoiner

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe

the progressive bitcoiner
TPB80 - Bitcoin Rights are Human Rights with Lyudmyla Kozlovska

the progressive bitcoiner

Play Episode Listen Later Mar 26, 2024 70:00


“We have to be proactive. Otherwise, if you don't defend your rights, you lose it.” - Lyudmyla Kozlovska

My guest today is Lyudmyla Kozlovska, a human rights activist and President of the Open Dialogue Foundation, a nonprofit dedicated to defending human rights, democracy, and the rule of law throughout post-Soviet states. In this episode we discuss the importance of protecting privacy, self-custody, and proof-of-work for Bitcoin in Europe. Lyudmyla shares her experience of being financially excluded and targeted by authoritarian regimes, and how Bitcoin became a crucial tool for human rights activists. She emphasizes the need to defend Bitcoin developers and combat false narratives.

Follow Lyudmyla on X and nostr
You can find Trey on nostr and via the pod's social channels

Please consider supporting the important work of the Open Dialogue Foundation by donating in Bitcoin or fiat: https://odfoundation.eu/jak-mozesz-pomoc/

EXCLUSIVE SPONSORS:
ZEUS is an open-source, self-custodial Bitcoin wallet that gives you full control over how you make payments. Head to zeusln.com to learn more and download. Save 5% on LSP fees by entering code ‘TPB' in the access code under LSP settings.
BitBox: Get the open-source BitBox02 Bitcoin-only edition. It's my favorite bitcoin hardware wallet for taking self-custody of your bitcoin and keeping your private keys safe in cold storage. Use promo code TPB during checkout at https://bitbox.swiss/tpb to get 5% off your purchase.
You, our listener! Thank you to our supporters. To support The Progressive Bitcoiner and access rewards, including our new TPB merch, head to our geyser page: https://geyser.fund/project/theprogressivebitcoiner

PROMO CODES:
Sazmining: Hosted Bitcoin mining made easy, using 100% cheap and renewable energy. Get $50 off the purchase of a miner using the following link: https://app.sazmining.com/purchase?ref=byyhN2mCGXlu
Lightning Store: Head to https://lightning.store/ and use promo code ‘TPB' to get 20% off all products.

To learn more, visit our website
Follow the pod on X | Nostr | Bluesky | Instagram | Threads | Facebook | LinkedIn | TikTok
Join in on the conversation at our Progressive Bitcoiner Community telegram group!
The Team: Producer/Editor: @DamienSomerset | Branding/Art: @Daniel | Website: @EvanPrim

Get full access to TPB Weekly Digest at progressivebitcoiner.substack.com/subscribe