Podcasts about Osmo

  • 272 PODCASTS
  • 434 EPISODES
  • 42m AVG DURATION
  • 1 WEEKLY EPISODE
  • Jun 16, 2025 LATEST
Osmo

POPULARITY (chart: 2017-2024)


Best podcasts about Osmo

Latest podcast episodes about Osmo

Brian Crombie Radio Hour
Brian Crombie Radio Hour - Epi 1400 - Drone Warfare with Eliot Pence

Brian Crombie Radio Hour

Play Episode Listen Later Jun 16, 2025 47:49


Tonight on The Brian Crombie Hour, Brian interviews Eliot Pence. Eliot is the Chief Business Officer of Osmo, a deep-tech company backed by Lux Capital and Google Ventures. From 2022 to 2024, he was the Chief Commercial Officer of Cambium, an 8VC Build company that develops advanced materials for aerospace and defense. He discusses how drones have become a critical component of modern warfare, highlights Ukraine's production capabilities, and emphasizes the need for Canada to modernize its military technology and procurement processes. The conversation explores various aspects of drone technology, including manufacturing, defense challenges, and potential applications for border control and Arctic monitoring. Eliot and Brian discuss the importance of investing in autonomous systems, AI, and collaborative combat aircraft, noting that Canada needs to develop its own military technology capabilities and reform its procurement processes to better respond to emerging security threats.

The Cryptonaut Podcast
#393: Ufonauts, The Future And A Boy Named Osmo

The Cryptonaut Podcast

Play Episode Listen Later Jun 9, 2025 71:44


In the mid-1950s, a pair of Finnish brothers had a terrifying, all-too-close encounter with a speeding saucer. Both boys were scared by the near miss, but for one of them, Osmo Liene, the ordeal was only beginning: just days later, at the stroke of midnight, a trio of aged aliens would invade his home to interrogate him... and reveal something that no one should ever know. The Cryptonaut Podcast Patreon: https://www.patreon.com/cryptonautpodcast  The Cryptonaut Podcast Merch Stores: Hellorspace.com - Cryptonautmerch.com  Stay Connected with the Cryptonaut Podcast: Website - Instagram - TikTok - YouTube - Twitter - Facebook  THIS EPISODE IS SPONSORED BY INCOGNI. Exclusive Incogni Deal ➼ https://incogni.com/hellorspace for 60% off an annual Incogni plan

Interchain.FM
Is Cosmos Back? ATOM Hub Slated to Generate Net New Liquidity

Interchain.FM

Play Episode Listen Later May 19, 2025 46:37


With a new sheriff in town and a new roadmap in tow, here's how the ATOM ecosystem is ready to go big. With plans to 100x the current liquidity sloshing across the network, Interchain Labs is gearing up for a big Cosmos resurgence. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #cosmosatom #cosmosnetwork

Expert Talks CEOs
#35 - Na mesa com Rodrigo Osmo, CEO da Tenda

Expert Talks CEOs

Play Episode Listen Later May 16, 2025 70:02


Interchain.FM
How to Passively Print Fee Revenue with Magma Vaults on Osmosis

Interchain.FM

Play Episode Listen Later May 12, 2025 38:53


Magma Vaults automate the process of generating concentrated liquidity yields and condense it into two easy steps.

SoundBytes
STABILIZE YOUR SELFIES!

SoundBytes

Play Episode Listen Later May 11, 2025 1:01


DJI's Osmo Mobile 7P is a Steadicam, tripod, selfie stick, and a lot more – we loved it! The post STABILIZE YOUR SELFIES! appeared first on sound*bytes.

Podcast by Yuka Studio // ユカスタポッドキャスト

"Google accidentally reveals Android's new 'Material 3 Expressive' design": Google accidentally published details of Android's next design language, "Material 3 Expressive," in a blog post and deleted it shortly afterward. The design aims to make the user interface more appealing and easier to use. "Google moves into film and TV to improve technology's image with young people": Google has launched a new initiative, "100 Zeros," to strengthen its technology and brand image through the production of films and TV shows. "Figma announces AI-powered features 'Sites,' 'Make,' 'Buzz,' and 'Draw'": At its "Config" event on May 7, 2025, Figma announced four new AI-powered features: Figma Sites, Figma Make, Figma Buzz, and Figma Draw. With these, Figma aims to become a comprehensive product design platform competing directly with the likes of Adobe, WordPress, and Canva. "Kindle finally allows direct e-book purchases": Amazon has introduced the ability for Kindle users to buy e-books directly on the device, finally lifting a long-standing restriction. "Samsung to announce its ultra-thin flagship 'Galaxy S25 Edge' on May 12": Samsung revealed that it will officially announce the ultra-thin flagship smartphone "Galaxy S25 Edge" at its virtual "Galaxy Unpacked" event on Monday, May 12, 2025, at 8 p.m. ET. "Prototype of DJI's first 360-degree camera, the 'Osmo 360,' leaks": Prototype images of DJI's first 360-degree camera, the "Osmo 360," have leaked, raising the possibility that the company will compete with Insta360 and the GoPro Max 2. "NBA's Russell Westbrook launches an AI-powered funeral planning startup": NBA superstar Russell Westbrook launched a new startup on Wednesday that aims to use artificial intelligence to streamline funeral planning. "The man who gets bitten by snakes on purpose." = = = = = = = = = = = = = = = = = = = = = = = = = [ユカスタポッドキャスト // Podcast by Yuka Studio] The Yuka Studio podcast is a talk show that brings tech and creativity closer to everyday life. Yuka Ohishi, a tech creator based in New York, is the main host, delivering tech news and interview content.

Not-a-Perfumery Podcast
№ 25 – Could You Build a Fragrance Giant Today? With Osmo CEO Alex Wiltschko

Not-a-Perfumery Podcast

Play Episode Listen Later May 6, 2025 35:07


What if the next fragrance powerhouse wasn't born from heritage, but from code, curiosity, and a clean slate? In this episode, Tanya Mironova sits down with Alex Wiltschko, CEO of Osmo and of the fragrance house Generation — companies on a mission to digitize scent and rethink the future of fragrance from the ground up. They talk about: what Givaudan or IFF might do if they were founded in 2025; how bravery isn't about being fearless, but about acting anyway; what it means to build a "fragrance house" in a world shaped by AI and science; and the emotional and personal side of leading innovation in scent. Whether you're in perfumery, tech, or just curious about where the future of smell is headed, this conversation will spark something.

Radio mazā lasītava
"Kāpēc laikmetīgā mūzika ir tik sarežģīta" jautā somu komponists Osmo Tapio Reihele

Radio mazā lasītava

Play Episode Listen Later Apr 27, 2025 36:05


He dedicated this book to his wife Marija, a violinist. Walking and talking, the Finnish composer and music journalist Osmo Tapio Reihele created a book that could sit on the "popularly popular" shelf, if only the text didn't become more complicated somewhere around the middle, as the author himself admits. And so, the book's title is "Kāpēc laikmetīgā mūzika ir tik sarežģīta" ("Why Is Contemporary Music So Complicated"). Osmo Tapio Reihele uses the term "contemporary art music" and shows that you have certainly heard this music, since you watch films and series in which contemporary music is used in moments of dramatic tension. He was himself surprised by how much interest there was in the book when it came out in Finland; after all, he had been writing music for 30 years, yet heightened interest in him arose only after "Kāpēc laikmetīgā mūzika ir tik sarežģīta". In 2021 the book also won Finland's most important literary award, the "Finlandia" prize, in the popular non-fiction category. Osmo Tapio Reihele's book was translated from Finnish by Maima Grīnberga and published by Jāņa Rozes apgāds. Since the author visited the Riga Book Festival, we also have the chance to hear excerpts from his conversation with our "Klasika" colleague Orests Silabriedis. The programme is supported by:

MacVoices Video
MacVoices #25119: NAB Show - DJI's Latest Gimbals and Mic System

MacVoices Video

Play Episode Listen Later Apr 24, 2025


At NAB Show 2025, Donovan Davis, Product Specialist for Osmo at DJI, showcases the Osmo Mobile 7 gimbal with a new tracking module that works with any app, a built-in tripod, lighting controls, and 10-hour battery life. The DJI Mic Mini offers compact wireless audio with Bluetooth and receiver options. The RS 4 Mini gimbal supports both smartphones and mirrorless cameras, featuring customizable controls, gesture tracking, and 13-hour battery life. Show Notes: Chapters: 00:07 Introduction to NAB Show 2025 07:39 Gimbal Innovations and Features 07:58 Quick Start and Durability 09:29 Introducing the DJI Mic Mini 13:15 Pricing and Compatibility of Mic Mini 15:04 The RS-4 Mini Gimbal 18:24 Customization and Pricing of RS-4 Mini 19:46 Closing Remarks and Future Updates Links: DJI Osmo Mobile 7P Gimbal Stabilizer https://amzn.to/42PDCPj DJI Mic Mini (2 TX + 1 RX + Charging Case), Wireless Microphone https://amzn.to/4jRY1dB DJI RS 4 Mini Gimbal Stabilizer for Cameras https://amzn.to/42xivTa Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #25119: NAB Show - DJI's Latest Gimbals and Mic System

MacVoices Audio

Play Episode Listen Later Apr 24, 2025 20:53


At NAB Show 2025, Donovan Davis, Product Specialist for Osmo at DJI, showcases the Osmo Mobile 7 gimbal with a new tracking module that works with any app, a built-in tripod, lighting controls, and 10-hour battery life. The DJI Mic Mini offers compact wireless audio with Bluetooth and receiver options. The RS 4 Mini gimbal supports both smartphones and mirrorless cameras, featuring customizable controls, gesture tracking, and 13-hour battery life. Show Notes: Chapters: 00:07 Introduction to NAB Show 2025 07:39 Gimbal Innovations and Features 07:58 Quick Start and Durability 09:29 Introducing the DJI Mic Mini 13:15 Pricing and Compatibility of Mic Mini 15:04 The RS-4 Mini Gimbal 18:24 Customization and Pricing of RS-4 Mini 19:46 Closing Remarks and Future Updates Links: DJI Osmo Mobile 7P Gimbal Stabilizer https://amzn.to/42PDCPj DJI Mic Mini (2 TX + 1 RX + Charging Case), Wireless Microphone https://amzn.to/4jRY1dB DJI RS 4 Mini Gimbal Stabilizer for Cameras https://amzn.to/42xivTa Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

Just Focus
#OSMO Energie : le pari d'un immobilier européen bas carbone - avec Foulques de Sainte Marie

Just Focus

Play Episode Listen Later Apr 24, 2025 43:30


In this new Focus Fonds episode, we welcome Foulques de Sainte Marie, managing director of Mata Capital IM in charge of retail strategies. Mata Capital IM is an independent asset management company founded in 2015. Through its Osmo brand, it puts its institutional know-how at the service of the general public, aiming to make real estate investment more accessible without compromising on standards. Foulques now heads the Osmo brand and presents Osmo Énergie, an SCPI launched in early 2024 with one objective: combining financial and extra-financial performance. Already ISR-labelled and classified Article 9 SFDR*, it embodies a new generation of responsible real estate vehicles, with a target distribution of 6% or more over the long term**. Discover: Foulques's inspiring career and his role in opening Mata Capital IM to retail savings. His reading of the current real estate market and the genesis of Osmo, a project designed to meet new demands for performance and purpose. The founding pillars of Osmo Énergie and the secrets of its success***. Osmo Énergie's opportunistic strategy and its gradual expansion into European markets. The three criteria Foulques uses to recognize a solid, durable SCPI. Happy listening! ----------------------- Do you really know how an asset management company works? Or an investment fund? What happens beyond the numbers and the due diligence? In "Focus Fonds", we explore in very concrete terms what lies behind the investments made by funds and their investors. ----------------------- To access the real estate investment solutions offered by Sapians: https://sapians.com/investissement-actifs-reels  *OSMO Energie pursues a sustainable investment objective in accordance with Article 9 of Regulation (EU) 2019/2088 on sustainability-related disclosures in the financial services sector (SFDR). **The SCPI OSMO Energie targets an annual distribution of 6% over the recommended holding period (10 years). The target return is 5.50% over the recommended holding period. These objectives are not guaranteed. ***Note: past performance is no guarantee of future performance, and investing carries risks of partial or total loss of capital. This episode is an advertisement intended to explain how an asset management company works and does not constitute investment advice. If you would like personalized advice, please create an account or make an appointment with a Sapians advisor.

Radio Marija Latvija
Osmoģenēze - Pārdabiskie fenomeni | Svešvārdu ievārījums | RML S10E29 | māsas Ginta, Inese un Viesturs Vizulis pr Tadeušs Ciesļaks | 08.04.2025

Radio Marija Latvija

Play Episode Listen Later Apr 7, 2025 6:17


Radio Marija is a listener-built radio station that carries God's Word into the world. Radio Marija's voice is on air 24 hours a day. In these programmes we try, regardless of their religious convictions, to bring listeners, as our friends, the Good News of Christ, the Gospel, and the clear teaching of the Catholic Church. We try to deepen the experience of prayer and to offer a glimpse into the cultural diversity of all humanity. Around the world, Radio Marija operates on the basis of volunteer service. Willingly giving one's talents and time for the glory of God and for the new evangelization is part of the Radio Marija charism. It is a wonderful opportunity for anyone to put their talents to work proclaiming the Gospel and to experience the joy of service. We believe that God will use in a special way everyone who answers this call, so that great things may be accomplished in Latvia through Radio Marija. Radio Marija is also a family that unites people of different ages, confessions, and social backgrounds, allowing everyone to belong and to contribute to proclaiming God's Word and to a shared experience of prayer. "Refuge in God 24 hours a day" is the motto of Radio Marija Latvija. RML can be received in Rīga at 97.3, Liepāja 97.1, Krāslava 97.0, and Valka 93.2, as well as via [satellite receiver and internet apps](http://www.rml.lv/klausies/).

to-ku.
54 カメラ派?スマホ派?春の撮影トーク(2017/04/08)

to-ku.

Play Episode Listen Later Apr 6, 2025 22:17


※AI-generated summary: The first recording of the new fiscal year opens with talk about how the cherry blossoms are coming along around the country. DJI is running a "Shoot the Sakura" campaign, a cherry blossom photo and video contest using drones and stabilizers that runs from April to mid-May, with a Mavic Pro drone and an Osmo Mobile among the top prizes. The members also share plans to use a drone at a friend's wedding and the story of getting permission for it, along with the hassles of carrying camera gear around. It's an episode packed with ideas for enjoying spring shooting with cameras and drones. Now that it's easy to shoot casually at cherry-blossom viewings and on spring outings, there's plenty of useful information for listeners too.

Chapters:

Smell Ya Later
185: The computers are smelling [feat. Christophe Laudamiel of OSMO]

Smell Ya Later

Play Episode Listen Later Apr 1, 2025 71:50


The fragrance industry is trotting out AI more and more in new creations and creative development, but when it comes to digitizing smells, OSMO is a Google-born initiative that uses machine learning, sensory neuroscience, data science, engineering, fine fragrance, analytical chemistry, and product development to map smells, engineer new smell molecules, recreate living scents, and explore how olfactory technology can aid other industries. We speak with Christophe Laudamiel, OSMO's master perfumer, on this episode. Also, we're hosting a Scent Swap at Talea in Williamsburg on May 7th from 6-9pm. It's free with RSVP. Bring at least one fragrance to swap and enjoy the bevs and snacks from our host. [What we smell like today: Roja Lost in Paris, Philosophy Fresh Cream Soft Velvet]

Invest Like the Best with Patrick O'Shaughnessy
Alex Wiltschko - Giving Computers A Sense Of Smell - [Invest Like the Best, EP.415]

Invest Like the Best with Patrick O'Shaughnessy

Play Episode Listen Later Mar 18, 2025 63:47


My guest today is Alex Wiltschko. Alex is the founder and CEO of Osmo, a science and technology company giving computers a sense of smell. He set out on a mission to digitize our sense of smell and he describes how Osmo is teaching computers to both read and write scent. Alex was kind enough to walk me through the laboratory which you can watch in the video version of this interview on Youtube and Spotify, where he demonstrates their method to the madness. We discuss their first commercial application, Generation, which is revolutionizing the fragrance industry by dramatically accelerating the typically years-long process of custom scent creation. We discuss all of the potential business implications this technology unlocks, applications ranging from counterfeit detection to health monitoring, and creating a cutting-edge proprietary platform in a historically routine industry. Please enjoy my conversation with Alex Wiltschko. Subscribe to Colossus Review. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Ramp is the fastest-growing FinTech company in history, and it's backed by more of my favorite past guests (at least 16 of them!) than probably any other company I'm aware of. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Imagine completing your research five to ten times faster with search that delivers the most relevant results, helping you make high-conviction decisions with confidence. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. I think this platform will become the standard for investment managers, and if you run an investing firm, I highly recommend you find time to speak with them. Head to ridgelineapps.com to learn more about the platform. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:04:25) Introduction to Plum 1.0 (00:06:20) Synthetic Chemistry & OI (00:07:09) Perfumer's Organ & Fragrance Creation (00:08:03) Launching Generation Fragrance House (00:09:15) The Fragrance Design Process (00:12:04) AI and Olfactory Intelligence (00:14:59) The GCMS Machine (00:22:08) The Scent Printer (00:26:47) The Journey of a Fragrance Enthusiast (00:30:47) The Unsolved Problem of Scent (00:32:45) Applying AI to the World of Scent (00:33:17) Validating AI Predictions with Double-Blind Trials (00:39:42) The Emotional Power of Scent (00:49:01) Challenges & Future Prospects (00:59:08) The Defining Moment: Digitizing Smell (01:00:41) The Kindest Thing Anyone Has Ever Done For Alex

Kultūras Rondo
Tulkošanas smalkumi Jāna Kaplinska un Osmo Tapio Reiheles darbos

Kultūras Rondo

Play Episode Listen Later Mar 11, 2025 16:28


"Viss ir brīnumains" – tā Jāns Kaplinskis tulko Alberta Šveicera vārdus Ehrfurcht vor dem Leben, un par to raksta esejās, kuras tulkojusi Maima Grīnberga. Ar viņu tikāmies, kad tulkotāja bija ieradusies Rīgas grāmatu svētkos, lai iepazīstinātu ar somu komponista un muzikologa Osmo Tapio Reiheles grāmatu "Kāpēc laikmetīgā mūzika ir tik sarežģīta". Sarunā par abām šīm grāmatām.

The Untrapped Podcast With Keith Kalfas
Doing Stuff You HATE, Just For the Money?...Listen to This!

The Untrapped Podcast With Keith Kalfas

Play Episode Listen Later Feb 24, 2025 16:27


In this episode, Keith gets real about the tough balance between working for money and chasing what you love. He talks about his journey dealing with jobs that paid the bills versus work that brought him joy. Keith shares some eye-opening insights on whether doing something just for the cash or trying to turn your passion into your paycheck is worth it. He even brings in some tips from the pros on how to juggle financial needs while still following your heart. If you've ever felt stuck doing a job you can't stand or dreamt of making money doing what you love, Keith's got some solid advice to help you figure it out. So, tune in for an honest chat about finding that sweet spot between responsibility and fulfillment.   Check out these episode highlights

Shop Talk Live - Fine Woodworking
STL335: Morley takes on Maine

Shop Talk Live - Fine Woodworking

Play Episode Listen Later Feb 21, 2025 63:45


Ben and Amanda chat with Phil Morley, the newest Lead Instructor for the Center for Furniture Craftsmanship's nine-month comprehensive. Sign up for Randy's wood identification course here: https://courses.finewoodworking.com/identifying-wood-randy-wilkinson For more information about our other eLearning courses - http://www.finewoodworking.com/elearning For more information about our Woodworking Fundamentals journey - http://www.finewoodworking.com/fundamentals Join us on our new Discord server! - https://discord.gg/8hyuwqu4JH Links from this episode can be found here - http://www.shoptalklive.com Sign up for the Fine Woodworking weekly eLetter - https://www.finewoodworking.com/newsletter Sign up for a Fine Woodworking Unlimited membership - https://www.finewoodworking.com/unlimited Every two weeks, a team of Fine Woodworking staffers answers questions from readers on Shop Talk Live, Fine Woodworking's biweekly podcast. Send your woodworking questions to shoptalk@finewoodworking.com for consideration in the regular broadcast! Our continued existence relies upon listener support. So if you enjoy the show, be sure to leave us a five-star rating and maybe even a nice comment on our iTunes page. Join us on our Discord server here.

Geek Therapy Radio Podcast
DJI Osmo Pocket 3 thoughts...and I had my wisdom teeth removed | 287

Geek Therapy Radio Podcast

Play Episode Listen Later Feb 19, 2025 24:00 Transcription Available


I just had my wisdom teeth removed, so bear with me as I discuss my thoughts on my DJI Osmo Pocket 3. Video version: https://youtu.be/W6o8SQoKXuw HMNS Beyond Bones Podcast: https://www.hmns.org/podcast/#dji

Geek Therapy Radio Podcast
DJI OP3, M3 MBA, & EDC | 286

Geek Therapy Radio Podcast

Play Episode Listen Later Feb 3, 2025 20:02 Transcription Available


The DJI Osmo Pocket 3, the MacBook Air M3, and how they've got me thinking about a lighter Everyday Carry. johnny@geektherapyradio.com HMNS Beyond Bones Podcast: https://www.hmns.org/podcast/

Interchain.FM
Litecoin ETF Sparks Speculation About Long Tail of ETFs

Interchain.FM

Play Episode Listen Later Jan 31, 2025 28:09


Interchain.FM
DeFi Yield Secrets the Pros Don't Want You to Know About | Mitosis

Interchain.FM

Play Episode Listen Later Jan 17, 2025 34:52


Have you wondered why you're not getting the six-figure airdrops like you did back in the DeFi Summer of 2020? Jake Kim of Mitosis, a former Luna Anchor dev, spills the tea. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #mitosis

Capital Projects Podcast
Episódio #185 – Healthcare 4.0 – unindo projetos e tecnologia

Capital Projects Podcast

Play Episode Listen Later Jan 15, 2025 29:10


The Capital Projects Podcast brings another special series, this time straight from the 22nd PMI-SP International Seminar, in partnership with Prosperi and Planisware! In this fifth episode of the series, I talk with Hélio Osmo, founding partner of Science & Strategy. In this episode we discuss advances in technology, new treatments, and how this will open up more and more opportunities for project management professionals! Hit play and let's go! This series was only possible thanks to the special support of Prosperi and Planisware! Learn more about the most modern PPM solution at https://prosperiglobal.com/pt/ ATTENTION: special offer of Masterclasses covering project management best practices! Want to know more? Visit: https://lp.andrechoma.com.br/mc-oplano Want to join the VIP group to hear first about the livestreams and the new class of the GPI/FEL Course? Visit: https://chat.whatsapp.com/KZNt0vR1zLfBt4ZeqflVGN #CapitalProjectsPodcast #GestãodeProjetos #CapitalProjects #AndreChoma #Construção #Engenharia #ProjectManagement #PMI #ProjectManagementInstitute #FEL #Frontendloading #MetodologiaFEL #PPM #Prosperi #Planisware #Healthcare #Tecnologia #Saúde #HelioOsmo

Mind & Matter

Subscriber-only episodeSend us a textPodcast episodes are fully available to paid subscribers on the M&M Substack and on YouTube. Partial versions are available elsewhere.About the guest: Alex Wiltschko, PhD is the founder and CEO of Osmo, a startup using AI, neuroscience, and chemistry to digitize the sense of smell.Episode summary: Nick talks to Dr. Wiltschko about: the sense of smell & olfactory perception; aroma and why certain molecules are smelled but not others; how the brain encodes odors; using AI and machine learning to create “odor maps”; designing novel scents for the fragrance industry; Osmo and its goal of digitizing the sense of smell.Related episodes:M&M #114: Marijuana, Plant Chemistry, Terpenes, Volatile Sulfur Compounds, Cannabis Industry, What Pungent Weed Smells Like & Why | Iain OswaldM&M #22: Machine Learning, Artificial Intelligence, Animal Behavior & Giving Computers a Sense of Smell | Alex Wiltschko*Not medical adviceAll episodes (audio & video), show notes, transcripts, and more at the M&M Substack Affiliates: MASA Chips—delicious tortilla chips made from organic corn and grass-fed beef tallow. No seed oils or artificial ingredients. Use code MIND for 20% off. Lumen device to optimize your metabolism for weight loss or athletic performance. Use code MIND for 10% off. Athletic Greens: Comprehensive & convenient daily nutrition. Free 1-year supply of vitamin D with purchase. KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription. Learn all the ways you can support my efforts

Mind & Matter
Aroma, Olfaction & Using AI to Digitize Smell | Alex Wiltschko | #201

Mind & Matter

Play Episode Listen Later Dec 22, 2024 49:46


Send us a textPodcast episodes are fully available to paid subscribers on the M&M Substack and on YouTube. Partial versions are available elsewhere.About the guest: Alex Wiltschko, PhD is the founder and CEO of Osmo, a startup using AI, neuroscience, and chemistry to digitize the sense of smell.Episode summary: Nick talks to Dr. Wiltschko about: the sense of smell & olfactory perception; aroma and why certain molecules are smelled but not others; how the brain encodes odors; using AI and machine learning to create “odor maps”; designing novel scents for the fragrance industry; Osmo and its goal of digitizing the sense of smell.Related episodes:M&M #114: Marijuana, Plant Chemistry, Terpenes, Volatile Sulfur Compounds, Cannabis Industry, What Pungent Weed Smells Like & Why | Iain OswaldM&M #22: Machine Learning, Artificial Intelligence, Animal Behavior & Giving Computers a Sense of Smell | Alex WiltschkoSpecial offer: Use MINDMATTERSPECIAL2 for a free 1-year premium subscription to Consensus, an AI-powered research tool that helps you find the best science, faster. ($150 value, limited-time offer).*This content is never meant to serve as medical adviceSupport the showAll episodes (audio & video), show notes, transcripts, and more at the M&M Substack Affiliates: Consensus: AI-powered academic research tool. Find & understand the best science, faster. Free 1-year premium sub with code MINDMATTERSPECIAL2 (exp 12.23.24) MASA Chips—delicious tortilla chips made from organic corn and grass-fed beef tallow. No seed oils or artificial ingredients. Use code MIND for 20% off. Lumen device to optimize your metabolism for weight loss or athletic performance. Use code MIND for 10% off. Athletic Greens: Comprehensive & convenient daily nutrition. Free 1-year supply of vitamin D with purchase. KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription. Learn all the ways you can support my efforts...

Interchain.FM
Cross Rollup UX sucks. @Polymer_Labs fixes it.

Interchain.FM

Play Episode Listen Later Dec 20, 2024 53:09


Polymer brings IBC as a primitive to the Ethereum rollup ecosystem. Currently, rollups take a long time to checkpoint to Ethereum's main chain, and the rollup-to-rollup user experience leaves a lot to be desired. Polymer implements an intermediary blockchain to cut that lag down to near-real-time latency. Polymer primer: https://www.youtube.com/live/eFtELwbTQ4Y #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #ethereum #cosmosibc #ethrollups

Interchain.FM
Lava Network Revolutionizes RPC Centralization that Plagues Ethereum

Interchain.FM

Play Episode Listen Later Dec 13, 2024 60:09


Ethan from Lava Network shares how you can become a contributor, farm yield from every supported blockchain in the network, and support decentralization at the same time. RPCs today are huge yet unseen sources of centralization and trust, without accountability or reliability. Lava changes that. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #rpcproviders #rpc #infura

Interchain.FM
Agentic AI are going onchain, and they will outtrade you | Talus Smart Agents

Interchain.FM

Play Episode Listen Later Dec 6, 2024 42:11


Talus network is enabling AI smart agents to come onchain and coordinate with each other to automate many of the tedious workflows that you may not want to bother with. It's a race to build the best onchain AI trader—and it's winner take all. We're at the precipice of onchain AI today, just like the advent of HFT bots swarming tradfi in the 2000s. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #ai #aiagents #depinprojects

The Casey Adams Show
Pramod Sharma - Founder of Napkin AI Talks Simplifying Visuals for Presentations

The Casey Adams Show

Play Episode Listen Later Nov 20, 2024 48:37


In this conversation, Casey Adams interviews Pramod Sharma, the founder and CEO of Napkin AI, discussing his entrepreneurial journey from Osmo to Napkin AI. Pramod shares insights on the importance of graphics in communication, the challenges of building a startup, and the significance of user feedback in product development. He emphasizes the need for conviction in pursuing ideas, the lessons learned from the acquisition of Osmo, and the evolving landscape of AI technology. The discussion also touches on the dynamics of raising capital and the value of small, agile teams in driving innovation. Learn more: www.napkin.ai 00:00 Introduction to Napkin AI 03:07 The Genesis of Napkin AI 05:56 Pramod's Journey: From Google to Osmo 09:08 The Conviction to Start Osmo 12:01 Early Influences and Passion for Physics 14:54 The Move to the U.S. and College Experience 17:47 Osmo's Growth and Acquisition 21:04 Lessons Learned from Building Osmo 23:59 Fundraising Insights and Building a Product People Love 26:24 The Importance of Finding Champions for Your Idea 28:17 The Story Behind the Name 'Napkin' 31:51 User-Driven Development: Lessons from Launching Napkin 36:34 The Power of Small Teams in Startup Success 39:15 AI's Role in Personal Efficiency and Creativity 42:24 Looking Ahead: The Future of Visual Quality in Graphics 44:30 Personal Growth Through Startup Challenges

Interchain.FM
Shocking Speed: Monad Revolutionizes EVM Performance

Interchain.FM

Play Episode Listen Later Nov 15, 2024 43:33


Monad introduces extreme parallelized performance for the EVM, and it's ripping out of the gate with 70+ major dApps even before mainnet has launched. Monad is a decentralized, developer-forward L1 smart contract platform that ushers in the ultimate blockchain scaling solution. The dream of achieving 10,000 TPS without sacrificing decentralization or safety just became reality. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #monad #gmonad

An Aromatic Life
#127: Whiff of Wisdom: How AI Will Enhance Perfumery | Alex Wiltschko

An Aromatic Life

Play Episode Listen Later Nov 14, 2024 11:40


In this week's whiff of wisdom, neuroscientist and Osmo CEO Alex Wiltschko shares how artificial intelligence will enhance perfumery in new and exciting ways. In fact, since this original conversation aired, the company has made some great strides. Check out their new, innovative AI scent creation platform called Inspire (link below) - you're gonna be blown away! To listen to the original full conversation go to episode #79. Whiff of Wisdom is a format that's being added biweekly, and spotlights an inspirational insight from a guest on the pod. The goal is to offer you a whiff of wisdom for your aromatic life, as well as give you some inspiration to try new things that use your sense of smell more. Try out the Inspire AI Scent Creation Platform: https://inspire.osmo.ai/landing Learn more about Osmo. Get No Place for Plants children's book on Amazon Follow Frauke on Instagram: @an_aromatic_life Subscribe to Frauke's Substack: https://anaromaticlife.substack.com Visit Frauke's website www.anaromaticlife.com Learn about Frauke's Scent*Tattoo Project

The OT School House for School-Based OTs Podcast
A Practical Guide to Building a Budget-Friendly OT Toolbox

The OT School House for School-Based OTs Podcast

Play Episode Listen Later Nov 12, 2024 62:44


Join Jayson and Amanda Gibbs as they discuss building a school-based OT toolbox. Together, they dive into tools and resources—like the interactive Osmo, 'Tools to Grow,' Goodwill, and more—that make therapy sessions both effective and budget-friendly. Amanda shares her journey from navigating limited resources as a new grad to finding creative solutions that enhance her practice. Don't miss this episode full of insights and practical tips to elevate your OT practice on a budget! Listen now to learn the following objectives: Learners will identify cost-effective tools and resources for supporting skill development in school-based occupational therapy. Learners will understand district budgeting and resource processes to advocate for necessary occupational therapy materials, including recognizing key contacts for resource support in schools with limited budgets. Learners will identify tools and strategies for implementing a multisensory learning approach. Thanks for tuning into the OT Schoolhouse Podcast, brought to you by the OT Schoolhouse Collaborative Community for school-based OTPs. In OTS Collab, we use community-powered professional development to learn together and implement strategies together. Don't forget to subscribe to the show and check out the show notes for every episode at OTSchoolhouse.com See you in the next episode!

Tech&Co
Osmo : quand l'intelligence artificielle apprend à reconnaître les odeurs – 05/11

Tech&Co

Play Episode Listen Later Nov 5, 2024 27:06


This Tuesday, November 5, François Sorel welcomed Frédéric Simottel, BFM Business journalist; Yves Maitre, operating partner at Jolt Capital and consultant, former CEO of HTC; Philippe Dewost, founder of Phileos, former managing director of EPITA and co-founder of Wanadoo; and Michel Levy Provençal, futurist, founder of TEDxParis and of the Brightness agency. They looked at Osmo, a startup that wants to create scents with AI, Apple's study of the smart glasses market, and OpenAI's hiring of Meta's former head of hardware, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday and listen again as a podcast.

Mon Podcast Immo
Quentin Sackzewski (Mata Capital IS) : "Osmo Énergie, une SCPI accessible et durable" #906

Mon Podcast Immo

Play Episode Listen Later Oct 22, 2024 9:54 Transcription Available


Quentin Saczewski, head of partnerships at Mata Capital, is the guest of this new episode of Mon Podcast Immo. Speaking with Ariane Artinian, he presents Osmo Énergie, Mata Capital's first SCPI, which combines real estate diversification with environmental commitment. Classified Article 9 SFDR, this SCPI targets a 6% yield, revised to 7% for 2024. Accessible from €300, it is aimed at a broad public via a 100% digital platform. "Every euro collected means performance for the years to come," explains Quentin Saczewski. Note, however, that this performance is not guaranteed and the recommended investment horizon is 10 years. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Flavour Talks
Flavour Talk with Christophe Laudamiel

Flavour Talks

Play Episode Listen Later Sep 27, 2024 91:19


Welcome to this episode of our podcast, recorded on World Arts Day, featuring the visionary perfumer Christophe Laudamiel. Known for his innovative approach to fragrance design, Christophe has created scents for some of the world's leading brands and has a reputation for pushing the boundaries of olfactory artistry. In this inspiring conversation, he shares his insights on the importance of forging your own path in the creative world of flavours and fragrances. He encourages listeners to embrace their individuality and resist the limitations imposed by external influences. Join us as we dive into the power of self-expression and chat about how to stay true to ourselves on our creative journeys. (02:45) How Christophe became a perfumer? (15:30) Memorable innovation (21:25) Flavours and society's acceptance (26:15) What keeps the perfumer up at night (32:00) Enhancing flavours from a perfumer's perspective (37:45) Christophe and the work at Osmo (42:30) Collaboration with chefs and bartenders (46:16) The First Synthetic Molecule (51:40) Industry Constraints (53:05) Realism in Perfumery and flavours (55:00) Smelling “Spacewood” (1:00:10) Perfumers and flavourists – who are seen as artists? (1:06:40) Inspiration in unlikely places (1:17:05) How life's challenges inspire creativity? (1:24:00) A Message to the Next Generation Host: Aidan Kirkwood, Michel Aubanel, Seán Ryan, Music: Aidan Kirkwood, Editing: Britta Nobis, Publishing: Ján Peťka

TechFan
TechFan 512 - To Innovate or Not

TechFan

Play Episode Listen Later Sep 22, 2024 76:28


Is Qualcomm buying Intel? Tim and David delve deep on the topic. Also included are the latest iPhones and macOS, AnandTech and Touch Arcade closing, Columbus trying to shut up a whistleblower, and much more!

Woodworking with The Wood Whisperer (HD)
Solving Your Floor Refinishing Dilemma: Osmo PolyX Oil Hardwood Floor Application

Woodworking with The Wood Whisperer (HD)

Play Episode Listen Later Sep 16, 2024


Transform your hardwood floor with Osmo PolyX Oil. Discover how this natural-colored finish can brighten up your space.

TechFan
TechFan 511 - Vacation Gear

TechFan

Play Episode Listen Later Sep 2, 2024 65:52


David and Tim discuss Kindle, Fire 11, Razr 40, failing AI, large TVs, Logitech, and much more.

The Hustle Daily Show
How an AI startup is breaking into the $60B fragrance market

The Hustle Daily Show

Play Episode Listen Later Aug 28, 2024 14:31


Osmo is a startup that wants to give computers a sense of smell. This Google-backed company has some huge potential on its horizon, including the $60B fragrance industry and mosquito repellent. So how can this odd innovation change the world?  Join our hosts Jon Weigell and Sara Friedman as they take you through our most interesting stories of the day. Grab the free Entrepreneurship Kit here https://clickhubspot.com/ent Follow us on social media: TikTok: https://www.tiktok.com/@thehustle.co Instagram: https://www.instagram.com/thehustledaily/ Thank You For Listening to The Hustle Daily Show. Don't forget to hit Subscribe or Follow us on Apple Podcasts so you never miss an episode! If you want this news delivered to your inbox, join millions of others and sign up for The Hustle Daily newsletter, here: https://thehustle.co/email/  Plus! Your engagement matters to us. If you are a fan of the show, be sure to leave us a 5-Star Review on Apple Podcasts https://podcasts.apple.com/us/podcast/the-hustle-daily-show/id1606449047 (and share your favorite episodes with your friends, clients, and colleagues).

The Food Tech News Show
Is Smell-O-Vision Ready for Primetime? - FTNS #6

The Food Tech News Show

Play Episode Listen Later Aug 24, 2024 32:14


This week, Mike and Carlos discuss the following stories: Osmo is trying to digitize the world of smells and use AI to finally help us create Smell-O-Vision. The Principal Odor Map (POM) created by Osmo's model outperformed human panelists in predicting the consensus scent of molecules, marking a significant advancement in olfactory science and demonstrating that AI can predict smells based on molecular structure better than individual human experts in many cases. How'd You Like a Nice Glass of 2D Printed Oat Milk? This week, Milkadamia, known for its range of macadamia-based milks, announced its first oat milk. However, this isn't just any oat milk; the company is introducing Flat Pack oat milk, printed sheets of plant-based milk that are designed to be rehydrated in water overnight or blended for an instant beverage. According to the company, these sheets are created by printing oat milk paste onto flat sheets using a proprietary 2D printing process. Each package contains eight of these lightweight sheets, reducing both packaging and weight. Anova Informs Community Its App Is Going Subscription, and It's Not Going Well. Last week, Anova CEO Steve Svajian announced that the company will begin charging a subscription fee for new users of its sous vide circulator app starting August 21st, 2024. However, existing users who have downloaded the app and created an account before this date will not be impacted by the change. These users will be grandfathered into free access to the app's full features. Svajian explained that the decision to introduce a subscription fee stems from the fact that "each connected cook costs us money," a cost that has become significant as the number of connected cooks now numbers in the "hundreds of millions." The new Anova Sous Vide Subscription will be priced at $1.99 per month or $9.99 per year. Unsurprisingly, the news has sparked discontent among Anova users.  We've got a new humanoid robot that not only performs kung fu, but will also make a meal. The S1 robot assistant is claimed to have unparalleled agility, dexterity, and accuracy, which help it perform all kinds of tasks. Launched by Stardust Intelligence, a Chinese company, the robot has a human-like upper body structure mounted on a wheeled base. In a video, the robot can be seen feeding cats and making waffles. Chick-fil-A Is Launching a Streaming Service. Yes, You Read That Right. Extra sauces and some unscripted content, please. Chick-fil-A — yes, that Chick-fil-A — is looking to launch a streaming platform. The fast food chain has been working with Hollywood production companies and studios to create family-friendly, mostly unscripted original shows. The chicken house is also in talks to license and acquire content, according to a source that's pitched a project. Find more episodes on The Spoon. Learn more about your ad choices. Visit megaphone.fm/adchoices
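For readers who want a concrete picture of what "predicting smell from molecular structure" means, here is a minimal, hypothetical sketch of the task in Python. It is not Osmo's actual system (their published Principal Odor Map work describes a graph neural network trained on thousands of expert-labeled molecules); it only shows the general shape of the pipeline. It assumes the RDKit and scikit-learn libraries are installed, and every molecule and odor label below is made-up toy data for illustration.

```python
# Minimal, illustrative sketch of "molecule in -> odor descriptors out".
# NOT Osmo's Principal Odor Map; toy data and a simple baseline model only.
# Assumes: rdkit, scikit-learn, numpy are installed.

import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

ODOR_LABELS = ["fruity", "floral", "sulfurous"]

# Toy training set: (SMILES string, multi-hot odor labels). Purely illustrative.
TRAIN = [
    ("CCOC(=O)C", [1, 0, 0]),      # ethyl acetate -> fruity
    ("CC(=O)OCC(C)C", [1, 0, 0]),  # isobutyl acetate -> fruity
    ("COc1ccccc1", [0, 1, 0]),     # anisole -> floral-ish
    ("c1ccc(CO)cc1", [0, 1, 0]),   # benzyl alcohol -> floral
    ("CCS", [0, 0, 1]),            # ethanethiol -> sulfurous
    ("CCSSCC", [0, 0, 1]),         # diethyl disulfide -> sulfurous
]

def featurize(smiles: str) -> np.ndarray:
    """Turn a SMILES string into a 2048-bit Morgan fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([featurize(s) for s, _ in TRAIN])
y = np.array([labels for _, labels in TRAIN])

# Multi-label classifier: each odor descriptor is its own yes/no output.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict odor descriptors for an unseen molecule (methyl anthranilate).
query = featurize("COC(=O)c1ccccc1N")
pred = model.predict(query.reshape(1, -1))[0]
print({label: bool(p) for label, p in zip(ODOR_LABELS, pred)})
```

A real odor-prediction model would use richer molecular representations, thousands of labeled training molecules, and graded intensities across dozens of descriptors rather than a handful of yes/no tags; the sketch above only conveys the overall structure of the task.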

Noob Spearo Podcast | Spearfishing Talk with Shrek and Turbo
NSP:269 Neuro Divergence, Crowdfunding a Wetsuit & Comedic Creativity | Luke Potts

Noob Spearo Podcast | Spearfishing Talk with Shrek and Turbo

Play Episode Listen Later Aug 3, 2024 116:33


Interview with Luke Potts Todays interview is with Luke Potts, aka @AquaticRehab and now @luke_likes_ledges from New Zealand! Today is a big chat all about the mental side of life and how it affects our spearfishing, life on land and the world we live in. Shrek and Luke discuss ways to get ahead of and treat depression and anxiety and the role that spearfishing plays in managing your mental health. Luke also shares some great spearfishing tips on how to hunt snapper and using ledges to stalk fish, or "snapper snooping". If today's episode has helped in any way or made you think of life a bit differently, reach out and let us know in the comments! Important times 00:13 Intro 05:30 Welcome back Luke! Osmo vs GoPro 09:00 Out The Gate 11:05 Mental health issues 19:25 How do you treat your health? 25:30 Spearfishing as therapy 30:00 How to treat depression and get help 43:20 The art of ledge hunting! 48:20 Why big fish stay too far away 50:15 What advice do you give to noob snapper spearos? 52:55 How to stalk fish in open ground 57:20 Dealing with silly comments 01:10:10 Catch and release vs catch and cook 01:15:30 If you can film a fish, you can hunt them 01:20:40 01:26:30 Creating, writing and stand up comedy 01:36:05 Mental process 01:44:55 What's next in your life? 01:48:40 Last thoughts 01:51:40 Outro Listen in and subscribe on iOS or Android Important Links   Noob Spearo Partners and Discount Codes | Get Spear Ready and make the most of your next spearfishing trip! 50 days to better spearfishing! - Use the code NOOBSPEARO for a free hat of your choice from FuckTheTaxman.com . Use the code NOOBSPEARO save $20 on every purchase over $200 at checkout – Flat shipping rate, especially in AUS! – Use the code NOOB10 to save 10% off anything store-wide. Free Shipping on USA orders over $99 | Simple, Effective, Dependable Wooden Spearguns. Use the Code NOOB to save $30 on any speargun:) | 10% off for listeners with code: NOOBSPEARO | Get 10% off Sharkshield Technology | Freedom7 or Scuba7 enter the code NOOBSPEARO | ‘Spearo Dad' | ‘Jobfish Tribute' | 99 Spearo Recipes use the code SPEARO to get 20% off any course 28-day Freediving Transformation | Equalization Masterclass – Roadmap to Frenzel | The 5 minute Freediver | Break the 10 Meter Barrier – Use the code NOOBSPEARO to save . Listen to 99 Tips to Get Better at Spearfishing | Wickedly tough and well thought out gear! Check out the legendary

Interchain.FM
Earn Passive Yield, Automatically Copy Trade CT Traders & Hedge Against 20% Drawdowns w/ Mars

Interchain.FM

Play Episode Listen Later Jul 30, 2024 52:39


Copy trading is arriving with Mars protocol's onchain vaults, which will let you copy the moves of your favorite CT trader in real time. Gone are the days they use you for exit liquidity when you can simply copy their every move as each transaction is made! This, alongside Mars' perpetual money market and lending product, will broker the supply and demand needs of your everyday degen with those of sophisticated market makers. Win-win! This podcast was sponsored by Hadron Labs (Duality)

Interchain.FM
Future of Thorchain - Rune on Road to $5Bn, Powers X-Chain Liquidity Hub for BTC, SOL, Cosmos

Interchain.FM

Play Episode Listen Later Jul 16, 2024 42:18


Thorchain is aiming to accelerate onchain liquidity usage by evolving into the ultimate liquidity aggregation layer, integrating Solana and Bitcoin and finally opening up its gateway to IBC chains. Thorchad JP returns to Interchain.FM for an update and tells all about Rune's journey to a $5 billion mcap. It gets a bit esoteric but it's worth the ride. Enjoy! #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts #thorchain #runeprice

Interchain.FM
Moonbeam's $13M Innovation Fund to Galvanize Onchain Games & RWAs | Interchain.FM

Interchain.FM

Play Episode Listen Later Jul 8, 2024 34:03


Polkadot's EVM-compatible parachain, Moonbeam, gets a facelift and is investing serious funds into gaming and RWA applications, targeting global gamers and Latin America.

Brave New World -- hosted by Vasant Dhar
Ep 81: Alex Wiltschko on the Sense of Smell

Brave New World -- hosted by Vasant Dhar

Play Episode Listen Later Apr 4, 2024 67:10


Smell is the most underrated of our senses -- and it affects everything. Alex Wiltschko joins Vasant Dhar in episode 81 of Brave New World to discuss the role of smell in our lives -- and in this new digital age. Useful resources: 1. Alex Wiltschko on LinkedIn, Google Ventures, Google Scholar and Twitter. 2. Osmo. 3. Perfumes: The A-Z Guide -- Luca Turin and Tania Sanchez. 4.  Perfume: The Story of a Murderer -- Patrick Suskind. 5. The Mystery of Smell -- Lydialyle Gibson on Sandeep Robert Datta. 6. A novel multigene family may encode odorant receptors: a molecular basis for odor recognition -- Linda Buck and Richard Axel. 7. Metabolic activity organizes olfactory representations -- Wesley W Qian et al (including Alex  Wiltschko.) 8. Hyperbolic geometry of the olfactory space -- Yuansheng Zhou, Brian H Smith & Tatyana Sharpee. 9. Odor Perception and the Variability in Natural Odor Scenes -- Geraldine A Wright and Mitchell G.A. Thomson. 10. The Biological Sense of Smell -- Christine WJ Chee-Ruiter. 11. Also check out the work of Jim DiCarlo, David Marr and Eero Simoncelli. Check out Vasant Dhar's newsletter on Substack. Subscription is free!

In My Heart with Heather Thomson
Dr. Stacy Sims Re-Release

In My Heart with Heather Thomson

Play Episode Listen Later Mar 19, 2024 42:52


Dr. Stacy Sims is a leading global expert on female physiology and training. She is a forward-thinking international exercise physiologist and nutrition scientist who aims to revolutionize exercise nutrition and performance for women. Her contributions to the international research environment and the sports nutrition industry have established a new niche in sports nutrition and cemented her reputation as the expert in sex differences in training, nutrition, and health. Stacy is a former elite athlete, coach, and nutritionist, known for the phrase "Women Are Not Small Men." She's been on many influential shows and is a consultant for brands you know such as Nike, WHOOP, Tonal, and Nuun. She's developed multiple hydration products (Osmo) and she gave a powerful TEDx talk too. For active women, menopause hits hard. Overnight, your body doesn't feel like the one you know and love anymore: you're battling new symptoms, might be gaining weight, losing endurance and strength, and taking longer to bounce back from workouts that used to be easy. The things that have always kept you fit and healthy just seem to stop working the way they used to. Her book Next Level is a comprehensive, physiology-based guide to peak performance for active women approaching or experiencing menopause, from the author of Roar, renowned exercise and nutrition scientist Dr. Stacy Sims. Menopause doesn't have to be the end of you kicking ass at the gym, on the trail, in the saddle, or wherever you work out. Once you understand your physiology, you can work with it, not against it, to optimize your performance. That's where Stacy Sims, PhD, comes in. In Next Level, you'll learn the underlying causes of menopause: the hormonal changes that are causing all the symptoms you're feeling, and their impact on your wellness and performance. Inside you'll find science-backed advice about training, nutrition, sleep, recovery, and supplements, as well as sample exercise routines, meal plans, macronutrient planning charts, and case studies from real women Stacy has coached through the transition. It's the ultimate guide to navigating the Next Level. With the unique opportunities Silicon Valley has to offer, during her tenure at Stanford she had the opportunity to translate earlier research into consumer products and a science-based layperson's book (ROAR) written to explain sex differences in training and nutrition across the lifespan. Both the consumer products and the book challenged the existing dogma for women in exercise, nutrition, and health. This paradigm shift is the focus of her famous "Women Are Not Small Men" TEDx talk. She currently resides at the beach in Mt. Maunganui, New Zealand with her husband and young daughter. SPONSOR: OUAI: Give your hair a glow-up with OUAI. Go to www.theouai.com and use promo code INMYHEART for 15% off any product. Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!

Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta. Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.

Life in the GPU-Rich Lane

Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.

Meta AI's Epic LLM Run

Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was - right down to the logbook! They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size. In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral. July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B size, all trained with 500B extra tokens of code and code-related data, except for 70B which is trained on 1T. All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:

"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars.
And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!What It Takes For Open Source AI to WinHowever Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source.”

If you're working on solving open source coordination, please get in touch!

Show Notes

* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
* Dobb-E
* OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps

* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.

Soumith [00:00:17]: Thanks for having me.

Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yann really sort of introduced you to that world.

Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy.
It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the gradient. I think maybe people don't know that you also involved in more sort of hardware and cluster decision affair. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.PyTorch vs Tinygrad tradeoffsAlessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? 
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to which why PyTorch has the kind of complexity. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that in a very simple, explainable way, you have memory hierarchies. You have CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or Rocky or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and like the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you do this optimally placing this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So if searching for that optimal performance at runtime, but that's the trade off. There's no, like, I don't think unless we have great breakthroughs George's vision is achievable, he should be thinking about a narrower problem such as I'm only going to make this for work for self-driving car connets or I'm only going to make this work for LLM transformers of the llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. 
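As an editorial aside, here is a minimal, hypothetical sketch of the shape-dependent dispatch problem Soumith describes: one logical operator backed by several candidate implementations, where the fastest candidate is discovered by timing and then cached per input shape. The candidate "kernels", names, and caching scheme below are illustrative only and are not PyTorch internals.

```python
import time
import numpy as np

# Hypothetical candidate "kernels" for one logical operator (matmul).
# Which one wins depends on the input shapes, just like real kernel configs.
def matmul_naive(a, b):
    m, _ = a.shape
    _, n = b.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(m):
        for j in range(n):
            out[i, j] = np.dot(a[i, :], b[:, j])
    return out

def matmul_blocked(a, b, block=64):
    # Cache-friendlier blocked variant.
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n), dtype=a.dtype)
    for i0 in range(0, m, block):
        for j0 in range(0, n, block):
            for k0 in range(0, k, block):
                out[i0:i0 + block, j0:j0 + block] += (
                    a[i0:i0 + block, k0:k0 + block] @ b[k0:k0 + block, j0:j0 + block]
                )
    return out

def matmul_vendor(a, b):
    return a @ b  # delegate to whatever BLAS NumPy links against

CANDIDATES = [matmul_naive, matmul_blocked, matmul_vendor]
_best_for_shape = {}  # (m, k, n) -> fastest candidate found by timing

def dispatch_matmul(a, b):
    """Pick (and cache) the fastest candidate for this particular shape."""
    key = (a.shape[0], a.shape[1], b.shape[1])
    if key not in _best_for_shape:
        timings = []
        for fn in CANDIDATES:
            start = time.perf_counter()
            fn(a, b)
            timings.append((time.perf_counter() - start, fn))
        timings.sort(key=lambda t: t[0])
        _best_for_shape[key] = timings[0][1]
    return _best_for_shape[key](a, b)

if __name__ == "__main__":
    a = np.random.rand(256, 64).astype(np.float32)
    b = np.random.rand(64, 128).astype(np.float32)
    c = dispatch_matmul(a, b)
    print(c.shape, {k: v.__name__ for k, v in _best_for_shape.items()})
```

In a real framework this kind of search is multiplied across hundreds of operators, dtypes, memory layouts, and hardware backends, which is where much of the line count Soumith mentions comes from.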
But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and in all these other factors, I think it's not easy to avoid the complexity.Pytorch vs MojoAlessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially AMD stack be better than ROCm. How come PyTorch has been such as Switzerland versus just making Meta hardware go brr?Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. We actually the way we articulate it to all hardware vendors and software vendors and all who come to us being we want to build a backend in core for PyTorch and ship it by default is we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to king make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the NPS support and we spend significant time from Meta funded engineers on that as well because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like oh which hardware should we start taking opinions on.Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more well does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly. 
Maybe it's more like what problems would you look to solve that you have right now?Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for say Torch.compile to smoothly also consume Mojo subgraphs and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.Pytorch vs MLXSwyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Ani used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. For MLS, it enters like the server side set of things as well. Like one of these two things will happen, right? If the first thing will happen, like MLX's overall addressable market will be small, but it probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.Swyx [00:14:44]: Like having to deal with distributed compute?Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole NPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. 
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel is that they have the interconnect that no one else has, like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink is uniquely awesome. I'm sure the other hardware providers are working on it, but-Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.PyTorch MafiaAlessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Coffee?Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConNet benchmarks. They're just benchmarking all the convolution kernels that are available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConNet benchmarks in their pitch decks as like, oh, you know, on ConNet benchmarks, this is how we fare, so you should fund us. I remember Nirvana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch Cafe2 cohort of things and now end up at various other places.Swyx [00:18:50]: I think as a, both as an investor and a people looking to build on top of their services, it's a uncomfortable slash like, I don't know what I don't know pitch. Because I've met Yang Tsing and I've met Lin Chao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. 
Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about like these sort of new inference as a service companies?

Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.

Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?

Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...

Swyx [00:20:44]: Benchmarks.

Soumith [00:20:44]: Yeah, I use it and it's usability and reliability and speed, right?

Swyx [00:20:51]: Quality as well.

Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and say, user stuff is great. Like, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up convnet benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test apples to apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.

Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week.
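As an aside, here is a minimal sketch of the kind of rigor Soumith is asking for: score a provider on aggregate latency percentiles across times of day plus an estimated total cost of ownership at a realistic monthly volume, rather than on a single latency number. All prices, field names, and traffic patterns below are fabricated for illustration.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Call:
    hour_of_day: int       # 0-23: when the request was made
    latency_ms: float      # observed end-to-end latency
    prompt_tokens: int
    completion_tokens: int

def percentile(values, pct):
    ordered = sorted(values)
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[idx]

def summarize(calls, usd_per_1k_prompt, usd_per_1k_completion, monthly_calls):
    latencies = [c.latency_ms for c in calls]
    cost_per_call = statistics.mean(
        c.prompt_tokens / 1000 * usd_per_1k_prompt
        + c.completion_tokens / 1000 * usd_per_1k_completion
        for c in calls
    )
    by_hour = {}
    for c in calls:
        by_hour.setdefault(c.hour_of_day, []).append(c.latency_ms)
    return {
        "p50_ms": percentile(latencies, 50),
        "p95_ms": percentile(latencies, 95),
        "p99_ms": percentile(latencies, 99),
        "worst_hour_p95_ms": max(percentile(v, 95) for v in by_hour.values()),
        "est_monthly_cost_usd": round(cost_per_call * monthly_calls, 2),
    }

# Fabricated traffic: latency drifts with hour of day, prices are placeholders.
calls = [
    Call(hour_of_day=h % 24, latency_ms=300 + (h % 24) * 15 + (h % 7) * 40,
         prompt_tokens=900, completion_tokens=250)
    for h in range(1000)
]
print(summarize(calls, usd_per_1k_prompt=0.0005,
                usd_per_1k_completion=0.0015, monthly_calls=1_000_000))
```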
And the nature of the workloads, is it just some generic single paragraph that you're sending that is cashable? Or is it like testing of real world workload? I think that kind of rigor, like in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is because otherwise everyone's going to play dirty.Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those modes only last for a month or two. These ideas quickly propagate.Swyx [00:24:38]: Even if they're not published?Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? Like it's not even we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.Alessio [00:25:28]: Any ideas instead of things that are not being beaten to death that people should be paying more attention to?Novel PyTorch ApplicationsSwyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state-based models and all of these things to stuff nth order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. 
It's used in Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting because again, it's an example of combining the symbolic models with the gradient based ones. But there are stuff like AlphaGeometry that PyTorch is used at, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.

Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?

Swyx [00:28:27]: Already is a rage.

Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them.
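To make that concrete, here is a minimal sketch (assuming PyTorch is installed) of the recipe being described: take a symbolic model we already trust — a deliberately trivial one, exact addition — sample input/output pairs from it, and let gradient descent impart that knowledge to an over-parameterized network. The task and architecture are toy illustrations, not anyone's production setup.

```python
import torch
import torch.nn as nn

def symbolic_model(a: float, b: float) -> float:
    # The compact symbolic "world model" we already trust: exact addition.
    return a + b

def make_synthetic_batch(batch_size: int):
    # Sample inputs, label them with the symbolic model -> synthetic (x, y) pairs.
    x = torch.rand(batch_size, 2) * 10
    y = (x[:, 0] + x[:, 1]).unsqueeze(1)  # vectorized form of symbolic_model
    return x, y

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x, y = make_synthetic_batch(256)
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()   # gradient descent over the synthetic pairs
    opt.step()

with torch.no_grad():
    test = torch.tensor([[3.0, 4.0]])
    print("net(3, 4) ≈", net(test).item(), "| symbolic:", symbolic_model(3.0, 4.0))
```

Where no trusted symbolic model exists, there is nothing to sample from — which is the point Soumith goes on to make about where synthetic data does and does not make sense.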
So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of in gradient descent in a much more over-parameterized way. Outside of this, like where we don't have good symbolic models, like synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in all cases in every case or whatever. It's just where we as humans already have good symbolic models off. We need to impart that knowledge to neural networks and we figured out the synthetic data is a vehicle to impart this knowledge to. So, but people, because maybe they don't know enough about synthetic data as a notion, but they hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what news research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels like disingenuous because your goal is I need to copy the behavior of GPT-4 and-Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested this mathematically before,Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. 
So yeah, I think this New York Times opening eye case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between rag and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
I think OPT was cool.Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling loss and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed more to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author is fantastic. And I think he did play a reasonable big role in Llama 1 as well.Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34 and 70B parameter, they still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark really is some numbers, right?Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.Soumith [00:39:24]: That is by the end of this year and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.Swyx [00:39:38]: That's a lot of GPUs.Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.Alessio [00:40:31]: So one of the things with the scaling loss, like Chinchilla is optimal to balance training and inference costs. 
I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from infrastructure perspective? I think in your tweet, you say you can try and guess on like how we're using these GPUs. Can you just give people a bit of understanding? It's like, because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kinds at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero sum of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size and what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, like we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.EleutherSwyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay out.Swyx [00:43:01]: Why?Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take about in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.Making BetsAlessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of like accomplishment or success? How should people think about that especially when they're doing a PhD or early in their career? 
I think in Europe, I walked through a lot of the posters and whatnot, there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth for a PhD to not take a bet on something that is maybe not as interesting just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?

Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.

Swyx [00:44:59]: Would a limit on fundability, I'm just observing something like three months of compute, right? That's the top line, that's the like max that you can spend on any one project.

Soumith [00:45:09]: But like, I think that's very ill specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, are these family of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary pushing thing, or state-space models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be in the baseline level of fundability just so that you can just live. And then after that, really focus on intrinsic motivation and depends on your strengths, like how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like, and that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, I think is the way I think about it is like somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.

Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all Meta. So including your own inference needs, right? It's not just about training.

Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.

Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?

MTIA

Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon, I think we've even showed the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-Swyx [00:47:51]: As a test, right?Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build a hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware efficient it's going to be, the more power efficient it's gonna be, the more easier it's going to be to find the software, like the kernel's right to just map that one or two workloads to that hardware and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about everyone building, every large company building silicon, I think a bunch of the other large companies are building their own silicon as well, is they, each large company has a sufficient enough set of verticalized workloads that can be specialized that have a pattern to them that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Like obviously something like this is only useful if you hit a certain scale and that your forecasted prediction of those kind of workloads being in the same kind of specializable exploitable way is true. So yeah, that's why we're building our own chips.Swyx [00:50:08]: Awesome.Open Source AIAlessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing and maybe directions of the whole open source AI space?Soumith [00:50:32]: Yeah, in general, I think first, I think it's worth talking about this in terms of open and not just open source, because like with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume it's just I'm talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the most benefited value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. 
But like the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for it. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by say a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and those family of models and several other fairly transformative projects. FICE is one, Segment Anything, Detectron, Detectron 2. Dense Pose. I mean, it's-Swyx [00:52:52]: Seamless. Yeah, seamless.Soumith [00:52:53]: Like it's just the list is so long that we're not gonna cover. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start a open AI lab? Like what exactly is a benefit from a commercial perspective? And for then the thesis was very simple. It was AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like AI was the limiting factor and we just wanted AI to advance more and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. But very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument, it's a bit more nuanced. And then there's a second kind of open source, which is, oh, we built this project, nights and weekends and we're very smart people and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source, like both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, like a band of volunteers can just coordinate online and do something and then make that happen. And that's great.Open Source LLMsI wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something. Like where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. 
And then the LLM revolution kind of had the opposite effect. OpenAI stopped open sourcing their stuff, and DeepMind kind of didn't either; like all the cloud providers and all these other providers, they didn't open source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to: I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter. And there's also the control thing. I strongly believe if you want human-aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. All the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think the distribution of open source powers a certain kind of non-falsifiability that I think is very important.

On the open source models, I think it's going great, in the sense that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. And I think DPO also came out of the academic, open source side of things. So did any of the closed source labs already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-take-all that I talked about earlier in the podcast.

Open Source and Trust

I don't know, it just feels fundamentally good. When people ask, well, what are the ways in which it is not okay, I find most of these arguments, and this might be a little controversial, but I find a lot of arguments about whether closed source models are safer or open source models are safer to be very much related to what kind of culture people grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, because obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to people's innate biases from their childhood, and their trust in society and governmental aspects, that pushes them towards one opinion or the other. And I'm definitely in the camp that open source is going to have better outcomes for society. Closed source to me just means centralization of power, which, you know, is really hard to trust. So I think it's going well.
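The LoRA idea mentioned above fits in a few lines of PyTorch. The sketch below is an illustrative aside rather than code from the episode: it wraps a frozen pretrained linear layer with a small trainable low-rank update, which is what makes open-weight models cheap to fine-tune; the layer sizes, rank, and alpha values here are arbitrary placeholder choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W + (alpha/rank) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / rank
        # Low-rank factors: A projects down, B projects back up. B starts at zero,
        # so training begins from the unmodified pretrained behavior.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the small trainable correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    out = layer(torch.randn(2, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # only the two small factor matrices are trainable
```

Because only the two small factor matrices receive gradients, a single consumer GPU can adapt a large open-weight model, which is the accessibility point the conversation keeps returning to.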

Sway
Social Media In Wartime + Betting on the Future + A.I. Passes the Smell Test

Sway

Play Episode Listen Later Oct 13, 2023 66:35


As the Israel-Hamas war broke out, misinformation and fake imagery surged on X, the platform formerly known as Twitter. Can Meta's Threads fill the real-time news hole that X created? Should it? Then, Kevin debriefs us on his reporting on Manifold Markets, where Silicon Valley Rationalists bet on the likelihoods of different events. Plus: the company digitizing smell.

Today's Guest: Alex Wiltschko is the founder of Osmo, a company trying to digitize smell.

Additional Reading: Casey Newton on how the war in Israel may change Threads. Some tech insiders believe betting can change the world. The company Osmo put out a research paper showing that an A.I. model it had created was performing better than the "average human panelist" in predicting odor.

We want to hear from you.

How I Built This with Guy Raz
HIBT Lab! Osmo Salt: Nick DiGiovanni

How I Built This with Guy Raz

Play Episode Listen Later Jan 26, 2023 43:48


Do you know who holds the record for making the world's largest chicken nugget? How about the world's largest sushi roll? If you know Nick DiGiovanni, then you know the answer to those questions. Each week, more than 15 million followers across YouTube and TikTok gawk and drool over Nick's masterful and over-the-top culinary creations. Nick is at the helm of some analog business ventures too, including a DTC salt and seasoning company and his debut cookbook, Knife Drop, which publishes later this year. This week on How I Built This Lab, Nick talks with Guy about overcoming shyness to become an on-camera personality, and his recent decision to forgo Harvard Business School to continue on his path as a creator. Nick also opens up about his struggles to set strong work-life boundaries and speculates about his professional future.