Podcasts about risc

  • 221 PODCASTS
  • 390 EPISODES
  • 38m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 6, 2025 LATEST


Best podcasts about risc

Latest podcast episodes about risc

Presa internaţională
Using AI - between obvious benefits and hidden dangers

Presa internaţională

Play Episode Listen Later May 6, 2025 26:45


Romanians have started buying more and more life insurance, an encouraging signal both for their families and for policymakers and insurers. But how big is the impact of the evolution of artificial intelligence (AI) on the life insurance industry? We find out in the new edition of the "Ora de Risc" show by XPRIMM. Guest: Cristian Dan IONESCU, VIP Consultant at NN Asigurări de Viață și Pensii.

Tech Path Podcast
Ethereum Bull Case

Tech Path Podcast

Play Episode Listen Later Apr 28, 2025 16:29


As Ethereum gears up for one of its biggest network upgrades in history, we're diving into why Ethereum ($ETH) might be the most underrated asset in crypto right now. ~This Episode is Sponsored By Coinbase~ Buy $50 & Get $50 for getting started on Coinbase➜ https://bit.ly/CBARRON 00:00 Intro 00:19 Sponsor: iTrust Capital 00:52 Fear & Greed bottomed 01:30 ETH quarterly 02:03 Whales Buying ETH after Vitalik Roadmap 02:20 RISC-V 02:40 China RISC-V 02:50 Pectra upgrade 03:15 Eth upgrades ramping up 03:40 Vitalik: upgrades a decade in the making 04:08 Gas fees 04:34 ETH vs Bitcoin 06:12 Cathie buys Solana 06:30 Galaxy buys Solana 06:54 Don't be Cathie 07:33 Buy the Fear 08:00 Ethereum is better money 09:14 Unichain explosion 09:42 Vitalik: One-Click transactions incoming 10:13 Vitalik: Upgrades will kill Visa 11:10 Chains bridging to ETH for growth 11:45 ETH is gold 11:57 Shanghai gold exchange 12:07 cnbc Shanghai Gold Exchange 12:50 Insurance companies buying tokenized gold 13:11 Scott Bessent on Shanghai Gold Exchange vs Bitcoin 13:56 Scott Bessent is pro-stablecoins 14:18 Stablecoin regulation this week 15:16 Sony Playstation & Xbox on ETH 16:00 Outro #Ethereum #Crypto #cryptocurrency

All TWiT.tv Shows (MP3)
Untitled Linux Show 200: Who Needs A Desktop Anyway?

All TWiT.tv Shows (MP3)

Play Episode Listen Later Apr 27, 2025 107:08


Cosmic is nearly Beta-worthy, The NVIDIA Beta driver is solid, and we look back on a Code of Conduct legacy at Gnome. Then a shiny new RISC gadget catches our eyes and wallets, there's plenty of controversy in the Kernel, and new things are coming for Linux Graphics. For tips we have mispipe for a slightly different take on piping commands, Bitwarden's Command Line interface, and a quick primer on quotation marks on the command line. The show notes are at https://bit.ly/4d0dxlh and happy 200th! Host: Jonathan Bennett Co-Hosts: Rob Campbell and Jeff Massie Guest: Leo Laporte Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Danielle Newnham Podcast
Steve Furber: Reverse Engineering the Human Brain

Danielle Newnham Podcast

Play Episode Listen Later Apr 25, 2025 51:08


As April 2025 marks the 40th anniversary of the Arm architecture, I am re-releasing my episode with Steve Furber. What began as an ambitious project in a small corner of Cambridge, U.K., has grown into the world's most widely adopted computing architecture, now powering billions of devices - from sensors, smartphones and laptops to vehicles, datacenters and beyond. At 3pm on 26th April 1985, the chip that led to the world's first commercial RISC processor powered up... and changed the world! Steve Furber is a seminal computer scientist, mathematician and hardware designer whose work includes the BBC Microcomputer and the ARM 32-bit RISC microprocessor, which can be found in over 100 billion devices today. Steve studied Maths and then completed a PhD in Aerodynamics at Cambridge University before joining Hermann Hauser and Chris Curry at Acorn Computers. For the next decade, he worked with a first-class team of engineers and designers to revolutionise the home computer market before he and Sophie Wilson went on to design the ARM processor with a relatively small team and budget, and with little inkling of the consequences it might bring to the world. In 1990, Steve left Acorn and moved to Manchester, where he is now Professor of Computer Engineering at the university. There he led research into asynchronous systems, low-power electronics and neural engineering, which led to the SpiNNaker project: a supercomputer incorporating a million ARM processors optimised for computational neuroscience.
He is, essentially, trying to reverse engineer the brain - a lofty ambition even by his own admission. In this wide-ranging conversation, we discuss Steve's life journey from studying maths with professors such as the famed John Conway and Sir James Lighthill to the highs and lows of building the BBC Micro and the story behind the ARM 32-bit RISC microprocessor. I thoroughly enjoyed talking to Steve and am hugely excited about his SpiNNaker project, which we also discuss today. Enjoy!
--------------
Steve Furber info / SpiNNaker info / Micro Men film
Danielle on Twitter @daniellenewnham and Instagram @daniellenewnham / Newsletter
Watch Steve and Sophie talk about those early Arm days tomorrow - buy your tickets here.

MeteoMauri
We close the snow season, and talk about wildlife road deaths and flood risk

MeteoMauri

Play Episode Listen Later Apr 24, 2025 73:30


MeteoMauri, who closes the snow season today, tells us about Catalan companies that are increasingly…

Presa internaţională
Summer 2025 - between a great vacation and a ... dreadful one

Presa internaţională

Play Episode Listen Later Apr 16, 2025 27:09


We will soon be enjoying holidays again: first Easter and 1 May, then the summer break. Many of us are already making reservations and packing travel insurance. Why? Perhaps because it costs very little and can do us a great deal of good. But do we actually know how to use it? Do we know whom to buy it from? What do we do if something happens to us? What else should we know, and who will help us? We find the answers on the "Ora de Risc" show by XPRIMM, broadcast by RFI România, the most cited radio source at the national level. Guest: Liviu CRISTIAN, Sales Director (EUROLIFE FFH)

Presa internaţională
Erste warning: Romania risks sliding to a junk rating

Presa internaţională

Play Episode Listen Later Apr 16, 2025 47:31


An international banking institution warns Romania: postponing fiscal measures because of the elections will push the country straight into the "junk" category. A "junk" rating means the country is no longer recommended to investors. In recent months, amid the lack of reforms, rating agencies have downgraded Romania's outlook from "stable" to "negative".

Elena Lasconi's explanations for the situation USR has reached: "Betrayal is not easy. I threw myself into the fire for USR, and now they are pushing me out. I felt something was being prepared against me right after last year's elections were annulled." Those are the comments of presidential candidate Elena Lasconi. All the explanations for how things got here, in an interview this evening.

One year of the Dăruiește Viață Hospital: it has been a year since the only hospital built exclusively from donations and sponsorships, by the Dăruiește Viață Association, began operating. More than 3,500 children have been treated in the facility in that time. We will tell you about the NGO's other plans in 40 minutes.

The Franco-Algerian crisis deepens: Paris announces it is expelling 12 Algerian officials and has recalled its ambassador from Algiers. It all started after French prosecutors charged three Algerian citizens, including a consular official, on suspicion of kidnapping a prominent critic of the government, believed to have taken place in Paris last year.

Software Engineering Daily
Turing Award Special: A Conversation with David Patterson

Software Engineering Daily

Play Episode Listen Later Apr 10, 2025 55:46


David A. Patterson is a pioneering computer scientist known for his contributions to computer architecture, particularly as a co-developer of Reduced Instruction Set Computing, or RISC, which revolutionized processor design. He has co-authored multiple books, including the highly influential Computer Architecture: A Quantitative Approach. David is a UC Berkeley Pardee professor emeritus and a Google distinguished…

Podcast Notes Playlist: Latest Episodes
Turing Award Special: A Conversation with John Hennessy

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Apr 4, 2025


Software Engineering Daily: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. Hennessy served as the president of Stanford University from 2000 to 2016 and also co-founded MIPS Computer Systems and Atheros Communications. Currently…

Software Engineering Daily
Turing Award Special: A Conversation with John Hennessy

Software Engineering Daily

Play Episode Listen Later Apr 3, 2025 38:53


John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. Hennessy served as the president of Stanford University from 2000 to 2016 and also co-founded MIPS Computer Systems and Atheros Communications. Currently…

Notícies Migdia
More than half of at-risk individuals do not take part in colon cancer screening programmes

Notícies Migdia

Play Episode Listen Later Mar 31, 2025


More than half of at-risk individuals do not take part in colon cancer screening programmes

TechCast Podcast
#72 x86 vs ARM - what we need to know

TechCast Podcast

Play Episode Listen Later Mar 21, 2025 21:58


In this episode I take a close look at one of the key questions in the tech world: the well-established x86 architecture versus the increasingly assertive ARM chips. Will Apple Silicon and Qualcomm Snapdragon X really change the PC world as we know it? What does the future of ARM on Windows look like in the context of Qualcomm's chips? I cover the differences between RISC and CISC, software compatibility, and the key aspects of performance and energy efficiency. If you want to understand why ARM does not always run as well on Windows as it does on macOS, and what awaits us in the coming years, this episode is for you!

Presa internaţională
The last concern on Romanians' daily list

Presa internaţională

Play Episode Listen Later Mar 19, 2025 27:42


Although we often say that "life is priceless", paradoxically it is the last concern on Romanians' daily list, according to published studies and surveys. On the other hand, although having a rock-solid financial plan is probably the most important step in bringing balance to our daily lives, many of us ... still don't have one. Why take out insurance, and what is its role in stabilising and balancing our own budget? We find out on the "Ora de Risc" show. Guest: Cătălin VASILE, National Sales Director (NN Asigurări de Viață)

Sons de la ràdio - Cugat Radio
A study by the UAB and the UB detects differences…

Sons de la ràdio - Cugat Radio

Play Episode Listen Later Mar 13, 2025 12:52


A joint study by the UAB and the UB has detected differences…

Presa internaţională
New Government decisions on Romanians' pensions

Presa internaţională

Play Episode Listen Later Mar 12, 2025 26:01


What is the connection between how many children are born today, how many employees are working right now, and what pensions Romania's current retirees receive? With more and more Romanians opting for a voluntary private pension (Pillar III), since they will receive the mandatory one (Pillar II) anyway, questions arise about how pensions will evolve over the next decade. Perhaps that is exactly why the Government has new decisions on this system on the table today. We learn the important details on the "Ora de Risc" show. Guest: George MOȚ, private pensions specialist and founder of desprepensiiprivate.ro

le Psy du Travail
#15 Vincent de Gaulejac: Soft skills and managerial ideology

le Psy du Travail

Play Episode Listen Later Feb 27, 2025 44:37


Vincent de Gaulejac, professor emeritus at Université Paris 7, is the founder of the Clinical Sociology movement and, ten years ago, of RISC (Réseau International de Sociologie Clinique: https://www.sociologie-clinique.org/). I immediately thought of his work when I learned of the new AFNOR standard "Habiletés sociocognitives (soft skills) - classification, terminologie et utilisation" (XP X50-766). By seeking to describe and prescribe workplace attitudes and behaviours to this extent, is society not aggravating its managerial intoxication? Can this initiative be linked to the ideology of "human resources"? These are some of the questions I was able to put to our guest. Using the standardisation of "soft skills" as a starting point, in this episode we explore some of the everyday material of occupational psychologists. Vincent de Gaulejac addresses key concepts of clinical sociology, managerial ideology and suffering at work, offering a critical look at personnel-management practices. He analyses the paradoxes of modern management and the impact of the cult of performance on employees' psychological health, and offers avenues for rethinking the organisation of work (spoiler: essentially critique, in the noblest and most necessary sense of the word).
To go further, from our guest's extensive bibliography I particularly recommend: Le coût de l'excellence (new edition), with Nicole Aubert (2007): https://www.seuil.com/ouvrage/le-cout-de-l-excellence-nicole-aubert/9782020889988 La Société malade de la gestion* (2009): https://www.eyrolles.com/Litterature/Livre/la-societe-malade-de-la-gestion-9782757813256/ Le capitalisme paradoxant*, with Fabienne Hanique (2018): https://www.seuil.com/ouvrage/le-capitalisme-paradoxant-vincent-de-gaulejac/9782021188257 (* available for digital loan via the BnF's pass lecture offer). Also have a look at RISC's training offer: https://sociologie-clinique-formations.com. Being able to host such conversations is the result of your shares and your encouragement (hearts - thumbs - stars) on every platform.

Going Linux
Going Linux #464 · 2024 Year End Review

Going Linux

Play Episode Listen Later Feb 24, 2025 77:55


Bill commits to running MX Linux for a year and has issues with Ubuntu based distros. We discuss Linux drivers, the Cosmic desktop, Wayland display manager, gaming on Linux and much, much more. Episode Time Stamps 00:00 Going Linux #464 · 2024 Year End Review 04:44 Bill commits to running MX Linux for a year 07:57 Bill has issues with Ubuntu based distros 17:44 Some Linux driver maintainers de-listed 21:23 New file system accepted - no bovine intervention 25:31 Good news for team green - Nvidia 28:08 The Cosmic desktop from System76 is making great progress 30:18 What's going on with Mozilla? 30% layoffs? 34:38 The Raspberry Pi Foundation has been busy 37:00 Wayland display manager on Fedora and Ubuntu 42:40 RISC 44:43 Faster installs 46:39 HEIC - HEIF image support in Linux 50:14 Linux kernel cadence changed 54:25 Better gaming for everyone 56:34 Gnome feature fest 58:02 Ubuntu's anniversary flourishes 61:05 Wayland: All the cool kids are doing it 63:09 Ubuntu's desktop security center 65:00 Ubuntu app center can install .deb packages 68:50 Advances in gaming on Linux 69:42 Steamdeck uses Arch Linux 71:39 Fedora desktops galore 74:57 Intel has problems with 13th and 14th gen chips 76:60 goinglinux.com, goinglinux@gmail.com, +1-904-468-7889, @goinglinux, feedback, listen, subscribe 77:55 End

Rehash: A Web3 Podcast
S11 E3 | Understanding Zero Knowledge Infrastructure w/Reka (RISC Zero)

Rehash: A Web3 Podcast

Play Episode Listen Later Feb 13, 2025 52:56


In this episode, we're bringing back Reka, Head of Community at RISC Zero, to talk about the state of zero-knowledge (ZK) infrastructure and strategies for successful go-to-market campaigns for blockchain protocols. Reka shares her journey from being a founder to joining RISC Zero, her insights on the importance of ZK technology, and the challenges and opportunities it presents. She also talks about her experience advising various crypto projects and her thoughts on combining education and community engagement in the blockchain space. Reka previously appeared on Rehash S4 E1 alongside LDF: https://youtu.be/TXzEpbvSVo0?si=Nzj9Dvbq6yMU0CM_ ⏳ TIMESTAMPS: 0:00 Intro 01:49 Updates from Reka's past appearance in S4 E1 02:58 Why intent infrastructure? 06:17 Understanding ZK technology 13:44 Applications and benefits of ZK 18:21 Challenges and future of ZK 23:24 Joining RISC Zero and building Boundless Protocol 29:07 Education through memes 30:13 Marketing strategies for protocols 34:17 Balancing developer adoption and end user growth 37:13 Strategies for building a community from scratch 44:27 Questions from the community 50:09 Follow Reka

Darrers podcast - Ràdio Mollet
Fem salut, 13/2/2025 - Physical activity and cardiovascular risk

Darrers podcast - Ràdio Mollet

Play Episode Listen Later Feb 13, 2025 60:00


Health and medical advice from the perspective of detection, prevention and treatment, in collaboration with the Hospital de Mollet and the Institut Català de la Salut (ICS). Podcast recorded with enacast.com

Timpul prezent
Robert Lupițu: "Donald Trump's plan for the Gaza Strip risks affecting stability in the region"

Timpul prezent

Play Episode Listen Later Feb 5, 2025 28:29


At a difficult moment, as the ceasefire agreement between Hamas and Israel moves into its second phase, President Donald Trump comes up with an unclassifiable proposal: that the Palestinian population of the Gaza Strip be moved to other countries and that the US take control of the territory. "We will develop it, we will create thousands and thousands of jobs, and it will be something the whole Middle East can be very proud of," the American president said at a press conference also attended by Israeli Prime Minister Benjamin Netanyahu. He is the first foreign leader invited to the White House since Donald Trump began his second term. We discuss Donald Trump's proposals and their implications with international-politics analyst Robert Lupițu, editor-in-chief of the "Calea europeană" platform. Robert Lupițu believes Trump's statements show that "we are at the end of the rules-based international order, if it has not already ended. When the gendarme of the free world, as the US is called, adopts such a policy and such an approach through the second Trump administration, it shows that his handling of international relations in his second term will be different. Perhaps even different from the first, in which, though eccentric in his statements, he was somewhat more measured in his actions." Trump's plans take no account of the trauma of the Palestinian population, of international law, or of decades of diplomatic efforts to implement the two-state solution.
Robert Lupițu: "Not only does Donald Trump not play by the rules; neither does his partner, in terms of leadership, in Israel, Benjamin Netanyahu. There is an arrest warrant in his name, and he did not travel to Auschwitz for the commemoration of 80 years since the end of the Holocaust, even though it was said that an exception would be made for him and he would not be arrested. So Trump's statements can also be read as a plan that could add further tension, blow up the truce and pour fuel on the fire. (...) Israel has ignored certain international warnings, related not to dismantling the Hamas network, not to the fight against terrorism or to its right to keep defending itself after the Hamas attack of 7 October 2023, but to the targeting, even collateral, of the civilian population, of the innocent. If it is followed by concrete action - whether a military presence, an incursion or a transfer of population - this idea risks further damaging the road to stability." Press PLAY to listen to the full interview! A show by Adela Greceanu and Matei Martin. A Radio România Cultural production.

Notícies Migdia
Leading a healthy life reduces the risk of a deadly cancer

Notícies Migdia

Play Episode Listen Later Feb 4, 2025


Leading a healthy life reduces the risk of a deadly cancer

Business Breakdowns
Arm: The Silicon Blueprint - [Business Breakdowns, EP.200]

Business Breakdowns

Play Episode Listen Later Jan 8, 2025 43:43


This is Zack Fuss. Today, we're breaking down Arm Holdings. Arm designs the architecture powering billions of devices, from smartphones and data centers to IoT devices and automotive systems. In this episode, we'll explore Arm's unique value proposition and how it thrives as a licensing giant in a market dominated by leading-edge manufacturers. To break down Arm, I am joined by Jay Goldberg, who is the CEO and lead analyst at D2D Advisory, a technology and strategy consultancy. We discuss its business model, the partnerships that drive its growth, and its role in enabling companies like Apple, NVIDIA, and Qualcomm. We will also unpack Arm's business history, including its acquisition by SoftBank, its failed takeover by NVIDIA, and its 2023 IPO. Arm currently sports a $150 billion market cap with sales approaching $5 billion, a rather robust 30x revenue multiple. Please enjoy this Breakdown of Arm. For the full show notes, transcript, and links to the best content to learn more, check out the episode page here. ----- Business Breakdowns is a property of Colossus, LLC. For more episodes of Business Breakdowns, visit joincolossus.com/episodes. Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).
Show Notes (00:00:00) Introduction to Business Breakdowns (00:00:52) Introduction to Arm (00:02:27) Arm's Business Model Explained (00:05:05) CPU vs GPU Dynamics (00:07:33) Arm's Competitive Landscape (00:08:52) Historical Growth and Market Expansion (00:14:06) RISC vs CISC: Architectural Approaches (00:18:38) Arm's Licensing and Partnership Model (00:22:12) Arm's Chip Design Evolution (00:22:39) The Critical Role of Software (00:23:34) Arm's Compatibility and Ecosystem (00:23:41) Dramatic Recent History (00:24:12) SoftBank's Acquisition and Nvidia's Interest (00:25:15) Nvidia's Ambitious Bet (00:26:25) SoftBank's Wake-Up Call (00:27:02) Arm's Market Penetration (00:28:07) Arm's Ubiquity in Electronics (00:29:22) Influential Figures in Arm's Success (00:30:33) Arm's Financials (00:33:32) Risks and Competitive Threats (00:40:16) Future Opportunities and Lessons (00:41:10) Conclusion and Final Thoughts

BURN 4 IT
Embedded Evolution with Robert Jeutter

BURN 4 IT

Play Episode Listen Later Jan 3, 2025 35:43


In this episode we take a fascinating journey into the world of the invisible helpers that shape our daily lives: embedded systems. With our guest Robert Jeutter we talk about how these small marvels make our lives easier behind the scenes - from controlling complex machines to networked solutions that keep getting smarter. Drawing on his many years of experience, Robert describes how older systems are modernised and made fit for today's challenges. We look at what it takes to succeed in such projects - from smart planning to creative problem-solving. Beyond the technology, Robert also gives a glimpse of his passion for knowledge and community, which regularly leads him to exciting workshops and innovative projects. This episode is an invitation to discover the magic behind the scenes and to understand how technological progress shapes our lives - often without our noticing. Links for this episode: Robert's homepage and LinkedIn: https://wieerwill.de/ https://www.linkedin.com/in/wieerwill/ Robert's Git: https://git.wieerwill.dev/wieerwill/rust-embassy-stm32 Rust learning: https://doc.rust-lang.org/stable/book/ https://google.github.io/comprehensive-rust/index.html

Nova Ràdio Lloret
The 'Camina' programme is presented, guaranteeing sport for children at risk of exclusion

Nova Ràdio Lloret

Play Episode Listen Later Nov 22, 2024 0:57


Right now, 15 clubs in Lloret offer more than 100 scholarship-funded places.

Identity At The Center
#318 - SailPoint Navigate 2024 - SSF, CAEP, RISC, and SCIM Events with SailPoint's Mike Kiser

Identity At The Center

Play Episode Listen Later Nov 18, 2024 50:09


In this episode of the Identity at the Center podcast, hosts Jim McDonald and Jeff Steadman delve into the significance of shared signals in identity and access management (IAM). Featuring Mike Kiser, Director of Strategy and Standards at SailPoint, the discussion spans Kiser's career journey from IBM to SailPoint, the importance of standards and security in IAM, and the influence of AI on authenticity. The episode highlights the Shared Signals Framework, drawing parallels to cooperative dolphins and fishermen, and underscores the benefits of a standardized approach to signal sharing. The conversation also touches on the challenges and potential of event-based architectures and the evolving role of identity in cybersecurity. 00:00 Introduction and Initial Thoughts 02:50 Conference and Discount Codes 05:33 Guest Introduction and Background 11:31 AI and Authenticity 15:21 Shared Signals Framework 25:40 Decentralized Identity Management 26:28 Real-Time Identity Data Sharing 27:55 Developing Identity Standards 29:19 Vendor Collaboration and Challenges 31:28 Event-Based Identity Architectures 33:03 The Role of Big Tech in Identity Security 39:22 Customer Demand for Identity Solutions 40:49 Identity Security and Digital Identity 42:47 Technology vs. 
Humanity: A Musical Perspective 48:41 Conclusion and Final Thoughts Connect with Mike: https://www.linkedin.com/in/mike-kiser/ Learn more about SailPoint: https://www.sailpoint.com/ SailPoint Navigate 2024 London - Use code IDAC for a £300 discount - https://www.sailpoint.com/navigate/london Semperis' Hybrid Identity Protection Conference (HIP Conf) - Use code IDACpod for 20% off: https://www.hipconf.com/ Gartner IAM Summit - Save $375 on registration using our exclusive code IDAC375: https://www.gartner.com/en/conferences/na/identity-access-management-us Connect with us on LinkedIn: Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/ Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/ Visit the show on the web at http://idacpodcast.com

The VAUMC Connection
Voting While Christian - RISC

The VAUMC Connection

Play Episode Listen Later Oct 22, 2024 39:59


RISC (Richmonders Involved in Strengthening Communities) is a justice ministry organization made up of 25+ congregations from around the Richmond area. Their work is centered on people coming together to do justice, to make a collective impact and drive systemic change. RISC is part of a larger network called DART (Direct Action Research Training Center), and their partner organization is IMPACT in Charlottesville. Their congregational involvement and organizational partnerships, however, go beyond Richmond and Charlottesville proper and involve communities in the surrounding areas. The Missional and Community Engagement Office partners with RISC, alongside other community organizing and advocacy groups across the Commonwealth, to strengthen the link between the work of mission and the work of justice.

Presa internaţională
"The medical teams did everything in their power". What Minister Rafila and the head of the Bucharest College of Physicians said after the investigations into the deaths at Pantelimon Hospital. How they are reacting now (HotNews)

Presa internaţională

Play Episode Listen Later Aug 8, 2024 4:18


Are we risking a devastating war if we shoot down Russian missiles over Ukraine? A Romanian general explains an uncomfortable truth (Adevărul) - Mihaela Cambei, silver medal in Paris in the 49 kg category. The Chinese lifter took the gold on her final attempt (Gazeta Sporturilor) - "The medical teams did everything in their power." What Minister Rafila and the head of the Bucharest College of Physicians said after the investigations into the deaths at Pantelimon Hospital. How they are reacting now (HotNews)

"The medical teams did everything in their power" (HotNews). Wednesday's announcement by prosecutors that two doctors from Sf. Pantelimon Hospital had been detained for aggravated murder and attempted murder, in the case of the 17 suspicious deaths in April, comes four months after both the Ministry of Health and the Bucharest College of Physicians carried out their own investigations at the hospital and announced that they had found no irregularities whatsoever. In April the hospital also conducted an internal inquiry, after which it announced it had found nothing. Manager Bogdan Socea announced he would resign, at the request of Minister Rafila. On Wednesday morning, Marcel Ciolacu was asked by journalists whether he could provide more information about the case; he replied that Health Minister Alexandru Rafila would come out during the day to offer clarifications. After Wednesday evening's announcement by prosecutors of the two doctors' detention, HotNews tried to contact the health minister for further explanations, but he could not be reached. Cătălina Poiană, head of the Bucharest College of Physicians, was the only one who answered HotNews' call. She said only that the College's inquiry and the conclusions presented at the time were based strictly on the means the College had access to, namely the observation charts of the deceased patients and the doctors' statements, and that these raised no questions. She added that she would be able to give details on Thursday, after carefully reading the prosecutors' accusations and formulating a clearer position.

Are we risking a devastating war if we shoot down Russian missiles over Ukraine? A Romanian general explains an uncomfortable truth (Adevărul). Ukraine's president, Volodymyr Zelensky, insists that Romania and Poland shoot down Russian drones and missiles over his country, but, at least for now, NATO does not agree. "The scenario of Romania and Poland shooting down Russian drones over Ukraine is improbable. There would be a risk of escalation," says General Dan Grecu, doctor of military science and vice-president of the Association of Reserve Officers of Romania, decorated by the Americans in Iraq and Afghanistan. He also recalls that NATO, although it supports and aids Ukraine, opposes direct intervention by Poland or Romania. Unsurprisingly, the Ukrainian president is interested in drawing NATO into the war, since that way Russia would be defeated on the front. There is one more detail to take into account: Romania does not currently have a law allowing it to shoot down drones overflying its territory. Such a law is being drafted, but it is not yet known when it might pass through Parliament, as representatives of the Ministry of National Defence note. General Grecu also addressed the situation on the front. The bad news, he says, is that Ukrainian troops are under enormous pressure, while the Russians advance slowly but surely. Full story in Adevărul.

Mihaela Cambei, silver medal in Paris in the 49 kg category. The Chinese lifter took the gold on her final attempt (Gazeta Sporturilor). Mihaela Cambei (21) won the SILVER medal in weightlifting, 49 kg category, overtaken in extremis by China's Hou Zhihui on her very last, all-or-nothing attempt, with which she set a new Olympic record for this weight class. The Romanian posted the best result in the snatch, 93 kg, and a top lift of 112 kg in the clean and jerk. Overall, she was beaten by just 1 kg by her Chinese rival, Gazeta Sporturilor reports. Cambei, named Europe's best female weightlifter last year, began dreaming of an Olympic appearance back in 2018, when she won bronze at the Youth Olympic Games in Buenos Aires. She even tattooed the five colored rings on her right arm, hoping to add the medal after Paris. Mihaela lifts whole tonnes in training every week: four hours a day, and when she is not in the gym, she practices technique in front of the mirror. She loves music, always packs her speaker and has over 200 downloaded songs; among Romanian artists she admires Puya.

Jurnal RFI
Nicușor Dan: Buildings at seismic risk to be checked by Civil Engineering students

Jurnal RFI

Play Episode Listen Later Jul 23, 2024


Bucharest Mayor General Nicușor Dan announces that an inventory has been made of 15,000 buildings in protected areas of Bucharest, which will undergo a rapid visual assessment to establish their seismic risk. "We expect that we will need to intervene on 8,000 buildings. As long as we don't even know which ones they are, it's difficult; you will never manage to commission expert assessments for all of them. You have to start with an inventory," he says, as quoted by News.ro.

Ràdio Maricel de Sitges
Into the water, zero risk. A new summer prevention campaign

Ràdio Maricel de Sitges

Play Episode Listen Later Jul 10, 2024


The Catalan government's interior minister visited Sitges today to present a new prevention campaign for safe bathing. The event, held at La Fragata beach, was attended by Mayor Aurora Carbonell, who noted that so far this year Sitges has recorded 36 more rescues than at the same point last year, and that in 2023 close to 2,500 people received assistance. In view of this increase, Civil Protection has launched this campaign, which reviews the basic safety advice for minimizing risks when bathing at beaches or pools. The post "A l'aigua, risc zero. Nova campanya preventiva d'estiu" appeared first on Radio Maricel.

Presa internaţională
Albanian man risks the death penalty in China for trafficking a tonne of cocaine

Presa internaţională

Play Episode Listen Later Jul 8, 2024 3:09


The Albanian government is working to avert the death penalty in China in the case of an Albanian citizen accused of drug trafficking. Bulgaria has a new patriarch, and in North Macedonia an important archaeological discovery has been made at a site in the capital, Skopje.

Europa Plus press review: top-channel.tv writes about the case of a young Albanian who risks the death penalty in China for drug trafficking. His family has asked the Albanian government for help. "They are requesting the state's intervention to save him, since the 36-year-old did not have a leading, organizing role in the group of detainees accused of bringing a tonne of cocaine into China." In the reply the Foreign Ministry in Tirana sent to Top Channel, it states: "While demanding that all the rights of the citizen in question be respected at every stage of the judicial process, the Albanian side has asked the Chinese authorities to avoid applying the death penalty."

Patriarch Daniel, Bulgaria's new spiritual leader. A headline from bntnews.bg: "Metropolitan Daniel of Vidin has been elected and enthroned as the fourth head of the Bulgarian Orthodox Church since the restoration of the Bulgarian Patriarchate," the publication adds. The 52-year-old metropolitan was born Atanas Nikolov in 1972 in Smolyan. In 1996 he enrolled at Sofia University, where he studied English literature, but the following year he transferred to the university's Faculty of Theology. When priests of the Russian Church were expelled for espionage in September 2023, Metropolitan Daniel issued a statement saying he was filled with bitterness, and that political prejudices would pass while the truth of Christ would remain.

An important archaeological discovery in North Macedonia. A new "Great Mother" figurine has been discovered at the Neolithic site of Tumba Madjari, a settlement in the north-eastern part of Skopje and the most important Neolithic settlement in the Skopje valley, reports slobodenpecat.mk, Radio Free Europe in Macedonian. The site was discovered in 1961, during trial excavations connected with the construction of a motorway. "The newly discovered figurine, together with another amphora, are important objects that enrich the treasury of rare finds from the country's soil and lend a special distinction to North Macedonian archaeology, history and culture," said Skopje's culture minister. "This site will be declared cultural heritage of special importance, and Tumba Madjari, as part of the city of Skopje, will be made more visible for cultural tourism," the minister added. Archaeological research at the site has resumed after a ten-year pause, and the authorities will apply for its inclusion on the UNESCO World Heritage list.

Contributors to the Europa Plus press review: Erzen Shushku (Albania); Borislava Ivanovska (Bulgaria); Ivana Panovska (North Macedonia). Europa Plus is an RFI Romania project produced in partnership with the Agence Universitaire de la Francophonie.

Oxide and Friends
Is NVIDIA like Sun from the Dot Com Bubble?

Oxide and Friends

Play Episode Listen Later Jun 27, 2024 88:58 Transcription Available


Every so often we like to give our Oxide and Friends hot takes (or as Adam puts it "Bryan getting trolled on Twitter"). This time, a viral tweet suggests that NVIDIA is on the same trajectory as Sun Microsystems on its ascent during the Dot Com Bubble. From two alumni of Sun's rise and fall: maaaaybe not.In addition to Bryan Cantrill and Adam Leventhal, speakers included Todd Gamblin.Some of the topics we hit on, in the order that we hit them:The Tweet!OxF: Innovation Stagnation? -- wherein we forgot to read the tweetFramework laptop RISC-V mainboardTadpole SPARCbookOxF: A Requiem for SPARC with Tom Lyon -- we're RISC dead-endersAcquired on NVIDIA: part I, part II, part III, JensenRIVA 128OxF: Steve Jobs & the Next Big ThingIf we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!

DioCast - The Open Way of Thinking
This Computex may have changed the desktop forever - Diocast

DioCast - The Open Way of Thinking

Play Episode Listen Later Jun 20, 2024 69:49


Without a doubt, this Computex may have changed the desktop forever, with the unveiling of several laptop models that will ship with the new Qualcomm Snapdragon X Elite processors presented at the show. In this episode of Diocast we explore the universe of ARM processors for desktops, opening the door to a world of possibilities. Today we dig a little deeper into the subject, unraveling the mysteries of the RISC and CISC architectures.

These announcements revive an old fight in the technology market. At the center of this epic story stand two titans: RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer), the base architectures behind the main processors in use today, each with its strengths and weaknesses.

RISC, the master of agility, stands out for its elegance and efficiency. With a lean, precise instruction set, it executes each command deftly, spending less energy and generating less heat. That philosophy makes it ideal for mobile and embedded devices, where battery consumption and compact size are crucial.

CISC, a veteran wizard, impresses with its arsenal of complex, versatile instructions. Able to perform several operations with a single command, it yields more concise code and makes the programmer's life easier. That power, however, comes at a price: higher energy consumption and more heat. Perfect for desktops and servers, where raw performance reigns supreme.

It is against this backdrop that the new Snapdragon X Elite processors arrive as a breath of innovation. Uniting the best of both worlds, they promise to revolutionize the desktop with epic performance, state-of-the-art energy efficiency and compatibility with a wide range of software. This Computex really may have changed the desktop forever.

--- Leave a comment; it may be read on the next show. https://diolinux.com.br/podcast/essa-computex-mudou-o-desktop.html --- Send in a voice message: https://podcasters.spotify.com/pod/show/diolinux/message
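To make the RISC/CISC contrast above concrete, here is a toy sketch in Python. The instruction names and the tiny interpreter are invented for illustration; no real ISA works exactly like this. A CISC-style machine folds a memory-to-memory add into one complex instruction, while a RISC-style load/store machine spends four simpler ones.

```python
# Illustrative sketch (not real ISA code): the same "add the value at
# address B into the value at address A" operation expressed in a
# CISC style (one memory-to-memory instruction) and a RISC style
# (load/operate/store, with arithmetic done only in registers).

mem = {0x10: 5, 0x14: 7}

# CISC style: a single complex instruction touches memory twice.
cisc_program = [("ADD_MEM", 0x10, 0x14)]  # mem[0x10] += mem[0x14]

# RISC style: only loads and stores touch memory; ADD works on registers.
risc_program = [
    ("LOAD",  "r1", 0x10),   # r1 = mem[0x10]
    ("LOAD",  "r2", 0x14),   # r2 = mem[0x14]
    ("ADD",   "r1", "r2"),   # r1 = r1 + r2
    ("STORE", "r1", 0x10),   # mem[0x10] = r1
]

def run(program, mem):
    """Execute a list of (opcode, a, b) tuples against a memory dict."""
    regs = {}
    for op, a, b in program:
        if op == "ADD_MEM":
            mem[a] += mem[b]
        elif op == "LOAD":
            regs[a] = mem[b]
        elif op == "ADD":
            regs[a] += regs[b]
        elif op == "STORE":
            mem[b] = regs[a]
    return mem

print(run(cisc_program, dict(mem)))  # {16: 12, 20: 7}
print(run(risc_program, dict(mem)))  # same result, via 4 simpler instructions
```

The point of the sketch is the shape of the two programs, not the interpreter: the RISC version takes more, simpler steps, each touching memory only through explicit loads and stores, which is exactly what makes such instructions easy to pipeline and cheap to decode.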

Ethereum Daily - Crypto News Briefing
RISC Zero Launches zkVM 1.0

Ethereum Daily - Crypto News Briefing

Play Episode Listen Later Jun 17, 2024 4:27


RISC Zero launches its zkVM 1.0. Over 450k addresses claim their ZK tokens. Paradigm releases Alloy v0.1. And Ternoa launches its ZKEVM+ on testnet. Read more: https://ethdaily.io/488 Sponsor: Harpie is an onchain security solution that protects your wallet from theft in real time. Harpie helps you detect and block suspicious transactions before they execute, safeguarding your assets from malicious attacks and scams. Try Harpie for free at harpie.io/ethdaily.

Ràdio Maricel de Sitges
José Luis Herrera, new director of the Garraf Park, explains how the massif has prepared for the fire-risk season

Ràdio Maricel de Sitges

Play Episode Listen Later Jun 17, 2024


José Luis Herrera is the new director of the Garraf and Foix parks. A forestry specialist who comes from the Barcelona Provincial Council itself, his arrival has coincided with an increase in rainfall that has considerably eased the water stress of recent seasons. Even so, in summer every operation prioritizes protection against the risk of wildfire, although the response model, a responsibility shared among several administrations, keeps changing over time. With him we discussed the start of the campaign and the future strategies for the protected area. The post "José Luis Herrera, nou director del Parc del Garraf, explica com el massís s'ha preparat per a la temporada de risc d'incendis" appeared first on Radio Maricel.

Linux Weekly Daily Wednesday
Microsoft Copilot+ PCs Have Total Wincall

Linux Weekly Daily Wednesday

Play Episode Listen Later May 30, 2024 28:49


Mozilla publishes a roadmap packed with feature goodness! Canonical releases Ubuntu Server for the $48 RISC-V-powered Milk-V, finding an open-source Android keyboard with swipe support, and Microsoft Copilot+ PCs like to take a bunch of screenshots.

They Create Worlds
SEGA Saturn Part 1

They Create Worlds

Play Episode Listen Later Mar 15, 2024 70:33


TCW Podcast Episode 206 - SEGA Saturn Part 1   In this in-depth exploration of the SEGA Saturn, we dig into the technology that underpins the console: the challenges of 3D modeling, collision detection, and object occlusion, and how the Saturn, primarily a sprite-based system, manages to incorporate 3D elements. The pivotal moment arrives with the announcement of the Sony PlayStation, which pushed SEGA to pursue stronger 3D capabilities. SEGA sought Hitachi's help to make the machine fast enough for 3D demands, and the choice of the SH2 chip fortuitously allowed two processors to work in tandem. We go deep into the technical details here, laying the groundwork for the next episode, where we will unravel what these technological choices meant for SEGA and the Saturn.   TCW 127 - Dreams of SEGA: http://podcast.theycreateworlds.com/e/dreams-of-sega/ TCW 129 - The Crash That Almost Was: http://podcast.theycreateworlds.com/e/the-crash-that-almost-was/ Workstation VS Desktop: https://www.youtube.com/watch?v=KpBWaiWcXAk Workstation VS Gaming PCs: https://www.youtube.com/watch?v=IMSx2MUHSWM The Abyss - CGI Making of 1989: https://www.youtube.com/watch?v=gAFIUuFRkBA Terminator 2 Advanced Photorealistic Morphing: https://www.youtube.com/watch?v=37YrnLbQda0 How Did They Get the CGI in Jurassic Park to Look so Good?: https://www.youtube.com/watch?v=l4UuQxjFpfU TCW 087 - Virtual History: http://podcast.theycreateworlds.com/e/virtual-history/ Z-Buffer - Friday Minis: https://www.youtube.com/watch?v=F9GyYKcLDaw How Games Have Worked for 30 Years to Do Less Work: https://www.youtube.com/watch?v=CHYxjpYep_M Doom Engine - Limited but still 3D: https://www.youtube.com/watch?v=ZYGJQqhMN1U Collision Detection in Quake: https://www.youtube.com/watch?v=wLHXn8IlAiA MDShock: https://mdshock.com/ RISC vs CISC: 
https://www.youtube.com/watch?v=6Rxade2nEjk Universal Logic Gates: https://www.youtube.com/watch?v=jPLg_P9dHNY Rodrigo Copetti Technical Breakdown Website https://www.copetti.org/ SEGA Saturn Architecture: https://www.copetti.org/writings/consoles/sega-saturn/ SEGA Saturn Polygon Distorted Sprite Demo: https://www.youtube.com/watch?v=8TleepxIORU Why Was the SEGA Saturn so Hard to Develop On: https://www.youtube.com/watch?v=oa5pIfDvd68 Why Triangulate Game Assets: https://www.youtube.com/watch?v=6oY_Ogj9Gh0   New episodes are on the 1st and 15th of every month!   TCW Email: feedback@theycreateworlds.com  Twitter: @tcwpodcast Patreon: https://www.patreon.com/theycreateworlds Alex's Video Game History Blog: http://videogamehistorian.wordpress.com Alex's book, published Dec 2019, is available at CRC Press and at major on-line retailers: http://bit.ly/TCWBOOK1     Intro Music: Josh Woodward - Airplane Mode -  Music - "Airplane Mode" by Josh Woodward. Free download: http://joshwoodward.com/song/AirplaneMode  Outro Music: RolemMusic - Bacterial Love: http://freemusicarchive.org/music/Rolemusic/Pop_Singles_Compilation_2014/01_rolemusic_-_bacterial_love    Copyright: Attribution: http://creativecommons.org/licenses/by/4.0/
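The object-occlusion problem the episode discusses (deciding which of two overlapping surfaces is visible) is classically solved with a z-buffer, which the linked Friday Minis video covers. Here is a minimal sketch, with scene values invented for illustration:

```python
# Toy z-buffer sketch: for each pixel, remember the depth of the nearest
# surface drawn so far, and only overwrite the pixel when the incoming
# fragment is closer. Scene contents and sizes are made up.

WIDTH, HEIGHT = 4, 3
color = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]

def draw_rect(x0, y0, x1, y1, z, c):
    """Draw an axis-aligned rectangle at constant depth z with color c."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:      # nearer than what is there: visible
                depth[y][x] = z
                color[y][x] = c

draw_rect(0, 0, 3, 2, z=5.0, c="A")  # far rectangle
draw_rect(1, 1, 4, 3, z=2.0, c="B")  # near rectangle, overlapping A

for row in color:
    print("".join(row))
# AAA.
# ABBB
# .BBB
```

Note that draw order stops mattering once depth is tracked per pixel; that is precisely what hardware without a z-buffer, like the Saturn's sprite-oriented video chips, had to approximate by other means.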

Vast and Curious, cu Andreea Roșca
Laura Țeposu. Personal responsibility is a form of power, of risk and safety, and the best business decision

Vast and Curious, cu Andreea Roșca

Play Episode Listen Later Mar 8, 2024 63:55


Laura Țeposu is co-founder of Libris.ro, one of the largest online bookstores and one of the very few profitable ones. She became a bookseller in her first year of university, while training to be an accountant. She launched libris.ro at 33, in the basement of the physical bookstore, with her own savings. Laura is an interesting combination of optimism and caution: although she has sometimes made bigger bets than she could carry, until recently she kept her membership in the accountants' guild. She says she always has plans B and C ready. We talked about how she weighs risks, the importance of attending to every detail, her best business decision, which was handing control over to her team, her life principles, how she balances entrepreneurship with family life, the habits that give her energy and the principles that help her solve problems. **** This podcast is supported by Dedeman, the largest 100% Romanian entrepreneurial company, which believes in the power to change the world through perseverance and involvement, and in the power to build it through every project, personal or professional, small or large. Dedeman promotes innovation, education and the entrepreneurial spirit, and has been a trusted partner of The Vast&The Curious almost from the beginning. Together, we create opportunities for meaningful conversations and for questions that help us evolve and become better, as people and as organizations. **** The Vast&Curious podcast is supported by AROBS, the largest technology company listed on the Bucharest Stock Exchange. A Romanian company founded 25 years ago in Cluj by entrepreneur Voicu Oprean, AROBS is today an international company with offices in nine countries and more than 1,200 people and partners across Europe, Asia and America. AROBS believes in a culture of involvement, continuous evolution and long-term partnership.
**** Notes, a summary of the conversation, and the books and people referenced in the podcast can be found at andreearosca.ro. To receive new episodes, you can subscribe to the newsletter at andreearosca.ro. If you listen to this podcast, please leave a review on Apple Podcasts. It takes a few seconds and helps us improve the topics and quality and interview new interesting people.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.Life in the GPU-Rich LaneBack in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.Meta AI's Epic LLM RunBefore Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how “open” it was - right down to the logbook! 
They used 992 80GB NVIDIA A100 GPUs, and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.

In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.

July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for the 70B, which was trained on 1T.

All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless. In one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:

"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. And I think that was a strong reason why I ended up where I am. 
So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!What It Takes For Open Source AI to WinHowever Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. 
And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source."

If you're working on solving open source coordination, please get in touch!

Show Notes

* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
* Dobb-E
* OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps

* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. 
This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.

Soumith [00:00:17]: Thanks for having me.

Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor, because I guess Yann really sort of introduced you to that world.

Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. It's just like maybe the perception of, oh, is this person successful or not might be different. I think after a baseline, your happiness is probably more correlated with your intrinsic stuff.

Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to, about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.

Soumith [00:02:01]: I mean, in a very convoluted way.

Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed, or didn't get accepted by Disney, and then went and created Pixar, which got bought by Disney after creating Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more sort of hardware and cluster decisions. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?

Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for; I think I would do that as a passion project on the side.

Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to Meta AI in general.

PyTorch vs Tinygrad tradeoffs

Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. 
So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to which why PyTorch has the kind of complexity. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. 
But a lot of that complexity comes from the fact that, in a very simple, explainable way, you have memory hierarchies. Your CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, a GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or RoCE or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, to find the optimal thing. And what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you optimally place this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So you're searching for that optimal performance at runtime, but that's the trade-off.
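The shape-dependent tuning problem Soumith describes can be sketched in a few lines of plain Python. This is a toy illustration, not PyTorch's actual dispatch machinery; the config table and the element-count heuristic are invented for the example:

```python
# Toy sketch of shape-dependent kernel selection: the "optimal" kernel
# configuration (tile size, vectorization width, etc.) differs per input
# shape, so a framework keeps many pre-tuned configs and picks one at
# runtime based on the inputs it actually sees.

# Hypothetical pre-tuned configs: (min_element_count, config)
CONFIGS = [
    (1 << 20, {"tile": 128, "vec": 8}),  # large tensors: big tiles
    (1 << 12, {"tile": 64, "vec": 4}),   # medium tensors
    (0, {"tile": 16, "vec": 1}),         # small tensors: low overhead
]

def pick_config(shape):
    """Pick a kernel config from the total element count of `shape`."""
    n = 1
    for dim in shape:
        n *= dim
    for threshold, cfg in CONFIGS:
        if n >= threshold:
            return cfg
    return CONFIGS[-1][1]

# The same operator gets a different config for different input shapes.
print(pick_config((4096, 4096)))  # large, matmul-sized input
print(pick_config((8, 8)))        # tiny input
```

Multiply this little table by hundreds of configurations per operator, across roughly a thousand operators, and the line count Soumith quotes becomes less surprising.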
I don't think George's vision is achievable unless we have great breakthroughs. He should be thinking about a narrower problem, such as: I'm only going to make this work for self-driving car convnets, or I'm only going to make this work for LLM transformers of the Llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and all these other factors, I think it's not easy to avoid the complexity.

PyTorch vs Mojo

Alessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially the AMD stack be better than ROCm. How come PyTorch has been such a Switzerland versus just making Meta hardware go brr?

Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. The way we articulate it to all the hardware vendors and software vendors who come to us wanting to build a backend in core for PyTorch and ship it by default is that we only look at the user side of things. Like if users are using a particular piece of hardware, then we want to support it.
We very much don't want to king-make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the MPS support, and we spent significant time from Meta-funded engineers on that as well, because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like, oh, which hardware should we start taking opinions on.

Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?

Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like, huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? Those conversations don't really come up for us. The conversations are more, well, does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view: is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.

Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly.
Maybe it's more like, what problems would you look to solve that you have right now?

Soumith [00:11:48]: I think it depends on what problems Mojo will be useful for.

Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.

Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for, say, torch.compile to smoothly also consume Mojo subgraphs, and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.

PyTorch vs MLX

Swyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?

Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Awni used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now.
It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling that, you know, it's a framework if you use your MacBook, but once you go server side or whatever, that's not my problem and I don't care. Or MLX enters the server-side set of things as well. One of these two things will happen, right? If the first thing happens, MLX's overall addressable market will be small, but it'll probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.

Swyx [00:14:44]: Like having to deal with distributed compute?

Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, just like having a generalization of the concept of a backend, how they treat compilation and its overheads. Right now they've deeply assumed the whole MPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side, and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.

Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.

Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.

Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.

Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel, is that they have the interconnect that no one else has. Like AMD GPUs are pretty good.
I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink, is uniquely awesome. I'm sure the other hardware providers are working on it, but-

Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.

Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.

PyTorch Mafia

Alessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Caffe?

Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remember he was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConvNet benchmarks. They're just benchmarking all the convolution kernels that were available at that time.
It hilariously became big enough that at that time AI was getting important, but not important enough that industrial-strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConvNet benchmarks in their pitch decks as like, oh, you know, on ConvNet benchmarks, this is how we fare, so you should fund us. I remember Nervana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but a different time. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch/Caffe2 cohort of things and now end up at various other places.

Swyx [00:18:50]: I think, both as an investor and as people looking to build on top of their services, it's an uncomfortable, I-don't-know-what-I-don't-know pitch. Because I've met Yangqing and I've met Lin Qiao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about these sort of new inference-as-a-service companies?

Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as: what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale.
You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.

Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?

Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...

Swyx [00:20:44]: Benchmarks.

Soumith [00:20:44]: Yeah, whether I'd use it, and its usability and reliability and speed, right?

Swyx [00:20:51]: Quality as well.

Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and said, use our stuff, it's great, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up ConvNet benchmarks. There was some recent drama around Anyscale. Anyscale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test apples to apples on the kind of endpoints that the other providers, their competitors, offer on their benchmarks, and that is a due-diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.

Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was Anyscale built these benchmarks for end users to just understand what they should pick, right?
And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not, oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over various times of the day and times of the week. And the nature of the workloads, is it just some generic single paragraph that you're sending that is cacheable? Or is it testing a real-world workload? I think that kind of rigor in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if, before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.

Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important, I think the market maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is, because otherwise everyone's going to play dirty.

Soumith [00:23:55]: Yeah, absolutely.
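The kind of rigor Soumith is asking for, aggregating many calls and folding in total cost of ownership rather than quoting a single latency number, can be sketched roughly as follows. The latency samples and the pricing model are made up for illustration:

```python
# Toy sketch of a fairer API benchmark summary: latency percentiles over
# many calls plus monthly cost, instead of one headline latency number.
# A real benchmark would also vary workloads and times of day/week.

def percentile(samples, q):
    """Nearest-rank percentile of a list of numbers (q in [0, 100])."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(q / 100 * (len(s) - 1))))
    return s[idx]

def summarize(latencies_ms, price_per_call, calls_per_month):
    """Aggregate latency tail behavior and total cost of ownership."""
    return {
        "p50_ms": percentile(latencies_ms, 50),
        "p95_ms": percentile(latencies_ms, 95),
        "monthly_cost": price_per_call * calls_per_month,
    }

# 100 fake latency samples: mostly fast, with a slow tail that a
# single-call benchmark would likely miss entirely.
samples = [20.0] * 90 + [200.0] * 10
report = summarize(samples, price_per_call=0.0001, calls_per_month=1_000_000)
print(report)
```

The point of the p95 column is exactly Soumith's: one lucky call at 20 ms says nothing about the 200 ms tail a real subscriber would hit thousands of times a month.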
My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to be driven down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API for and how much latency your customers are willing to tolerate. You need to figure out how to squeeze your margins. Like what is your unique thing here? I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those moats only last for a month or two. These ideas quickly propagate.

Swyx [00:24:38]: Even if they're not published?

Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama-style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? It's not like we have a huge diversity of hardware that you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.

Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-

Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.

Alessio [00:25:28]: Any ideas, instead, of things that are not being beaten to death that people should be paying more attention to?

Novel PyTorch Applications

Swyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right?
Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?

Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways. Like from the ML angle, what kind of models are being built? And you get all the way from state-space models and all of these things to stuff like nth-order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factors from the ML side of things. And then there's the other set of interesting factors from the applications point of view. It's used in everything from Mars rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style of models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting because again, it's an example of combining the symbolic models with the gradient-based ones.
But there is stuff like AlphaGeometry that PyTorch is used in, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.

Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into real-world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?

Swyx [00:28:27]: Already is a rage.

Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world, or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30-parameter model. And it's just very hard to compute, as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly.
The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them. So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input/output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of, via gradient descent, in a much more over-parameterized way. Outside of this, where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in every case or whatever. It's just for what we as humans already have good symbolic models of. We need to impart that knowledge to neural networks and we figured out that synthetic data is a vehicle to impart this knowledge. But people, maybe because they don't know enough about synthetic data as a notion, hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.

Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what Nous Research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2, and then fine-tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else?
I don't know.

Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels disingenuous because your goal is, I need to copy the behavior of GPT-4 and-

Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic data.

Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as a society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested on this mathematically before,

Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. So yeah, I think this New York Times OpenAI case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before.
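Soumith's core claim a few turns back — synthetic data works where we already hold a compact symbolic model and just need to transfer it into a network via input/output pairs — can be made concrete with a toy sketch. The task here, addition mod 97, is an invented stand-in for any low-rank symbolic model:

```python
import random

# Toy sketch of "synthetic data from a symbolic model": the entire world
# model is one compact symbolic rule (addition mod 97), and we expand it
# into input/output pairs that a neural net could then learn by gradient
# descent in a much more over-parameterized way.

MOD = 97  # the whole "low rank" symbolic model is this one rule

def symbolic_model(a, b):
    return (a + b) % MOD

def make_synthetic_dataset(n, seed=0):
    """Generate n ((a, b), label) pairs, labeled by the symbolic rule."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a, b = rng.randrange(MOD), rng.randrange(MOD)
        data.append(((a, b), symbolic_model(a, b)))
    return data

dataset = make_synthetic_dataset(1000)
# Every label is consistent with the symbolic rule by construction; that
# consistency is exactly what makes the data useful, and why "just create
# a bunch of random data" misses the point.
print(len(dataset), dataset[0])
```

Where no such rule exists, there is nothing to expand, which is the boundary Soumith draws for when synthetic data makes sense at all.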
And then finally, for synthetic data, the thing that I'm personally exploring is solving this stark paradigm difference between RAG and fine-tuning, where you can kind of create synthetic data off of your retrieved documents and then fine-tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine-tune on. And then you can fine-tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.

Soumith [00:34:13]: I think what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.

Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.

Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model, or like some implicit symbolic model, of language.

Swyx [00:34:48]: Okay.

Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp at is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?

Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math-heavy questions versus not. I don't have a good calibration of whether I know the answer or not. You know, there are common criticisms that are, you know, transformers will just fail at X.
But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers, called like the science of deep learning or something. So we'll get to know more. I don't know the answer.

Meta AI and Llama 2/3

Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and, you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest from Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast, was the death of Chinchilla, as people say. Any interesting insights there around the scaling laws for open source models or smaller models or whatever that design decision was when you guys were doing it?

Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of, because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really published in gory detail.

Swyx [00:36:50]: The logs.

Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. I think OPT was cool.

Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.

Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.

Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?

Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral.
I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling laws and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as, after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, who's the first author, I think, is fantastic. And I think he did play a reasonably big role in Llama 1 as well.

Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.

Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34 and 70B parameter models, they still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?

Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, I mean, Mark already said some numbers, right?

Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.

Soumith [00:39:24]: That is by the end of this year: 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.

Swyx [00:39:38]: That's a lot of GPUs.

Soumith [00:39:39]: We'll talk about that separately.
But the way we think about it is, we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-

Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.

Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one, and when do you start training the next one? And how do you make those decisions? The data: do you have net new data, better clean data for the next one, in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.

Alessio [00:40:31]: So one of the things with the scaling laws, like Chinchilla, is optimal to balance training and inference costs. I think at Meta's scale, you would rather pay a lot more, maybe, at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you say you can try and guess how we're using these GPUs. Can you just give people a bit of understanding? Because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs, and that's obviously not true, I'm sure. How do you allocate between the research, FAIR, and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp, and all of that?

Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kind at any company. You run a VC portfolio: how do you allocate your investments between different companies or whatever? You kind of make various trade-offs, and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project?
It's very much a zero-sum set of trade-offs. And it also comes into play how your clusters are configured, like overall, what you can fit of what size in what cluster, and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.

Alessio [00:42:05]: So even the GPU-rich run through the same struggles of having to decide where to allocate things.

Soumith [00:42:11]: Yeah. I mean, at some point, I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.

Eleuther

Swyx [00:42:47]: Stella from Eleuther sometimes says that, because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.

Soumith [00:42:57]: I mean, that's a cool, high-conviction opinion that might pay out.

Swyx [00:43:01]: Why?

Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate, and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff, and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not, because, oh yeah, someone else did the same thing you did.
It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.

Making Bets

Alessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career? I think in Europe, I walked through a lot of the posters and whatnot; there seems to be mode collapse, in a way, in the research, a lot of people working on the same things. Is it worth it for a PhD to not take a bet on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or, yeah, what suggestions would you give?

Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like, whatever reasonable, normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like, you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.

Swyx [00:44:59]: Would a limit on fundability, I'm just observing, be something like three months of compute, right? That's the top line, that's like the max that you can spend on any one project.

Soumith [00:45:09]: But like, I think that's very ill-specified, like, how much compute, right? I think that the notion of fundability is broader. It's more like, hey, are these families of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state-space models or whatever. Like, all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable.
Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability, just so that you can just live. And then after that, really focus on intrinsic motivation and, depending on your strengths, how you can play to your strengths and your interests at the same time. Like, I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it, is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.

Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion: that's for all of Meta, so including your own inference needs, right? It's not just about training.

Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.

Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?

MTIA

Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon. I think we've even showed the standard photograph of you holding the chip that doesn't work. Like, as in the chip that you basically just get like-

Swyx [00:47:51]: As a test, right?

Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon, and we'll probably talk more about it when the time is right, but-

Swyx [00:48:00]: Like, what gaps do you have that the market doesn't offer?

Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively, while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's gonna be, the easier it's going to be to find the software, like the kernels, right, to just map that one or two workloads to that hardware, and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each large company has a sufficiently large set of verticalized workloads that can be specialized, that have a pattern to them, that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale, and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot.
Like, obviously something like this is only useful if you hit a certain scale, and your forecasted prediction that those kinds of workloads will stay specializable and exploitable in the same way holds true. So yeah, that's why we're building our own chips.

Swyx [00:50:08]: Awesome.

Open Source AI

Alessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics, and going back to open source, you had a very good tweet. You said that a single company's closed-source effort rate-limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing, and maybe directions of the whole open source AI space?

Soumith [00:50:32]: Yeah, in general, I think first it's worth talking about this in terms of open and not just open source, because with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume I'm just talking about open. And then there's the whole notion of licensing and all that: commercial, non-commercial, commercial with clauses, and all that. I think at a fundamental level, the biggest value of open source is that you make the distribution very wide. It's just available with no friction, and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license, and I'm a student in India. I don't care about the license. I just don't even understand the license. But the fact that I can use it and do something with it is very transformative to me. Like, I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed-source company for.
So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open-sourcing it. And that kind of effort is not really feasible by, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open-sourced Llama and that family of models, and several other fairly transformative projects. FAISS is one, Segment Anything, Detectron, Detectron 2, DensePose. I mean, it's-

Swyx [00:52:52]: Seamless. Yeah, Seamless.

Soumith [00:52:53]: Like, it's just, the list is so long that we're not gonna cover it. So I think Meta comes into that category where we spend a lot of CapEx and OpEx, and we have a high talent density of great AI people, and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? Like, what exactly is the benefit from a commercial perspective? And back then, the thesis was very simple. It was: AI is currently rate-limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like, AI was the limiting factor, and we just wanted AI to advance more, and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab, and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational, right? It's still the same to a large extent with the Llama stuff. And it's the same values, but the argument is a bit more nuanced.
And then there's a second kind of open source, which is: oh, we built this project nights and weekends, and we're very smart people, and we open-sourced it, and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source as both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online and do something and then make that happen. And that's great.

Open Source LLMs

I wanna cover a little bit about open source LLMs, maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something, where more and more pressure within the community was to open-source their stuff so that their methods and stuff get adopted. And then the LLM revolution kind of took the opposite turn. OpenAI stopped open-sourcing their stuff, and DeepMind kind of didn't either; like all the other cloud and all these other providers, they didn't open-source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to: I'm a student in India with no money. What is my accessibility to any of these closed models? At some scale, I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human-aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place.
And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like, all the friends I hang out with talk about some random thing like Dyson spheres or whatever; that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source, powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, it's going great, in the fact that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So, do any of the closed-source labs, did any of them already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-takes-all that I talked about earlier in the podcast.

Open Source and Trust

I don't know, it just feels fundamentally good. Like, when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed-source models are safer or open-source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed-source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open-source argument is what they take.
I think there's a deep connection to people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of: open source is definitely going to have better outcomes for society. Closed source, to me, just means centralization of power, which, you know, is really hard to trust. So I think it's going well.

Fain & Simplu Podcast
WHY DOES ROMANIA RISK DISAPPEARING AS A NATION? MIRCEA GEOANĂ. | Fain & Simplu Podcast 172

Fain & Simplu Podcast

Play Episode Listen Later Jan 18, 2024 95:01


From the top of the NATO hierarchy, Mircea Geoană reveals the current geopolitical situation. The Deputy Secretary General of the most important military alliance takes the stage of the Banat Philharmonic in Timișoara, the European Capital of Culture! And he comes with a wealth of truly exclusive information. Don't miss the chance to learn what awaits us, as a country, from one of today's European leaders. Are you about to watch the podcast with Romania's future president? Find out today. On Fain & Simplu, with Mihai Morar.

Auto Remarketing Podcast
USED CAR WEEK 2023 PODCAST: Boosting repo percentages via compliance

Auto Remarketing Podcast

Play Episode Listen Later Jan 17, 2024 33:09


We continue our series on the Auto Remarketing Podcast highlighting some of the panels and presentations from Used Car Week 2023. This episode features a presentation and discussion with Holly Balogh and Stamatis Ferarolis of RISC titled, “Improve Repossession Percentages Through Managed Direct Compliance.”

Interchain.FM
Risc Zero Seeks to Scale Decentralized Computing for Modular Architectures through zk

Interchain.FM

Play Episode Listen Later Jan 1, 2024 25:17


✨ About RISC Zero ✨ RISC Zero is a computational platform that brings the power of zero-knowledge (zk) primitives to any onchain or offchain network. Applications using RISC Zero range from onchain orderbooks, gamefi, and RWAs to any onchain application that needs higher throughput without the cost. They're developing the zkVM, a generalized virtual machine (VM) with a built-in zk proving system that implements the same kind of computer architecture developers are familiar with in web2, with an execution environment that can be written in any preferred programming language rather than being limited to just Solidity. Products discussed: Bonsai, zkVM. #blockchaintech #technews #web3news #interchainfm #cryptocurrency #cryptopodcasts

Tech Café
Steampunk Café: the history of tech in continental Europe (2/2)

Tech Café

Play Episode Listen Later Dec 29, 2023 77:21


Infomaniak shares Tech Café's values: ethics, ecology, and respect for privacy. Discover our partner's services at Infomaniak.com. The history of tech in France from the 1950s onward. ❤️ Patreon

The Personal Computer Radio Show
The Personal Computer Radio Show 12-27-23

The Personal Computer Radio Show

Play Episode Listen Later Dec 27, 2023 54:00


The Personal Computer Show, Wednesday, December 27th, 2023. PRN.live, streaming on the Internet, 6:00 PM Eastern Time.
IN THE NEWS: Rite Aid Faces 5-Year Facial Recognition Ban; Intel is Buying Leading-Edge Lithography Tools; Windows on RISC versus Windows on x86 Architecture; What's the Big Rush to Migrate Windows 10 to Windows 11?
ITPro Series with Benjamin Rockwell: Touching on Some of the Ergonomics of Computing.
From the Tech Corner: The Hybrid Work Model is Here to Stay and Companies Should Prepare for this Reality.
Technology Chatter with Benjamin Rockwell and Marty Winston: Targus LED Disinfection Light, Anker 733 Power Bank.

Let's Talk AI
#148 - Imagen 2, Midjourney on web, FunSearch, OpenAI ‘Preparedness Framework', campaigning voice clone

Let's Talk AI

Play Episode Listen Later Dec 24, 2023 115:47


Our 148th episode with a summary and discussion of last week's big AI news! Read out our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai Timestamps + links: (00:00:00) Intro / Banter Tools & Apps(00:02:43) Google Deepmind unveils its most advanced AI image generator, Imagen 2 (00:08:21) Anthropic will help users if they get sued for copyright infringement (00:13:50) Midjourney Alpha is here with AI image generations on the web (00:16:34) Instagram introduces gen-AI powered background editing tool (00:17:09) Microsoft drastically expands Azure AI Studio to include Llama 2 Model-as-a-Service, GPT-4 Turbo with Vision (00:18:54) ChatGPT Is Apparently Becoming Lazy as It Has Started Asking Users to Solve Their Own Problems (00:22:17) You can create your own AI songs with this new Copilot extension (00:23:57) Stability AI announces paid membership for commercial use of its models Applications & Business(00:25:42) ByteDance is secretly using OpenAI's tech to build a competitor (00:31:55) Intel unveils new AI chip to compete with Nvidia and AMD (00:36:36) Chinese chip-related companies shutting down with record speed — 10,900, or around 30 per day, shut down in 2023 (00:40:11) TSMC mentions 1.4nm process tech for the first time, says 2nm remains on track (00:42:57) Meta has done something that will get Nvidia and AMD very, very worried — it gave up on GPU and CPU to take a RISC-y route for AI training and inference acceleration (00:46:17) Nvidia rushes to deliver modified AI GPU chips to China customers, allegedly places 'Super Hot Run' priority order with TSMC (00:49:17) Sam Altman's OpenAI agrees to pay German media giant Axel Springer for using its content to train AI models Projects & Open Source(00:52:20) Introducing DeciLM-7B: The Fastest and Most Accurate 7 Billion-Parameter LLM to Date (00:57:27) Introducing Stable Zero123: Quality 3D Object Generation from Single Images Research & Advancements(01:00:42) FunSearch: Making new discoveries in mathematical sciences using Large Language Models (01:09:12) OpenAI Demos a Control Method for Superintelligent AI (01:16:41) Cheating Fears Over Chatbots Were Overblown, New Research Suggests (01:18:13) SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention (01:20:04) CogAgent: A Visual Language Model for GUI Agents (01:21:10) Limits to the Energy Efficiency of CMOS Microprocessors Policy & Safety(01:24:24) OpenAI announces ‘Preparedness Framework' to track and mitigate AI risks (01:32:08) Pro-China YouTube Network Used A.I. to Malign U.S., Report Finds (01:37:07) AI is a danger to the financial system, regulators warn for the first time (01:38:42) Anonymous Sudan hacking group sets sights on ChatGPT (01:40:33) Scenario planning for an AGI future (01:42:51) The widening web of effective altruism in AI security Synthetic Media & Art(01:49:22) Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real (01:52:34) Pakistan's former prime minister is using an AI voice clone to campaign from prison

Zero Knowledge
Episode 296: Zeth, Bonsai and RISC Zero with Brian and Jeremy

Zero Knowledge

Play Episode Listen Later Oct 25, 2023 62:13


In this week's episode, Anna (https://twitter.com/annarrose) and Nico (https://twitter.com/nico_mnbl) catch up with Brian Retford (https://twitter.com/BrianRetford) and Jeremy Bruestle (https://twitter.com/BruestleJeremy) from RISC Zero (https://www.risczero.com/). They delve into the current status of the project, breaking down the components of the stack, from the RISC Zero zkVM leveraging the RISC-V instruction set architecture to the Bonsai proving service and their new zkEVM, Zeth. They also touch on their design methodology, how the system components integrate and future developments for RISC Zero. Here's some additional resources for this episode: RISC Zero Developer Guide: Rust Resources (https://dev.risczero.com/zkvm/developer-guide/rust-resources) RISC Zero GitHub: Rust Crates (https://github.com/risc0/risc0#rust-libraries) Using Continuations to Prove Any EVM Transaction (https://www.risczero.com/news/continuations) RISC-V Website (https://riscv.org/) https://zkbench.dev (https://zkbench.dev/) Episode 251: Exploring RISC Zero with Brian Retford and Jeremy Bruestle (https://zeroknowledge.fm/251-2/) ZK9: Future ZK Emerging Use Cases and Key Enablers – Brian Retford (RISC Zero) (https://www.youtube.com/watch?v=MYYb5TXdm4c&pp=ygUNYnJpYW4gcmV0Zm9yZA%3D%3D) ZK Hack Lisbon: Creating Zero-Knowledge Proofs with RISC Zero (https://www.youtube.com/watch?v=saVD9qo3aJ0&list=PLcPzhUaCxlCgCvzkkaBWzVuHdBRsTNxj1&index=7) Applications are now open to attend zkHack Istanbul - Nov 10-12! Apply here: https://www.zkistanbul.com/ (https://www.zkistanbul.com/) Launching soon, Namada (https://namada.net/) is a proof-of-stake L1 blockchain focused on multichain, asset-agnostic privacy, via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC, and with Ethereum using a trust-minimised bridge. 
Follow Namada on Twitter @namada (https://twitter.com/namada) for more information and join the community on Discord discord.gg/namada (http://discord.gg/namada) If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on YouTube (https://zeroknowledge.fm/)

The WAN Show Podcast
Unity? More Like Divorce - WAN Show September 15, 2023

The WAN Show Podcast

Play Episode Listen Later Sep 18, 2023 220:06


Get hooked up with the latest and greatest audio gear at https://lmg.gg/Sweetwater Add a little fun and personality to your printed products! Check out VistaPrint at https://lmg.gg/vistaprint Enable your creative side! Check out Moment at https://lmg.gg/ShopMoment Timestamps (Courtesy of NoKi1119) Note: Timing may be off due to sponsor change: 0:00 Chapters 1:05 Intro 1:32 Topic #1 - Unity's runtime fee angers all 3:45 Fraud detection, silently deleted clause & TOS 5:11 Unity's income, CEO selling stocks before changes 6:38 Unity's response, Linus on hiding it 10:24 Luke mentions Mega Crit's tweet, Linus on "premeditated" 14:38 Luke recalls "Pay to Reload," Linus advocates for Unity 17:45 UE4 & SOLIDWORKS licenses, Godot, what should Unity do? 25:14 Linus on changing terms of subscription 30:06 Topic #2 - Plex blocks access to Hetzner 31:47 Explaining Plex, its usage, reason behind the block 34:49 Plex's sharing & premium feature, Jellyfin & Emby 39:36 Liking products V.S. working with sponsor, Luke on eufy babycam 42:08 "Plex is a company with a liability," "WAN VPN" 43:43 What game companies are doing things right? ft. Kitty bread 44:44 Jake's Pirate Party of Canada comment 48:18 Merch Messages #1 49:17 Linus's autonomous lawn mower update 53:03 Entered mall competition, received details of every entry, who to report this to? 56:02 Guess the purpose of this Wish product! ft. Jessica 58:04 Rules of the bit 58:41 Product #1 1:02:22 Product #2 1:05:20 Product #3 1:10:38 Product #4 1:12:36 Product #5 1:25:19 Sponsors 1:36:00 Scrapyard Wars 9 1:39:31 Merch Messages #2 1:39:42 Are you planning to shave your beard? 1:40:51 What if Apple invested 5% in Valve? ft. Games discussion 2:02:11 Will there be a console with an upgradeable graphics card? 
2:03:58 LTTStore's new reversible bomber jacket 2:06:50 LTTStore's new Merino T-shirt 2:09:12 LTT retro screwdriver newsletter 2:09:55 Send over favorite garment & your review on it 2:10:34 Topic #3 - California's right to repair bill 2:13:43 Topic #4 - Destiny 2 cheater barred from playing games 2:24:02 How many videos were shot after the break? 2:28:55 Answer, FP poll's result 2:31:56 Topic #5 - Pitstoptech's handheld Framework DIY project 2:34:16 Specs, Linus on ROG Ally's repairability 2:37:02 Topic #6 - Meta to allow cross-apps messaging 2:38:51 Topic #7 - Apple's iPhone 15 has Type-C 2:40:28 Micro-B & Mini-B, discussing Lightning 2:47:30 Linus on being stuck with Type-C 2:51:58 Topic #8 - Intel announces Thunderbolt 5 2:54:13 Topic #9 - Twitter (X) monetization pays LTT 2:55:05 Topic #10 - MS ends Surface Duo's support 2:58:59 Merch Messages #3 ft. WAN Show After Dark 2:59:16 How hard was it to set up LTTStore's desk configurator? 3:01:02 Your take on the potential requirement of battery replacement? 3:03:15 Has LTT considered physical copies of their content? 3:04:16 What do you think of kids using devices during school days? 3:12:38 Timeline for serious RISC adoption for gaming? 3:14:57 Opinion on Asus charging $750 to replace an $800 monitor's LCD? 3:16:41 What is the point you decide to move on from tech? 3:18:40 What made you decide to make “Working for Linus” videos? 3:21:00 Given YouTube's algorithm, are you getting back to daily uploads? 3:22:05 Is the swacket coming back? 3:22:33 Which subscriber makes the most revenue for you - YT, YT Premium or FP? 3:24:07 History of hiring Riley, impact on LMG if you didn't? 3:24:42 What content would you do on an experimental channel? 3:26:56 Will Linus be upgrading his Framework 13 to Ryzen? 3:27:37 Thoughts on AYANEO KUN? 3:28:38 Why have you dropped the Amazon store? 3:32:06 US's large lithium deposit, will we see lithium products getting cheaper? 
3:34:49 Did the CVO idea come organically or was it borrowed from Simon Sinek's book? 3:36:50 Are you considering bulk ordering Framework 16 for LMG? 3:41:25 Outro