Podcasts about 128k

  • 92 PODCASTS
  • 274 EPISODES
  • 2h AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 29, 2025 LATEST

POPULARITY

(Popularity chart: 2017–2024)


Best podcasts about 128k

Latest podcast episodes about 128k

Fantasy Golf Degenerates
Pulling Teeth for Outrights | THE 2025 CJ CUP BYRON NELSON

Apr 29, 2025 · 110:59


CJ CUP BYRON NELSON 2025, Fantasy Golf Picks & Bets | Fantasy Golf Degenerates

Join Kenny Kim and Byron Lindeque as they dive into the 2025 CJ Cup Byron Nelson at TPC Craig Ranch. Get insider previews of the course, expert analysis of the odds, and exclusive fantasy golf picks and best bets from the "Fantasy Golf Degenerates" podcast. Tune in for a deep dive into this week's PGA Tour action!

Episode "413" | Pulling Teeth for Outrights

#CJCup #ByronNelson #CJCupByronNelson #TPCCraigRanch #FantasyGolf #PGATour

Sub to the Mayo Media Network: https://bit.ly/YTMMN
Use code "FGDegenerates" for 1.5% cashback up to $200 at ProphetX today: www.ProphetX.co/register
Get 20% off at https://www.fantasynational.com/FGD
Use code "FGD15" at checkout at https://kickbackgolf.com for 15% off your order.
Use code "FGD50" for your copy of all of Byron's tools here: https://www.patreon.com/TheModelManiac

SHOW INDEX
Intro - 0:00
Recap - 1:07
Byron's Story Time - 10:50
LIV Golf Recap - 16:34
The Chevron Championship - 19:11
ProphetX Picks - 32:16
Kenny's Story Time - 41:42
Course Preview - 47:25
DFS Strategy - 57:24
Kick Back Golf Bets - 1:02:47
Tiers: 10K - 1:16:46
9K - 1:24:12
8K - 1:27:40
7K - 1:31:53
Hustler Play OTW - 1:33:20
Back to the 7K - 1:34:16
6K - 1:41:13
Outro - 1:48:55

Video: https://bit.ly/YTMMN
Apple: https://bit.ly/FGDApple
Spotify: https://bit.ly/FGDSpotify
Google: https://bit.ly/FGDGoogle
Stitcher: https://bit.ly/FGDStitch

Kenny Kim Twitter: https://twitter.com/KendoVT
Byron Lindeque Twitter: https://twitter.com/TheModelManiac
Fantasy Golf Degenerates Twitter: https://twitter.com/FGDegenerates
Produced by Mike Baxter: https://twitter.com/MikeTookThat

El Mundo del Spectrum Podcast
13x05 The USSR on the Spectrum - José Ignacio Murria - El Mundo del Spectrum Podcast

Apr 1, 2025 · 162:49


El Mundo del Spectrum Podcast returns with a very original program dedicated to games with USSR themes, which we also use to talk about the era when we lived through the Cold War and the fall of the Berlin Wall. We interview José Ignacio Murria, creator of World Destruction, a game published by Ventamatic. With him we talk about his work and the adventures of publishing a game back then. In the news section we cover several topics, among them documentaries, highlighting the physical Blu-ray release of The Rubber-Eyed Wonder. There is also a mention of the new Saboteur, titled "Saboteur: The Porto Problem," which is being developed by Clive Townsend himself. And of course we talk about Don Priestley, a master of the Spectrum who recently passed away. Don't miss this exciting program featuring Juan Francisco Torres (yes, he's back!), Jesús Martínez del Vas, Alejandro Ibáñez, and, for the interview, Jesús Relinque "Pedja". The program is dedicated to the memory of Nazaret.

Tapping Into Crypto
Why Greeny Bought a Crypto Punk in 2025 - Plus the Top 3 Altcoins He's Holding Right Now

Mar 27, 2025 · 27:07


98% of meme coins are scams… but the other 2% have been making millions. Greeny returns to dive deep into meme coin mania, his 100k airdrop, and the future of altcoins. He's also come to share a little secret… the NFT he spent a house deposit on! From Solana utility plays to rumors of GTA 6 crypto integration, the boys jump into all this week's biggest stories. With insights on macro trends, low-cap gems, and a bear market survival guide that will give you some teeth, this is a must-listen for traders and holders alike.

You'll hear:
- How the Trump and Melania meme coins tanked the crypto market
- Greeny's surprising pivot from meme coins to utility plays
- Why Bitcoin's dominance is crushing altcoins (and when that might change)
- How to spot meme coins with staying power
- Why Greeny bet $140K on an NFT
- The truth about Pudgy Penguins and Bored Apes in the market today
- The wild coin that turned 128K into 20M overnight
- Greeny's top macro indicators for timing the next crypto boom
… and much more!

Follow Greeny over on X @greenytrades or join his trading community @greenysgroup. Check out Greeny's YouTube channel here. Want to see what we're looking at every episode? Watch the YouTube version of the podcast here. Keen to join in TIC Tipping? Reset your demo mode and let us know your picks on Instagram @tappingintocrypto or X @tappingintocrypto. Ready to start? Get $10 of FREE Bitcoin on Swyftx when you sign up and verify: https://trade.swyftx.com.au/register/?promoRef=tappingintocrypto10btc

The Making Of
Eddie AI's Co-Founder on Their Revolutionary Solution, Evolving Post Workflows, & More

Mar 6, 2025 · 28:06


In this episode, we welcome Shamir Allibhai. Shamir is the co-founder of Eddie AI, a revolutionary new AI-powered software for filmmakers, editors, and content creators. In our chat, he shares about his early days, his career working in production and post, and the creation of his company, Eddie AI. He also deep dives on the solutions it provides, along with other insights on the industry and production workflows.

"The Making Of" is presented by AJA:

How Cromorama solves HDR production challenges with AJA ColorBox
Cromorama is transforming HDR workflows for live production across the globe, using AJA ColorBox and its integrated ORION-CONVERT pipeline to power SDR/HDR transforms, quality control checks, and more for high-stakes productions like the UEFA EURO 2024 Championship. Find out how in this interview with Cromorama CEO and CTO Pablo Garcia here.

Igelkott Studios: Redefining Driving Plates
Say goodbye to the limitations of array rig plates. Igelkott's precision-crafted single-lens driving plates deliver perfect parallax, seamless stitching, and true-to-life depth, with no mismatched angles or post headaches. The choice of top filmmakers for flawless in-camera realism. Experience the future of driving plates at www.igelkottplates.com

Introducing Atomos Sun Dragon: A Rope Light Made for Filmmakers
The world's first full sun-spectrum rope light, Sun Dragon offers creatives more options. It's uniquely flexible, so it fits into places other lights can't. You can wrap it around objects for creative highlighting and special, colour-controllable effects, including dramatic underlighting. The world's first sun-spectrum, HDR, waterproof, DMX-controlled, 2000-lumen, 5-color LED, mount-anywhere, lightweight, flexible production and cinema rope light. Learn more here.

Explore the OWC Jellyfish Nomad:
Discover how the OWC Jellyfish Nomad turned a desolate location in the Utah Salt Flats into a fully equipped, mobile production studio. This compact, powerful device allows video professionals to manage, share, and collaborate on high-resolution projects in remote environments. Click through to see how you can streamline your workflow, no matter where your next shoot takes you! Read here.

ZEISS Introduces the Otus ML:
The ZEISS Otus ML lenses are crafted for photographers who live to tell stories. Inspired by the legendary ZEISS Otus family, the new lenses bring ZEISS' renowned optical excellence, combined with precise mechanics, to mirrorless system cameras. Thanks to the distinctive ZEISS Look of true color, outstanding sharpness, and the iconic "3D-Pop" of micro-contrast, your story will come to life exactly as you envisioned. A wide f1.4 aperture provides shallow depth of field, directing attention to your focus area with a soft bokeh that elegantly separates subjects from the background. The aspherical design effectively minimizes distortion and chromatic aberrations, and the ZEISS T* coating reduces reflections within the lens, minimizing flare while enhancing image contrast and color fidelity. Learn more here.

Podcast Rewind: Feb 2025 - Ep. 69…

"The Making Of" is published by Michael Valinsky.

To advertise your products or services to the 128K filmmakers, video pros, TV, broadcast, and live event production pros, and photographers reading this newsletter, email us at mvalinsky@me.com

Get full access to The Making Of at themakingof.substack.com/subscribe

Recalog
204. 2025/03/02 US lunar lander makes successful soft landing

Mar 2, 2025


We talked about the following topics.

01. Astronaut Soichi Noguchi tries out an ISS simulator
Astronaut Soichi Noguchi shared his impressions in a video of playing "ISS Simulator," a game developed by SpaceData Inc. with NASA's cooperation. The ISS, the International Space Station jointly operated by 16 countries, was launched in 1998 and is still in operation. The game recreates the ISS environment using real data such as temperature and airflow. The video covers the difficulty of operating "Int-Ball," a free-flying spherical robot, the flow of air in zero gravity, the design of the robotic-arm control panel, cable-routing problems, and the station's exercise equipment and toilet. Noguchi praised the simulator's high technical fidelity, but said that as a game it could afford to be even more "space-like" than reality, suggesting that collaboration with creators could make it more appealing. He also mentioned being drawn to open-world games' more-real-than-life renditions of actual places.

02. OpenAI announces the new language model GPT-4.5
OpenAI has released a new language model, GPT-4.5, positioned as its largest and most knowledgeable model to date. Its key advance is large-scale use of unsupervised learning, which gives it a broader "world model" and improves pattern recognition, association, and insight generation. Its emotional intelligence (EQ) has also improved, enabling warmer, more natural conversation. GPT-4.5 is available first to ChatGPT Pro subscribers and API developers, with a gradual rollout to other plans to follow. The ChatGPT web version adds web search, file and image uploads, and the Canvas feature. On safety, the model was trained with a combination of conventional supervised learning and reinforcement learning, and its hallucination rate has been reduced. OpenAI positions GPT-4.5 as its last non-reasoning model; it aims for an experience in which users no longer need to think about which model they are using, and has indicated plans to unify the o-series reasoning models with the GPT series.

03. Anthropic announces the high-performance AI "Claude 3.7 Sonnet"
Claude 3.7 Sonnet, announced by Anthropic, is a "hybrid reasoning model" whose defining feature is combining fast responses and deep thinking in a single system. Depending on the situation, users can switch between a standard mode that answers quickly and an extended thinking mode that reasons step by step through complex problems; the extended mode can also make the AI's thought process visible. Especially notable are the gains in coding and front-end development: the model posts state-of-the-art results on software-development benchmarks, a major step up in practical utility. "Claude Code," announced at the same time, is a command-line tool for developers that supports code search and editing, testing, and GitHub integration directly from the terminal. The model also handles long inputs of up to 128K tokens, enabling understanding and generation of longer, more complex text. Safety has improved as well, with a 45% gain in accuracy at identifying harmful requests. Claude 3.7 Sonnet is expected to see use across a wide range of fields as a model that significantly advances the practicality and flexibility of AI.

04. Mercury Coder, an AI language model up to 10x faster, arrives
AI company Inception has released "Mercury Coder," a large language model capable of generating text up to 10 times faster than conventional models. Mercury Coder is a diffusion-style language model that takes a new approach, generating code by drawing words out of noise. Its main features:

- Speed: generates up to 10,000 tokens per second on existing NVIDIA hardware.
- Performance: on par with small frontier models such as Gemini 2.0 Flash-Lite and GPT-4o mini.
- Parallel generation: processes everything at once instead of generating tokens left to right.
- Multimodal potential: expected to combine with video and image generation in the future.
- Coding ability: can handle complex code-generation tasks.

Mercury Coder is currently free to try, with a limit of 10 requests per hour. This new architecture could drive innovation especially in areas that demand very fast inference.

05. The private US lunar lander "Blue Ghost" makes a successful soft landing
"Blue Ghost," a lunar lander built by the American company Firefly Aerospace, soft-landed on the Moon at around 17:35 on March 2, 2025, the second successful lunar soft landing by a private company. Blue Ghost launched on a SpaceX Falcon 9 on January 15, 2025, and touched down near Mons Latreille in Mare Crisium. The mission was flown as part of NASA's Commercial Lunar Payload Services (CLPS) program. Its ten payloads include a heat-flow probe that can measure down to 10 feet below the lunar surface and a receiver to demonstrate whether Global Navigation Satellite System (GNSS) signals can be used in the lunar environment. The landing site had just reached lunar sunrise, with sunset expected on March 16, so the mission is planned to run for about two weeks. Firefly Aerospace described it as a "fully successful lunar landing," apparently mindful of the earlier private lander Odysseus, which came to rest tipped over.

06. YAOKI, a palm-sized lunar rover, in development
YAOKI is an ultra-small, ultra-light, high-strength lunar rover built for the front lines of lunar development. Its main features and goals:

Features:
- Ultra-small: at 15 x 15 x 10 cm, it fits in the palm of your hand
- Ultra-light: only 498 g
- High-strength: withstands 100 G impacts, allowing it to be thrown into caves for exploration
- Reliable mobility: designed to keep driving even after tipping over

Goals:
- Private-sector lunar exploration: participating as a Japanese company in NASA's CLPS lunar-transport missions
- Contributing to lunar development alongside the Artemis program: aiming to contribute in mobility systems from around 2025
- Supporting lunar-base construction: assisting construction starting around 2028, toward a future in which many YAOKIs work on the Moon

YAOKI is designed to be delivered to the Moon at low cost, with the aim of realizing private-sector lunar exploration and steadily advancing lunar development. The vision is for large numbers of YAOKIs to be active on the Moon.

This show reflects personal opinions only and does not represent any real-world organization. Thank you for your understanding.

El Mundo del Spectrum Podcast
13x04 OCEAN part 2 - Jesús Alonso (Microhobby) - El Mundo del Spectrum Podcast

Feb 6, 2025 · 228:35


OCEAN was the most influential company, with the best catalogue for the Spectrum. Today, 6 years later, we tackle the second part of this special, analyzing the company and its titles from 1996 onward. We are also joined by Jesús Alonso, a member of Microhobby and creator of that magazine's famous machine-code manual. Alongside him is José Manuel Claros, one of the first contributors to this house back in the 1990s and current head of El Trastero del Spectrum. In the news section we review the most interesting stories and, of course, continue covering the latest on The Spectrum. With Jesús Martínez del Vas, Jesús Relinque (Pedja), and Alejandro Ibáñez. Almost 4 hours of eighties-spirited entertainment to disconnect from the daily noise and routine.

Recalog
202. 2025/02/02 Quasi-zenith satellite Michibiki No. 6 launched successfully

Feb 2, 2025


We talked about the following topics.

01. DeepSeek releases low-cost, high-performance reasoning models
The reasoning models "DeepSeek-R1-Zero" and "DeepSeek-R1," released by DeepSeek on January 20, 2025, have significantly changed how the industry views AI development. The models are open source under the MIT license, and their training cost is reported to be roughly 3% that of OpenAI's reasoning model "o1." R1 is based on "DeepSeek-V3-Base," an AI model using a Mixture of Experts (MoE) architecture, with 671 billion total parameters and a 128K context length. It has been shown to produce high-quality results particularly in coding, math, and logic. DeepSeek's innovation is a breakthrough in AI training: where conventional large language models used reinforcement learning from human feedback (RLHF), R1-Zero dropped the human feedback and was trained almost entirely with reinforcement learning (RL) alone. This new approach greatly improved the model's performance and efficiency. DeepSeek's arrival has had a major impact on the AI industry and even on stock markets; its low cost and high performance point to new possibilities in AI development, raising high expectations for the evolution of AI technology.

02. Quasi-zenith satellite Michibiki No. 6 launched successfully
On February 2, 2025, the Japan Aerospace Exploration Agency (JAXA) and Mitsubishi Heavy Industries launched the fifth H3, Japan's flagship rocket, from the Tanegashima Space Center and successfully placed "Michibiki" No. 6, a satellite of Japan's GPS-equivalent system, into orbit. This marks four consecutive successful H3 launches since vehicle No. 2. After liftoff, the SRB-3 solid rocket boosters and the first and second stages separated as planned, and about 29 minutes later Michibiki No. 6 was inserted into geostationary transfer orbit. The satellite is part of the Michibiki quasi-zenith satellite system, which serves as social infrastructure providing position and time information. No. 6 is the first of three additional satellites that will expand the current four-satellite constellation to seven. With seven satellites, four or more will always be above Japan, enabling sustained positioning with Michibiki alone, independent of other countries' systems. A new high-precision positioning system will also be demonstrated. Eventually an expansion to eleven satellites is planned for added redundancy, so that positioning can continue even if one satellite fails. This success is a major contribution to Japan's space technology and the independence of its positioning system.

03. Tesla car mistaken for an asteroid, rediscovered in space
On January 2, 2025, the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics announced the discovery of a new asteroid, "2018 CN41." The object was registered as a near-Earth object, but it was later found to be not an asteroid but Elon Musk's Tesla Roadster, and the registration was withdrawn. The "asteroid" was found using software developed by an amateur astronomer in Turkey. Astronomers computed its orbit and thought it might approach Earth, but on viewing the orbit in 3D they noticed it resembled the trajectory of a spacecraft headed for Mars. Investigation revealed the object to be the Tesla Roadster launched on SpaceX's Falcon Heavy in February 2018. The Roadster carries "Starman," a mannequin in a spacesuit, and was originally aimed toward Mars, but it accelerated more than expected and ended up reaching the asteroid belt. The episode highlighted problems of tracking and transparency for spacecraft operating beyond Earth orbit. Experts warn that as untracked objects multiply, they could interfere with efforts to protect Earth from hazardous asteroids and with asteroid research.

This show reflects personal opinions only and does not represent any real-world organization. Thank you for your understanding.

Amigos: Everything Amiga Podcast
Tiny Dungeons - a modern game for the ZX Spectrum 128k! It's Our Sinclair 116

Jan 13, 2025 · 41:59


Our Sinclair is BACK and to kick off the new year, our game selection committee decided to check out some new-ish fare on the ZX Spectrum 128k! Join THE BRENT and Amigo Aaron as we explore the deepest, darkest corridors of the TINY DUNGEONS! It's Our Sinclair 116! Purchase Tiny Dungeons at: https://retrosouls.itch.io/tiny-dungeons

Our Sinclair: A ZX Spectrum Podcast
Tiny Dungeons - a modern game for the ZX Spectrum 128k! It's Our Sinclair 116

Jan 13, 2025 · 41:59


Our Sinclair is BACK and to kick off the new year, our game selection committee decided to check out some new-ish fare on the ZX Spectrum 128k! Join THE BRENT and Amigo Aaron as we explore the deepest, darkest corridors of the TINY DUNGEONS! It's Our Sinclair 116! Purchase Tiny Dungeons at: https://retrosouls.itch.io/tiny-dungeons

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June. Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!

Full YouTube Episode with Slides/Charts. Like and subscribe and hit that bell to get notifs!

Timestamps
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions

Transcript

[00:00:00] Welcome to the 100th Episode![00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.[00:00:19] Alessio: Yeah, I know.[00:00:19] Reflecting on the Journey[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round when we first started that we didn't like, and we tried to change the question.
The answer[00:00:32] swyx: was Cursor and Perplexity.[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had like Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.[00:00:47] AI Engineering: The Rise and Impact[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of gave people somewhere to congregate, and then the AI engineer summit.[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for like the AI engineering industry as a whole, which is almost like, like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth or did you expect that would take longer for like the AI engineer thing to kind of like become, you know, everybody talks about it today.[00:01:32] swyx: So, the sign of that, that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, you know, how quickly it could happen, and obviously there's a chance that I could be wrong.[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign.
But there's enough people that have defined it, you know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big letters. So I think it's like kind of arrived as a meaningful and useful definition.[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there because most of the successful definitions are necessarily underspecified and it's actually useful to have different perspectives and you don't have to specify everything from the outset.[00:02:45] Alessio: Yeah, I was at, um, AWS reInvent and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot was like, there are like hundreds of people just in line to go in.[00:02:56] Alessio: I think that's kind of what enabled people, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on the Substack.[00:03:11] Alessio: But yeah, it's been a heck of a heck of a two years.[00:03:14] swyx: Yeah.[00:03:15] Latent Space Live and AI Conferences[00:03:15] swyx: You know, I was, I was trying to view the conference as like, so NeurIPS is I think like 16, 17,000 people. And the Latent Space Live event that we held there was 950 signups, I think. The AI world, the ML world is still very much research heavy.
And that's as it should be because ML is very much in a research phase.[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status because at the end of the day, you're manipulating APIs or whatever.[00:03:51] swyx: But yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is a ML research conference and I'm like, well, yeah, I mean, we're not going to, to like, understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited because they're like, oh, you guys are actually caring about how this goes into production and that's what they really really want. The measure of success is previously just peer review, right? Getting 7s and 8s on their, um, academic review conferences and stuff like citations is one metric, but money is a better metric.[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2200 people on the live stream or something like that. Yeah, yeah, 2200 on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan.
Yeah, that it was in the chat on YouTube.[00:05:06] swyx: I would say that I actually also created[00:05:09] swyx: Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info is not great because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like NeurIPS had a, uh, I think ICML had a like a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward.
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But he was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre training has hit a wall. And data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in the bitter lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.[00:07:34] swyx: He said that, um, pre-training has run into a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown are all saying variations of the same thing, that we have hit some kind of wall in the status quo of what pre trained, scaling large pre trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing for people is, some are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense because test time, calling it test, meaning, has a very pre trained bias, meaning that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that.
OpenAI seems to have adopted, or the community seems to have adopted this terminology of ITC instead of TTC. And that, that makes a lot of sense because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who recovered or reviewed the Chinchilla paper.[00:08:31] swyx: Chinchilla paper is compute optimal training, but what is not stated in there is it's pre trained compute optimal training. And once you start caring about inference, compute optimal training, you have a different scaling law. And in a way that we did not know last year.[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need.[00:08:49] Alessio: Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still. So, so,[00:08:56] swyx: so he, obviously everything is nuanced and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides. 
And I've been, I've been looking for a sort of end of year recaps from, from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO in 2023 on LMSys used to be 1200 for LMSys ELOs. And now everyone is at least at, uh, 1275 in their ELOs, and this is across Gemini, ChatGPT, [00:10:00] Grok, 01.[00:10:01] swyx: ai, with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier zero frontier. And then there's like a tier one. It's like, I wish I had everything else. Tier zero is extremely competitive. It's effectively now a three horse race between Gemini, uh, Anthropic and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp: estimates of OpenAI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic, uh, very, very aggressively. The more recent one is Gemini.
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you, how you count.[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2022, 2023, going into 2024, OpenAI has gone from 95 percent market share to, yeah, reasonably somewhere between 50 to 75 percent market share.[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model?[00:12:32] Alessio: If it's API, because I think it's all credit card spend. Well, but it's all, the credit card doesn't say. Maybe the, maybe when they do expenses, they upload the PDF, but yeah, the, the Gemini one I think makes sense. I think that was one of my main 2024 takeaways, that like the best small model companies are the large labs, which is not something I would have thought, that the open source kind of like long tail would be like the small model.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like so small model here for Gemini is 8B, [00:13:00] right? Uh, mini.
We don't know what the small model size is, but yeah, it's probably in the double digits or maybe single digits, but probably double digits. The open source community has kind of focused on the 1B to 3B size.[00:13:11] swyx: Mm-hmm. Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and if that is small for you, then that's great. It makes sense that we have a range for small now, which is like maybe 1B to 5B. Yeah. I'll even put that at the high end. And so this includes Gemma from Gemini as well, but also includes the Apple Foundation models, which I think Apple Foundation is 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think in the past small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide that I mildly disagree with Sarah on. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant?
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So with Llama 3.[00:15:00] 3 launched, they only launched the 70B because they used 405B to distill the 70B.[00:15:03] swyx: So I don't know if like open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap and they should.[00:15:22] swyx: I really wish them success, but you cannot use a benchmark that is nearing 100 percent saturation and say, look, the distance between open source and closed source is narrowing. Of course it's going to narrow because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference time compute is bad for open source just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. Because, like, it's not a static artifact that you're putting the compute into. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the AI meta guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama 4.
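To unpack the teacher-model distillation described above (405B as the teacher for the 70B), here is a toy sketch of the standard knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution. The temperature and the KL direction follow the textbook Hinton et al. convention, and are not anything Meta has disclosed about its actual Llama 3.3 recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened probability distribution over the vocabulary."""
    z = [x / temperature for x in logits]
    m = max(z)                          # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    The T^2 factor keeps gradient magnitudes comparable as T varies.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's prediction
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * temperature * temperature

# A student that exactly matches the teacher's logits has zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
```

The point of the soft targets is that the teacher's full distribution over tokens carries much more signal per example than a one-hot label, which is why a 70B student can recover most of a 405B teacher's quality.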
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than 6.6 billion dollars for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically like, it's very convenient that we're not getting GPT-5, which would have been a larger pre-train that would have needed a lot of upfront money. And [00:17:00] instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there because you can directly attribute it to like, oh, you're using this more.[00:17:12] swyx: Therefore you pay more of the cost and I'll just slap a margin in there. So like that lets you control your gross margin and tie your inference spend accordingly. And it's just really interesting that this change in the inference paradigm has arrived exactly at the same time that the funding environment for pre-training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the Mixtral price fights, you know, and I think now it's almost like there's nowhere to go, like, you know, Gemini Flash is like basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think that the 2,000 one will come.[00:18:05] Alessio: Yeah, I know.
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for the Pro ChatGPT.[00:18:12] Alessio: Just to try. I just want to see, what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your job is at least AI content creator or VC or, you know, someone whose job it is to stay on top of things, you should already be spending like a thousand dollars a month on stuff.[00:18:28] swyx: And then obviously easy to spend, hard to use. You have to actually use it. The good thing is that actually Google lets you do a lot of stuff for free now. So like Deep Research, that they just launched, uses a ton of inference and it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I built a bunch of things once they had flows, because I liked the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy voice. Yeah, I think once [00:19:00] they get advanced voice mode like capability, today it's still like speech to text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for like reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've covered a lot of stuff. Uh, I, yeah, you know, I think we will go over the individual, uh, talks in a separate episode. Uh, I don't want to take too much time with, uh, this stuff, but suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all like the audience voting for what they wanted. And then I just invited the best people I could find in each area, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one.
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on like eight parts of what are the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is how can agents learn more about their environment, has been super interesting to us as well, just to think through, because, yeah, how do you put an agent in an enterprise where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that.[00:20:23] Alessio: So, yeah, there's no indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented. But people know them based on how they've been doing it. You know, so I think there's almost this like, you know, oh, institutional knowledge. Yeah, the boring word for it is kind of like business process extraction.[00:20:38] Alessio: Yeah yeah, I see. It's like, how do you actually understand how these things are done? I see. Um, and I think today the problem is that the agents most people are building are good at following instructions, but are not as good at extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing,[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the, in the things, but I think the main [00:21:00] focus was like, how do you use ML to optimize the systems instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like he's doing a lot of that with Featherless AI.[00:21:12] swyx: Everyone is.
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, it makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems and then systems feed back into ML. And I think this ties in with Loubna's talk[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data, where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 on the pre-training side, the post-training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he'd spend a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay. We'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like bonus for Latent Space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high frequency way to write stuff. Like that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since the blip. And we know what he saw in 2014, and we think we know what he sees in 2024.
He gave some hints, and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon, OpenAI [00:23:00] is publishing emails from Sam, like, his personal text messages to Shivon Zilis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg, basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think like that was like the whole idea of almost like the RL, and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like with RL, you can get very good at specific things, but then you can't really generalize as much. And I [00:24:00] think the language models are like the opposite, which is like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning, um, announcement too, and all of that. But yeah, I think like scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on like the pre-training is over thing that people love to tweet,
I think the point of the talk was like, everybody, we're scaling these chips, we're scaling the compute, but like the second ingredient, which is data, is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily that pre-training is over. It's kind of like what got us here won't get us there. In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil fuel of AI [00:25:00] analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI reinforcement fine-tuning is basically like, instead of fine-tuning on data, you fine-tune on a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's task driven. And I think people have tasks to do, they don't really have a lot of data. So I'm curious to see how that changes how many people fine-tune, because I think this is what people run into. It's like, oh, you can fine-tune Llama. And it's like, okay, where do I get the data to fine-tune it on, you know? So it's great that we're moving the thing. And then I really liked that he had this chart where, you know, the brain mass and the body mass thing is basically like, mammals scaled linearly by brain and body size, and then humans kind of broke off the slope. So it's almost like maybe the mammal slope is like the pre-training slope.[00:25:46] Alessio: And then the post-training slope is like the human one.[00:25:49] swyx: Yeah. I wonder what the, I mean, we'll know in 10 years, but I wonder what the y axis is for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here.
Yeah, and then he had, you know, what comes next, like agents, synthetic data, inference compute. I thought all of that was like that,[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, nice little list. Yeah, that was really[00:26:20] swyx: nice.[00:26:20] swyx: Uh, of, like, all the, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up. Because people are dying for, like, little guidance and visualizations. And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, general organizer, Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is Secret Collusion Among AI Agents: Multi-Agent Deception via Steganography. I tried to go to NeurIPS in order to find these kinds of papers, because, the real reason is, like, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors, but there's like something like 10,000 others.
All these papers out there, you know, are just people's work that they did during the year and failed to get attention for, for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was like all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, the hidden code sends a [00:28:00] different message than the surface text. But something I've always emphasized is that to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order. And it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding, GPT-2, Llama 2, Mixtral, GPT-3.5 have zero capability, and then suddenly GPT-4 does.[00:28:40] swyx: And this is the kind of Jason Wei type emergence properties that people kind of look for. I think what made this paper stand out as well, so he developed the benchmark for steganography collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on a decoding-encoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But Schelling point means like very, very low or almost no coordination. So for example, if the only message I give you is "meet me in New York" and nothing else, you would probably meet me at Grand Central Station.
Grand Central Station is a Schelling point.[00:29:16] swyx: And it's probably sometime during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's like one of the hardest things about NeurIPS. It's like the long tail. I[00:29:41] swyx: found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is all the grad students are working on. So like there was a datasets track, and then I was looking around like, I was like, you don't need a datasets track, because every paper is a datasets paper. And so datasets and benchmarks, they're kind of flip sides of the same thing. So yeah. Cool. Yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the big labs release the sort of big models, and people walk around and pick the ones that they like, and then they use them in their work. And that's kind of how it develops. I feel like, um, last year you had people like Haotian, who worked on LLaVA, which is: take Llama and add vision.[00:30:47] swyx: And then obviously actually I hired him and he added vision to Grok. Now he's the vision Grok guy.
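To make the paper's "first letter of every word" example concrete, here is a toy acrostic encoder and decoder. The lexicon is made up purely for illustration, and the actual schemes studied in the DeepMind paper are far more subtle than this:

```python
# Toy acrostic steganography: hide a message in the first letters of a cover text.
# This hypothetical lexicon maps each secret character to a cover word that
# starts with it; a real scheme would need fluent, context-appropriate text.
LEXICON = {
    "a": "all", "e": "every", "g": "good", "m": "meet",
    "n": "new", "o": "our", "t": "team", "y": "york",
}

def encode(secret: str) -> str:
    """Pick one cover word per secret character (first letter must match)."""
    return " ".join(LEXICON[ch] for ch in secret.lower())

def decode(cover_text: str) -> str:
    """Recover the hidden message: take the first letter of each word."""
    return "".join(word[0] for word in cover_text.split())

cover = encode("agent")
print(cover)          # all good every new team
print(decode(cover))  # agent
```

The decoding rule here (first letters, left to right) is exactly the kind of shared convention a Schelling point provides: two agents that never coordinated can still both land on it because it is the most obvious scheme.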
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was like the [00:31:00] Monarch Mixer, I think, was like the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of like an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that, or maybe it's just a reflection of how I have changed in terms of how I view papers.[00:31:29] swyx: So like, in my estimation, two of the best papers this year for datasets were DataComp and RefinedWeb or FineWeb. These are two actually industrially used papers that weren't really highlighted. I think DCLM got the spotlight, FineWeb didn't even get the spotlight. So like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the role of schedules. This is the Schedule-Free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. Four Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the Four Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left: journalists, writers, artists, anyone who owns IP basically. New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R.R. Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale[00:33:09] swyx: AI published a paper saying that synthetic data doesn't work. Surprise, surprise, Scale AI is the leading vendor of non-synthetic data. Only[00:33:17] Alessio: cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons that are laid out in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the reinforcement fine-tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine-tuning a smaller model from a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of like the Rephrasing the Web thing that Apple also did. But yeah, I think it'll be useful.
I think whether or not that gets us the big [00:34:00] next step, I think that's maybe like TBD, you know. I think people love talking about data because it's like a GPU-poor thing, you know. I think, uh, synthetic data is something that people can actually do, you know, so they feel more opinionated about it compared to, yeah, the optimizer stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club the STaR series of papers. So that's STaR, Q-STaR, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically it feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so like, it's very funny that basically all the data quality wars between, let's say, the music industry or like the newspaper publishing industry or the textbook industry and the big labs,[00:35:11] swyx: are all of the pre-training era. And then like the new era, like the reasoning era, like nobody has any problem with all the reasoning, especially because it's all like sort of math and science oriented with, with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using O1, and I would say for summarization and creative writing and instruction following, I think it's underrated. I started using O1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics.
You know, I can actually say, like, I think one of the O1 Pro demos,[00:35:46] swyx: all of these things that Noam was showing, was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, literally, not even token-level, character-level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so no surprises when I ask it to rhyme, uh, and to create song lyrics, it's going to do that very much better than previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of O1, but then, like, they're getting sued for using other publishers' data, you know, but then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes out. Yeah, I mean, OpenAI has[00:36:32] swyx: many ways to punish people without taking them to court. They already banned ByteDance for distilling their info. And so anyone caught distilling the chain of thought will be just disallowed to continue on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. Like you have to work very, very hard to get it to leak. And then even when it leaks the chain of thought, you don't know if it's [00:37:00] the real one. The bigger concern is actually that there's not that much IP hiding behind it, that Cosine, which we talked to on Dev Day, can just fine-tune [00:37:13] swyx: 4o to beat O1.
Claude Sonnet so far is beating O1 on coding tasks, without, at least O1-preview, without being a reasoning model. Same for Gemini Pro or Gemini 2.0. So like, how much is reasoning important? How much of a moat is there in all of this proprietary sort of training data that they've presumably accumulated?[00:37:34] swyx: Because even DeepSeek was able to do it. And they had, you know, two months notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be interesting, because there'll be a lot of noise from people who say they have inference time compute and actually don't, because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there's other people who actually do have very good chain of thought. And you will not see them on the same level as OpenAI, because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction working on these models that were open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market has kind of plummeted, because people don't want to be, they want to be GPU free. They don't even want to be GPU poor. They just want to be, you know, completely without them. Yeah. How do you think about this war?
You[00:38:52] swyx: can tell me about this, but like, I feel like the appetite for GPU-rich startups, like the, you know, the funding plan of "we will raise 60 million and [00:39:00] we'll give 50 of that to NVIDIA", that is gone, right? Like, no one's pitching that. This was literally the plan, the exact plan, of like four or five startups I can name, you know, this time last year. So yeah, GPU-rich startups gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think like the GPU ultra rich, the GPU ultra high net worth, is still going. So, um, now we're, you know, we had Leopold's essay on the trillion dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, xAI very famously, you know, Jensen Huang praising them for being best boy number one in spinning up a 100,000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that, because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: Like you don't even know what you're going to use it for. You just need it. And it makes sense, especially if we're going into more researchy territory than we are. So let's say 2020 to 2023 was [00:40:00] let's-scale-big-models territory, because we had GPT-3 in 2020 and we were like, okay, we'll go from 175B to 1.8T. And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not.
Like, like you want something else that, that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the old, the previous paradigm is still ramping up,[00:40:58] swyx: um,[00:40:58] swyx: uh, pretty aggressively.[00:40:59] swyx: So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed both Together and Fireworks and Replicate. Uh, we haven't done Anyscale yet. But I think Amazon, maybe kind of a sleeper one, Amazon, in a sense of like they, at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do these big reserved instance contracts and now they got to use their money. That's why so many startups[00:41:37] Alessio: get bought through the AWS marketplace so they can kind of bundle them together and prefer pricing.[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?[00:41:57] Alessio: Like, if you're GPU poor, you should just use the [00:42:00] cloud. Yes, you know, and I think there might be a future once we kind of like figure out what the size and shape of these models is where like the tiny box and these things come to fruition where like you can be GPU poor at home. 
But I think today is like, why are you working so hard to like get these models to run on like very small clusters where it's like, it's so cheap to run them.[00:42:21] Alessio: Yeah, yeah,[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, like probably the most deep tech thing they've done this year is DisTrO or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de-emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say a lot more on device. We are, I now have Apple Intelligence on my phone.[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multi modal.[00:43:25] Alessio: Yeah, the notification summaries are so and so in my experience.[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, they're still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool. 
And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab, or GPU cloud.[00:44:10] swyx: GPU cloud, it would be Suno. Suno, Ramp has rated as one of the top-ranked, fastest-growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on, on Modal, uh, who we've also talked to on, on the podcast.[00:44:31] swyx: The other one would be Bolt, straight Claude wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from the 8 million that we put in the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're like close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle, Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,[00:45:22] swyx: yeah.[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the going rate was like. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war. 
And this one, we never had text to video in the first version, which now is the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just came out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah. I[00:45:54] swyx: think it's generally available now, you can go to Sora.[00:45:56] swyx: com and try it. Yeah, they had[00:45:58] Alessio: the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? Video-01-Live?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from MiniMax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Hold me to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.[00:46:45] Alessio: Yeah, I mean,[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is multi modality war? Multi modality war is, do you specialize in a single modality, right? Or do you have a God Model that does all the modalities? So this is [00:47:00] definitely still going, in a sense of ElevenLabs, you know, now unicorn, Pika Labs doing well, they launched Pika 2.0[00:47:06] swyx: recently, HeyGen, I think has reached 100 million ARR, Assembly, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups. 
And then there's the big labs who are doing the sort of all-in-one play.[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of like mask here and then like infill there and inpaint there and all that, all that stuff. This is small model nonsense. Big model people are like, huh, we got you, it's everything in the transformer.[00:48:21] swyx: This is the multimodality war, which is, do you, do you bet on the God model or do you string together a whole bunch of, uh, small models like a, like a chump. Yeah,[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the motto is kind of like, maybe, you know, people say Black Forest, the Black Forest models are better than Midjourney on a pixel-by-pixel basis. But I think when you put it, put it together, have you tried[00:48:53] swyx: the same problems on Black Forest?[00:48:55] Alessio: Yes. 
But the problem is just like, you know, on Black Forest, it generates one image. And then it's like, you got to [00:49:00] regenerate. You don't have all these like UI things. Like what I do, no, but it's like a time issue, you know, it's like a, Midjourney.[00:49:06] swyx: Call the API four times.[00:49:08] Alessio: No, but then there's no like variate.[00:49:10] Alessio: Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah,[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out, uh, on top of the image arena that, uh, Artificial Analysis has done, has apparently taken Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them I think in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft v3 is now beating Flux 1.1. Which is very surprising [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own. 
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multi modality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Oh, at least the Hollywood labs will get full Sora. We haven't seen video to audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be sorted.[00:51:23] swyx: I would say that Gemini's approach compared to OpenAI, Gemini seems, or DeepMind's approach to video seems a lot more fully fledged than OpenAI. Because if you look at the ICML recap that I published that so far nobody has listened to, um, that people have listened to it. It's just a different, definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in, uh, so, so DeepMind has, is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] Diffusion Transformers last year, you know, when they hired, uh, Bill Peebles.[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while, one, they cherry-pick their videos. 
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched will do very well is because they have all this background work in video that they've done for years.[00:52:22] swyx: Like, like last year's NeurIPS, I already was interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG Ops, formerly known as[00:52:39] swyx: RAG Ops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember [00:53:00] publishing them separately. You say LangChain, LlamaIndex is still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it versus kind of hopping in between things versus, you know, for example, CrewAI, not really growing as much.[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some[00:53:4

El Mundo del Spectrum Podcast
Especial Navidad – The Spectrum – New Frontier Bit Managers - El Mundo del Spectrum Podcast 13×03


Play Episode Listen Later Dec 21, 2024 339:32


Here it is, as every year: our traditional CHRISTMAS SPECIAL. Juan Francisco Torres returns, and Javier Recuenco joins us on the podcast for the first time. Across its more than five and a half hours you can enjoy the best Christmas atmosphere, with Jesús Martínez del Vas, Jesús Relinque (Pedja), and Alejandro Ibáñez also on hand. The news section comes packed, opening with a very thorough review of the launch of The Spectrum and a debate ("Is The Spectrum better than the Spectrum Next?") and closing with an interview with Sergio Martín, the new editor of RetroGamer. The main topic is "80s homebrew". We'll talk about those home-made games that came on tapes, or as type-in listings in magazines and books. That will be the perfect excuse to discuss the work of the many programmers and graphic artists who started out that way; some went on to do it professionally, while others left it behind for good. In the second interview of this episode we'll have Isidro Gilabert and Alberto McAlby, members of New Frontier / Bit Managers. We'll talk about the work of one of the most important development groups our country has ever had. Do you like the menu? We hope you enjoy listening to the show as much as we enjoyed recording it. We wish you all a merry Christmas and a prosperous new year. We'll hear each other again in 2025.

El Mundo del Spectrum Podcast
13x02 Artistas Portadas – Ángel Tirado – Víctor Morilla – Ángel Codón - El Mundo del Spectrum Podcast


Play Episode Listen Later Nov 27, 2024 231:00


A new episode of El Mundo del Spectrum Podcast, this time focused on the art games wore on the outside: their cover art. A group of artists gave free rein to their creativity and immersed us in dreamlike, epic, or pure-action worlds that we still vividly remember today. We're joined by a returning guest, Ángel Codón (Codón es un máquina, Tiempo de Culto, and stand-up comedian), who knows a great deal about art, and as always by Jesús Martínez del Vas, Jesús Relinque (Pedja), and Alejandro Ibáñez. We'll also review the latest news on the Spectrum in particular and retro in general, with two interviews of great interest: Ángel Tirado (Capitán Sevilla, the documentary "No me gusta Capitán Morcilla") and Víctor Morilla (Thor). Ángel will give us an exclusive first look at the release of "Píxel a Píxel" (Takemoto), set to be the definitive documentary on the history of video games in Spain, to be broadcast by RTVE and Canal Sur. Víctor will tell us about his part in the golden age, as well as the projects he is currently involved in for the release of a new retro game. Don't miss these nearly four hours of the pure retro entertainment we still enjoy so much today.

El Mundo del Spectrum Podcast
Microdrive 011 - Vuelve MICROHOBBY y todo sobre PIXELS - El Mundo del Spectrum Podcast


Play Episode Listen Later Nov 8, 2024 37:59


Today we have a very special audio, because in addition to very interesting information about the new magazine PIXELS, we have a spectacular SCOOP: is MICROHOBBY coming back? Jesús Martínez del Vas interviews José Luis Sanz (MICROHOBBY / HOBBY CONSOLAS). Don't miss this podcast/video with top-notch content. You have two ways to listen: on YouTube, or on your favorite podcast platform.

El Mundo del Spectrum Podcast
13x01 Los Peores Juegos de Spectrum - Ricardo Cancho - The Spectrum - El Mundo del Spectrum Podcast


Play Episode Listen Later Oct 13, 2024 280:37


Season 13 kicks off with an episode packed with content, running a little over four and a half hours. Your favorite podcast returns, dense with information as always and accompanied by good eighties music. Pure entertainment on your favorite subject. We wanted to start the season with a few laughs, so the main topic is "The Worst Spectrum Games". And we certainly pulled it off. The catalogue of Sinclair's machine is serious business, but every now and then it doesn't hurt to poke fun at it by revisiting those unplayable titles we all have in mind. Remember to mention any we missed, for a possible second part. In this episode we interview Ricardo Cancho, legendary member of TOPO Soft, who will share countless details from that magnificent era. Given its length, we guarantee you will enjoy every minute of Ricardo's words. Of course, we'll also go over all the news, focusing on the launch of the new THE SPECTRUM, a machine that has left no one indifferent. Don't miss our return. Join us once more on El Mundo del Spectrum Podcast.

The top AI news from the past week, every ThursdAI

Hey folks, Alex here, back with another ThursdAI recap – and let me tell you, this week's episode was a whirlwind of open-source goodness, mind-bending inference techniques, and a whole lotta talk about talking AIs! We dove deep into the world of LLMs, from Alibaba's massive Qwen 2.5 drop to the quirky, real-time reactions of Moshi. We even got a sneak peek at Nous Research's ambitious new project, Forge, which promises to unlock some serious LLM potential. So grab your pumpkin spice latte (it's that time again isn't it?

The top AI news from the past week, every ThursdAI

Hey there, Alex here with an end-of-summer edition of our show, which did not disappoint. Today is the official anniversary of Stable Diffusion 1.4, can you believe it? It's the second week in a row that we have an exclusive LLM launch on the show (after Emozilla announced Hermes 3 on last week's show), and spoiler alert, we may have something cooking for next week as well! This edition of ThursdAI is brought to you by W&B Weave, our LLM observability toolkit, letting you evaluate LLMs for your own use-case easily. Also this week, we've covered both ends of AI progress, a doomerist CEO saying "F*ck Gen AI" vs an 8yo coder, and I continued to geek out on putting myself into memes (I promised I'll stop... at some point), so buckle up, let's take a look at another crazy week:

TL;DR
* Open Source LLMs
* AI21 releases Jamba 1.5 Large / Mini hybrid Mamba MoE (X, Blog, HF)
* Microsoft Phi 3.5 - 3 new models including MoE (X, HF)
* BFCL 2 - Berkeley Function Calling Leaderboard V2 (X, Blog, Leaderboard)
* NVIDIA - Mistral Nemo Minitron 8B - Distilled / Pruned from 12B (HF)
* Cohere paper proves - code improves intelligence (X, Paper)
* MOHAWK - transformer → Mamba distillation method (X, Paper, Blog)
* AI Art & Diffusion & 3D
* Ideogram launches v2 - new img diffusion king

Wisconsin in Focus
128K More Enrolled in Wisconsin Medicaid Four Years After COVID

Wisconsin in Focus

Play Episode Listen Later Aug 16, 2024 6:17


There are more than 100,000 more people enrolled in Medicaid in Wisconsin now than before COVID struck. Wisconsin Medicaid director Bill Hanna spoke at a Wisconsin Health News Newsmaker event Tuesday. He said while Wisconsin's Medicaid enrollments are down from their peak, more people are receiving government health care now than before the pandemic.Support this podcast: https://secure.anedot.com/franklin-news-foundation/ce052532-b1e4-41c4-945c-d7ce2f52c38a?source_code=xxxxxxFull story: https://www.thecentersquare.com/wisconsin/article_a37819cc-599f-11ef-a3bc-a751a5d747f9.html

Money Talk With Tiff
How to Teach Kids About Investing with Maya Corbic | Ep. 330

Money Talk With Tiff

Play Episode Listen Later Aug 8, 2024 15:56 Transcription Available


In this episode of Money Talk with Tiff, special guest Maya Corbic shares her insights on getting kids started with investing at a young age. Maya explains how parents can normalize money conversations with kids as young as 4 or 5 using simple concepts, then get them more involved around age 8 by explaining things like savings accounts, CDs, and stocks.Maya discusses her approach of having kids invest half their gift money, starting with individual stocks in companies they know and then moving up to ETFs and index funds. Her book "From Piggy Banks to Stocks" aims to explain investing basics in a 10-year-old-friendly way.Tune in to hear Maya's tips for raising financially savvy kids and her own journey learning to invest as a first-generation immigrant.About Our GuestFrom challenging beginnings in shelters and government housing, Maya Corbic is a first-generation immigrant and CPA who draws from her experience of overcoming financial challenges and simplifies money matters to inspire children to pursue financial success.Maya is the author of a kids' book, "From Piggy Banks to Stocks: The Ultimate Guide for a Young Investor," which simplifies investing concepts and equips children with essential investing skills while keeping them engaged.She founded the Wealthy Kids Investment Club and has a popular Instagram account @teach.kids.money with 128K+ subscribers, through which she inspires parents to raise financially independent kids.Connect with MayaGet the book From Piggy Banks to Stocks: The Ultimate Guide for a Young Investor (Amazon Link)Instagram: @teach.kids.moneyTwitter: @Educ8Money2KidsConnect with TiffanyWebsite: https://www.moneytalkwitht.comFacebook: Money Talk With TiffTwitter: @moneytalkwithtInstagram: @moneytalkwithtLinkedIn: Tiffany GrantYouTube: Money Talk With TiffPinterest: @moneytalkwithtTikTok: @moneytalkwithtTimestamps[00:00] Explaining investments to kids in simple terms.[04:52] Encouraging investment in stocks and diversified funds.[09:09] 
Investing in ETF gives broad US exposure.[12:11] Book made friendly for kids and adults.Key TakeawaysIntroducing kids to investing concepts earlyExplaining stocks as owning company sharesCertificates of deposit for guaranteed returnsInvesting in companies kids are familiar withETFs and index funds for diversification"From Piggy Banks to Stocks" book overviewSupport this PodcastCopyright 2024 Tiffany GrantThis podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy
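For readers who want to see the "guaranteed return" idea behind certificates of deposit in numbers, here is a tiny compound-interest sketch. The 4% rate, the $100 of gift money, and the 5-year term are made-up figures for illustration only, not advice or numbers from the episode:

```python
def cd_value(principal, annual_rate, years):
    """Value of a certificate of deposit with annual compounding.

    Unlike a stock, the rate is fixed up front, so the outcome
    is known on the day you buy the CD.
    """
    return principal * (1 + annual_rate) ** years

# $100 of gift money at a hypothetical 4% APY, held for 5 years
value = cd_value(100.0, 0.04, 5)
print(round(value, 2))  # → 121.67
```

The same function makes the tradeoff easy to show a child: plug in a lower guaranteed rate versus a higher but uncertain one and compare the results side by side.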

The top AI news from the past week, every ThursdAI

Holy s**t, folks! I was off for two weeks, last week OpenAI released GPT-4o-mini and everyone was in my mentions saying, Alex, how are you missing this?? and I'm so glad I missed that last week and not this one, because while GPT-4o-mini is incredible (GPT-4o level distill with incredible speed and almost 99% cost reduction from 2 years ago?) it's not open source. So welcome back to ThursdAI, and buckle up because we're diving into what might just be the craziest week in open-source AI since... well, ever!This week, we saw Meta drop LLAMA 3.1 405B like it's hot (including updated 70B and 8B), Mistral joining the party with their Large V2, and DeepSeek quietly updating their coder V2 to blow our minds. Oh, and did I mention Google DeepMind casually solving math Olympiad problems at silver level medal

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you see this in time, join our emergency LLM paper club on the Llama 3 paper! For everyone else, join our special AI in Action club on the Latent Space Discord for a special feature with the Cursor cofounders on Composer, their newest coding agent!

Today, Meta is officially releasing the largest and most capable open model to date, Llama3-405B, a dense transformer trained on 15T tokens that beats GPT-4 on all major benchmarks.

The 8B and 70B models from the April Llama 3 release have also received serious spec bumps, warranting the new label of Llama 3.1. If you are curious about the infra / hardware side, go check out our episode with Soumith Chintala, one of the AI infra leads at Meta. Today we have Thomas Scialom, who led Llama 2 and now Llama 3 post-training, so we spent most of our time on pre-training (synthetic data, data pipelines, scaling laws, etc) and post-training (RLHF vs instruction tuning, evals, tool calling).

Synthetic data is all you need

Llama 3 was trained on 15T tokens, 7x more than Llama 2 and with 4 times as much code and 30 different languages represented. But as Thomas beautifully put it: "My intuition is that the web is full of s**t in terms of text, and training on those tokens is a waste of compute." "Llama 3 post-training doesn't have any human written answers there basically… It's just leveraging pure synthetic data from Llama 2." While it is well speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected. 
The paper explicitly calls out:
* SFT for Code: 3 approaches for synthetic data for the 405B bootstrapping itself with code execution feedback, programming language translation, and docs backtranslation.
* SFT for Math: The Llama 3 paper credits the Let's Verify Step By Step authors, who we interviewed at ICLR.
* SFT for Multilinguality: "To collect higher quality human annotations in non-English languages, we train a multilingual expert by branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual tokens."
* SFT for Long Context: "It is largely impractical to get humans to annotate such examples due to the tedious and time-consuming nature of reading lengthy contexts, so we predominantly rely on synthetic data to fill this gap. We use earlier versions of Llama 3 to generate synthetic data based on the key long-context use-cases: (possibly multi-turn) question-answering, summarization for long documents, and reasoning over code repositories, and describe them in greater detail below"
* SFT for Tool Use: trained for Brave Search, Wolfram Alpha, and a Python Interpreter (a special new ipython role) for single, nested, parallel, and multiturn function calling.
* RLHF: DPO preference data was used extensively on Llama 2 generations. This is something we partially covered in RLHF 201: humans are often better at judging between two options (i.e. which of two poems they prefer) than creating one (writing one from scratch). Similarly, models might not be great at creating text but they can be good at classifying their quality.

Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation. Llama 2 was also used as a classifier for all pre-training data that went into the model. It both labelled it by quality so that bad tokens were removed, and also used type (i.e. science, law, politics) to achieve a balanced data mix. 
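The DPO mention above is compact, so here is a hedged sketch of the pairwise loss on a single preference pair (the standard DPO formulation, not code from the Llama 3 paper; the log-probability values are made up for illustration). The policy is pushed to widen its chosen-vs-rejected margin relative to a frozen reference model:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair.

    logp_w / logp_l    : policy log-prob of the chosen (w) and rejected (l) answer
    ref_logp_w / ref_logp_l : same quantities under the frozen reference model
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy favors the chosen answer more than the reference does,
# the loss falls below log(2); if it favors the rejected one, it rises above.
better = dpo_loss(logp_w=-1.0, logp_l=-5.0, ref_logp_w=-3.0, ref_logp_l=-3.0)
worse = dpo_loss(logp_w=-5.0, logp_l=-1.0, ref_logp_w=-3.0, ref_logp_l=-3.0)
```

This is exactly the "judging between two options" framing: the data is just (prompt, chosen, rejected) triples, with no reward model to train separately.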
Tokenizer size matters

The token vocabulary of a model is the collection of all tokens that the model uses. Llama2 had a 32,000-token vocab, GPT-4 has 100,000, and 4o went up to 200,000. Llama3 went up 4x to 128,000 tokens. You can find the GPT-4 vocab list on Github.

This is something that people gloss over, but there are many reasons why a large vocab matters:

* More tokens allow the model to represent more concepts, and thus be better at understanding nuance.
* The larger the tokenizer, the fewer tokens you need for the same amount of text, extending the perceived context size. In Llama3's case, that's ~30% more text due to the tokenizer upgrade.
* With the same amount of compute you can train more knowledge into the model, as you need fewer steps.

The smaller the model, the larger the impact the tokenizer size will have on it. You can listen at 55:24 for a deeper explanation.

Dense models = 1 Expert MoEs

Many people on X asked "why not MoE?", and Thomas' answer was pretty clever: dense models are just MoEs with 1 expert :)

[00:28:06]: I heard that question a lot, different aspects there. Why not MoE in the future? The other thing is, I think a dense model is just one specific variation of the model for a hyperparameter for an MoE with basically one expert. So it's just a hyperparameter we haven't optimized a lot yet, but we have some stuff ongoing, and that's a hyperparameter we'll explore in the future.

Basically… wait and see!

Llama4

Meta already started training Llama4 in June, and it sounds like one of the big focuses will be around agents. Thomas was one of the authors behind GAIA (listen to our interview with Thomas in our ICLR recap) and has been working on agent tooling for a while with things like Toolformer. Current models have "a gap of intelligence" when it comes to agentic workflows, as they are unable to plan without the user relying on prompting techniques and loops like ReAct, Chain of Thought, or frameworks like Autogen and Crew. That may be fixed soon?
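To make the tokenizer-compression point concrete, here is a toy sketch. The greedy longest-match loop is a stand-in for real BPE (which applies learned merge rules), and both vocabularies are made up for illustration; the point is only that a larger vocabulary covers the same text in fewer tokens:

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary.
    Single characters are always valid tokens, so any text tokenizes."""
    tokens = []
    i = 0
    while i < len(text):
        # try the longest candidate piece first, falling back to 1 char
        for size in range(min(len(text) - i, 8), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in vocab:
                tokens.append(piece)
                i += size
                break
    return tokens

small_vocab = {"th", "er", "in", "an"}                           # few merges
large_vocab = small_vocab | {"token", "izer", "the ", "larger "}  # more merges

text = "the larger tokenizer"
small = tokenize(text, small_vocab)   # 17 tokens
large = tokenize(text, large_vocab)   # 4 tokens
```

Both tokenizations reconstruct the original text exactly; the larger vocabulary just needs far fewer tokens to do it, which is why a 128K vocab stretches the same context window over more text.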

Explora Commodore Retrokiosko
Retrokiosko #49

Explora Commodore Retrokiosko

Play Episode Listen Later Jul 21, 2024 182:54


4th anniversary special! In this program we go over some current Commodore news and the Commodore releases of the past few weeks, and we recap the Explora Commodore event of 6/7/24. As in our recent anniversary programs, we play a Kahoot! with our live viewers. And this month, coinciding with the Paris 2024 Olympic Games, we review some Commodore games with an Olympic theme. All this with the usual team of David Asenjo (https://twitter.com/darro99), Toni Bianchetti (https://twitter.com/seuck), Narciso Quintana "Narcisound" (https://twitter.com/narcisound), Jonatan Jiménez (https://twitter.com/jsabreman) and Paco Herrera (https://twitter.com/pacoblog64). The news discussed: - Explora recap: https://www.flickr.com/photos/uoc_universitat/albums/72177720318664441 https://www.commodoreplus.org/2024/07/explora-commodore-8-rumsxplora.html https://www.pacoblog64.com/2024/07/cronica-del-explora-commodore-2024.html - New Kickstarter: Dare to Dream: Commodore and Amiga Today?, a new book by David Pleasence about what Commodore's history might have looked like had he acquired the rights in 1995: https://www.kickstarter.com/projects/daretodreamhardback/dare-to-dream-commodore-and-amiga-today - New internal 64K memory for the C16, and 64K and 128K memory boards in development for the Commodore PET, by Tynemouth Software: http://blog.tynemouthsoftware.co.uk/2024/06/new-commodore-16-internal-64k-ram-upgrade.html http://blog.tynemouthsoftware.co.uk/2024/07/commodore-pet-64k-128k-ram-boards.html - Lemon64/Amiga have been attacked: https://x.com/AmigaL0ve/status/1339014484216532992 https://www.lemon64.com/ - A mysterious Commodore board with multiple FPUs has been documented: https://drive.google.com/drive/folders/1mXz3g5n_1d63TWdjLuGXTeMr-4pYAdMM?usp=drive_link - Reproduction of the CMD SuperCPU128 MMU ready for sale: https://x.com/corei64/status/1811920147877363896?s=61 
- Update to C64 intros: https://intros.c64.org/ - Update to Games That Weren't 64: https://www.gamesthatwerent.com/gtw64 The new games and programs discussed are: - Castle Wolfenstein 3D (jimo9757, Commodore Pet (32KB RAM)): https://www.youtube.com/watch?v=WAPlSe8ueuU&feature=youtu.be - Galaga 500 (Jotd666, Amiga): https://jotd666.itch.io/galaga500 - Metrosiege (BitBeamCannon, Pixelglass, Amiga): https://x.com/dantemendes/status/1811782966697238801 - Aventura en la tumba Azteca (Aligata, C64): https://www.commodoreplus.org/2024/07/aventura-en-la-tumba-azteca.html https://commodore-plus.itch.io/aventura-en-la-tumba-azteca - Dr. Dangerous (HooGames2017, Amiga): https://hoogames2017.itch.io/dr-dangerous - Stick Man Arok Edition (Epy, Plus/4): https://plus4world.powweb.com/software/Stick_Man_Arok_Edition?s=09 - Crysis (lifeschool@lemonamiga, Amiga): https://lifeschool22.itch.io/crysis-amiga-os-interactive-demo - Ami Robbo 2 (Tukinem, Amiga): https://tukinem.itch.io/ami-robbo-2 - Koalamin (malcontent, C64): https://malcontentc64.itch.io/koalamin - PETSCII Wizard of Wor (Ko-Ko, C64): https://ko-ko74.itch.io/petscii-wizard-of-wor-c64-version https://ko-ko74.itch.io/petscii-wizard-of-wor-commodore-plus4-version https://ko-ko74.itch.io/wizard-of-wor-for-the-commodore-pet - Chopper Duel (izero79, Amiga): https://izero79.itch.io/chopperduel - Aira Force (Howprice, Amiga): https://howprice.itch.io/aira-force - Iowa Jack and the Crystals of Chaos (Rickyderocher, C128): https://rickyderocher.itch.io/iowa-jack-and-the-crystals-of-chaos-commodore-128 - 2112 (V3.2) (Roberto Sandri, C64): https://csdb.dk/release/?id=243934 - Quest For Two (Wil, C64): https://csdb.dk/release/?id=243990 - The Grid (LogicalByte, Amiga): https://logicalbyte.itch.io/the-grid - Sire Fire (Carmine Migliaccio (TSM), Plus/4): https://plus4world.powweb.com/software/Sire_Fire

Sospechosos Habituales
Solo128k E1 - Bienvenidos a solo 128k

Sospechosos Habituales

Play Episode Listen Later Jul 20, 2024 4:46


Hello everyone and welcome to solo128k, the retro podcast focused on mobile emulators, classic video games, 8- and 16-bit microcomputers, consoles, and the history of video games as told by a survivor of the '80s. Lived first-hand, nostalgia and memories take control of a thrilling journey to a world of pixels, tapes and tube TVs. Powered by a few megahertz, we cross from the real world into a virtual reality ruled by ancient gods, wizards, all kinds of characters, and final bosses forgotten and locked away in just a few Ks… solo 128ks!!

The Generative AI Meetup Podcast
For the love of spreadsheets!

The Generative AI Meetup Podcast

Play Episode Listen Later Jul 19, 2024 47:46


In this week's episode of the Generative AI Meetup Podcast, hosts Mark and Shashank dive into the latest advancements in generative AI. They kick off with a detailed exploration of Mistral Nemo, a new AI model that has set a benchmark with its unprecedented 128K context window. The discussion then shifts to an intriguing development at Microsoft, where a specialized LLM is being designed for optimizing spreadsheet functions. Join us to understand how these innovations are shaping the future of AI and why they matter to developers and businesses alike. Whether you're a seasoned AI enthusiast or just curious about the technology shaping our future, this episode is packed with insights you won't want to miss.

El Mundo del Spectrum Podcast
12x06 Dibujos Animados - Pablo Crespo - El Mundo del Spectrum Podcast

El Mundo del Spectrum Podcast

Play Episode Listen Later Jul 6, 2024 247:54


We close season 12 with this 4-hour program about games featuring cartoon characters, joined by a special guest, Viruete, who returns to the show to discuss this televisual subject. This time the interview is with an important figure in the distribution business of the era, Pablo Crespo, founder of Centro Mail. He tells us about a world we have rarely covered, from the Madrid street market of the '80s and the first Centro Mail shops up to the present day, where until recently he was the "top boss" of GAME. Aitor Chávez's "Colour Clash" section returns; from now on he will seek out and record testimonies from ordinary people, people like any of us, illustrating experiences many of us will identify with. This time: "A Spectrum without a cassette deck and a loading interruptus". Of course we also review all the news from the retro world in general and the Spectrum in particular, with an extensive look at the new issue of Retrogamer. With Jesús Martínez del Vas, Jesús Relinque "Pedja" and Alejandro Ibáñez. This is the menu we have prepared to close season number 12, intense and packed with experiences like few others, which we hope you have enjoyed listening to almost as much as we enjoyed making it. Happy summer, and we'll hear each other again when we return.

Papers Read on AI
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

Papers Read on AI

Play Episode Listen Later Jul 4, 2024 37:18


We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. 2024: DeepSeek-AI, Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao, Bing-Li Wang, Jun-Mei Song, Deli Chen, Xin Xie, Kang Guan, Yu-mei You, A. Liu, Qiushi Du, W. Gao, Xuan Lu, Qinyu Chen, Yaohui Wang, Chengqi Deng, Jiashi Li, Chenggang Zhao, Chong Ruan, Fuli Luo, W. Liang https://arxiv.org/pdf/2406.11931v1
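Context extension of the 16K-to-128K kind described in the abstract is typically done by rescaling the rotary position embeddings (RoPE) so that far positions map back into the rotation range the model saw during training; the DeepSeek papers report a YaRN-based approach, which is more elaborate than the simplest variant sketched here, base scaling with made-up numbers:

```python
def rope_angles(position, head_dim=64, base=10_000.0):
    """RoPE rotation angles for one position: each pair of dimensions
    in an attention head rotates at a geometrically decreasing frequency."""
    return [position / base ** (2 * i / head_dim)
            for i in range(head_dim // 2)]

# At position 128_000, raising the base shrinks every non-constant angle,
# pulling far positions back toward the rotation range the model saw
# during its original shorter-context training.
a_orig = rope_angles(128_000, base=10_000.0)
a_scaled = rope_angles(128_000, base=500_000.0)
```

The highest-frequency dimension (index 0) is unaffected by the base, while all lower-frequency dimensions rotate more slowly, which is also why methods like YaRN interpolate per-frequency rather than scaling everything uniformly.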

The top AI news from the past week, every ThursdAI

Hey, this is Alex. Don't you just love when assumptions about LLMs hitting a wall just get shattered left and right, and we get incredible new tools that leapfrog previous state of the art models we barely got used to from just a few months ago? I SURE DO! Today is one such day. This week was already busy enough, I had a whole 2-hour show packed with releases, and then Anthropic decided to give me a reason to use the #breakingNews button (the one that plays the breaking-news sound on the live show, you should join next time!) and announced Claude Sonnet 3.5, which is their best model, beating Opus while being 2x faster and 5x cheaper! (Also beating GPT-4o and Turbo, so... new king! For how long? ¯\_(ツ)_/¯) Critics are already raving, it's been half a day and they are raving! Ok, let's get to the TL;DR and then dive into Claude 3.5 and a few other incredible things that happened this week in AI!

MacVoices Video
MacVoices #24150: Road to Macstock - Wally Cherwinski and the Macstock Film Festival

MacVoices Video

Play Episode Listen Later Jun 12, 2024 28:19


The biggest event inside Macstock Conference and Expo is the annual Macstock Film Festival, organized and hosted by Wally Cherwinski. An accomplished videographer in his own right, Wally shares his thoughts on why you (yes, you!) should be creating a submission and joining in the fun. No prizes, no judging and no pressure mean that anyone can be part of the Festival. Wally provides some tips on how to approach a subject, creating something from content you already have, and the emotional impact of preserving memories through video. Visit Macstock Conference and Expo and use the MacVoices discount code MACVOICES to save $30 on your registration fee. Today's edition of MacVoices is supported by MacVoices Live!, our weekly live panel discussion of what is going on in the Apple space as well as the larger tech world, and how it is impacting you. Join us live at YouTube.com/MacVoicesTV at 8 PM Eastern, 5 PM Pacific, or whatever time that is wherever you are, and participate in the chat, or catch the edited and segmented versions of the show on the regular MacVoices channels and feeds. Show Notes: Chapters: 02:22 The Macstock Short Film Festival 04:31 Learning from the Macstock Film Submissions 06:53 Submission Guidelines for Macstock Film Festival 11:36 Creating Professional Videos with iMovie Trailers 13:53 Tips and Tricks for Video Editing 22:18 The Fun and Engrossing Process of Video Editing 25:55 Encouragement to Create and Submit Videos for Macstock Links: Video To Go by Wally Cherwinski in the Apple Books Store Guests: Wally Cherwinski is a videographer based in Ottawa, Canada. Originally trained as a scientist, he spent a portion of his career in research and teaching at the University of Cambridge, England while doubling as a freelance photographer and writer. Later, he joined Canada's National Research Council and spent many years managing communications for the Canadian Space Program. 
Starting with 16mm film, he has written and directed numerous documentaries and television features, including projects with Canada's National Film Board. More recently, he has combined his passion for video with his love of travel. Wally has been a Mac user since the original 128K in 1984 and his Apple "museum" includes 28 Macs (not to mention Newtons, iPods, iPhones & iPads). He has delivered video workshops at Macworld, at Macintosh User Groups in Canada and on three MacMania cruises. He also writes a regular video column in the ScreenCastsOnline monthly magazine. You can connect with him on X, or view his Cirque du Mac videos (and others) on his YouTube channel. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #24150: Road to Macstock - Wally Cherwinski and the Macstock Film Festival

MacVoices Audio

Play Episode Listen Later Jun 7, 2024 28:20


The biggest event inside  Macstock Conference and Expo is the annual Macstock Film Festival, organized and hosted by Wally Cherwinski. An accomplished videographer in his own right, Wally shares his thoughts on why you (yes, you!) should be creating a submission and joining in the fun. No prizes, no judging and no pressure mean that anyone can be part of the Festival. Wally provides some tips on how to approach a subject, creating something from content you already have, and the emotional impact of preserving memories through video. Visit Macstock Conference and Expo and use the MacVoices discount code MACVOICES to save $30 on your registration fee. Today's edition of MacVoices is supported by MacVoices Live!, our weekly live panel discussion of what is going in the Apple space as well as the larger tech world, and how it is impacting you. Join us live at YouTube.com/MacVoicesTV at 8 PM Eastern 5 PM Pacific, or whatever time that is wherever you are and participate in the chat, or catch the edited and segmented versions of the show on the regular MacVoices channels and feeds. Show Notes: Chapters: 02:22 The Macstock Short Film Festival 04:31 Learning from the Macstock Film Submissions 06:53 Submission Guidelines for Macstock Film Festival 11:36 Creating Professional Videos with iMovie Trailers 13:53 Tips and Tricks for Video Editing 22:18 The Fun and Engrossing Process of Video Editing 25:55 Encouragement to Create and Submit Videos for Macstock Links: Video To Go by Wally Cherwinski in the Apple Books Store Guests: Wally Cherwinski is a Videographer based in Ottawa, Canada. Originally trained as a scientist, he spent a portion of his career in research and teaching at the University of Cambridge, England while doubling as a freelance photographer and writer. Later, he joined Canada's National Research Council and spent many years managing communications for the Canadian Space Program. 
Starting with 16mm film, he has written and directed numerous documentaries and television features, including projects with Canada's National Film Board. More recently, he has combined his passion for video with his love of travel. Wally has been a Mac user since the original 128K in 1984 and his Apple "museum" includes 28 Macs (not to mention Newtons, iPods, iPhones & iPads). He has delivered video workshops at Macworld, at Macintosh User Groups in Canada and on three MacMania cruises. He also writes a regular video column in the ScreenCastsOnline monthly magazine. You can connect with him on X, or view his Cirque du Mac videos (and others) on his YouTube channel. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

Build Your Tribe | Grow Your Business with Social Media
How I Got 128k Subscribers + $65k in Revenue in

Build Your Tribe | Grow Your Business with Social Media

Play Episode Listen Later Jun 6, 2024 56:21


Hey there! In this episode of "Build Your Tribe," I dive into the secrets behind my YouTube channel's explosive growth with Liz Germain, founder of VidFluence. Liz breaks down the power of YouTube analytics, evergreen content, and killer strategies for thumbnails and titles that helped us gain over 123K subscribers and $65K in AdSense in just five months! Tune in to learn how you can boost your YouTube game and get those views. Watch this episode on YouTube!! Check out InstaClubHub!!  For Just $7!! Go to InstaClubHub.com/Trial Related Episodes Midlifers Blow Up Your YouTube Channel In 2024 Watch Listen Learn More About Liz: Social Media:  All Platforms @lizdoesvideo Website: channelamplifier.com YouTube @LizDoesVideo Get Your FREE YouTube Growth Hacks Guide    Related Links Check out the top 25 Things You Can Delegate to a BELAY Virtual Assistant Today! Just text TRIBE —that's T-R-I-B-E—to 55123 to get access to this list and get started with BELAY today  Use the service we use to grow our email list, create custom flows, sales funnels and take care of our customers every day

Guys Of A Certain Age
The Summer of '24

Guys Of A Certain Age

Play Episode Listen Later May 25, 2024 37:16


Once a year, on a late spring day when Art usually has something better to do, Jay and Robbie talk about their summer plans.  Welcome to summer.   Movie releases abound, but will they go to the theater or wait till the streaming begins and perhaps watch on Robbie's new deck?  Will Jay get the hang of his fancy device that takes a lot of the stress out of smoking meats, and will his back porch be ready to host the first annual Guys of Summer eat-fest? Will they travel, or just let their kids come to them?   Most importantly, you'll hear Jay's plans for flying through his neighborhood in 2026 if he can raise the $128K necessary to make it happen.  Meanwhile, Robbie will be waiting for a resurrection of the ultimate Bat.  And for the first time in many weeks, Nelvana of the Northern Lights gives hope that not all superheroes from the 1940's are regrettable.   Summer is here, you're going to be working in the yard anyway, so you might as well listen.  

The top AI news from the past week, every ThursdAI

Hello hello everyone, this is Alex, typing these words from beautiful Seattle (really, it only rained once while I was here!) where I'm attending Microsoft's biggest developer conference, BUILD. This week we saw OpenAI get in the news from multiple angles, none of them positive, and Microsoft clapped back at Google from last week with tons of new AI product announcements (CoPilot vs Gemini) and a few new PCs with NPUs (Neural Processing Units) that run alongside the CPU/GPU combo we're familiar with. Those NPUs allow local AI to run on these devices, making them AI native devices! While I'm here I also had the pleasure to participate in the original AI tinkerers, thanks to my friend Joe Heitzberg who operates and runs aitinkerers.org (of which we are a local branch in Denver), and it was amazing to see tons of folks who listen to ThursdAI + read the newsletter and talk about Weave and evaluations with all of them! (Btw, on the left is Vik from Moondream, which we covered multiple times.) Ok, let's get to the news:

TL;DR of all topics covered:

* Open Source LLMs
* HuggingFace commits 10M in ZeroGPU (X)
* Microsoft open sources Phi-3 mini, Phi-3 small (7B), Medium (14B) and vision models w/ 128K context (Blog, Demo)
* Mistral 7B 0.3 - Base + Instruct (HF)
* LMSys created a "hard prompts" category (X)
* Cohere for AI releases Aya 23 - 3 models, 101 languages (X)
* Big CO LLMs + APIs
* Microsoft Build recap - New AI native PCs, Recall functionality, Copilot everywhere
* Will post a dedicated episode to this on Sunday
* OpenAI pauses GPT-4o Sky voice because Scarlett Johansson complained
* Microsoft AI PCs - Copilot+ PCs (Blog)
* Anthropic - Scaling Monosemanticity paper - about mapping the features of an LLM (X, Paper)
* Vision & Video
* OpenBNB - MiniCPM-Llama3-V 2.5 (X, HuggingFace)
* Voice & Audio
* OpenAI pauses Sky voice due to ScarJo hiring legal counsel
* Tools & Hardware
* Humane is looking to sell (blog)

Open Source LLMs

Microsoft open sources Phi-3 mini, Phi-3 small (7B), Medium (14B) and vision models w/ 128K context (Blog, Demo)

Just in time for Build, Microsoft has open sourced the rest of the Phi family of models, specifically the small (7B) and the medium (14B) models on top of the mini one we already knew as Phi-3. All the models have a small context version (4K and 8K) and a large one that goes up to 128K (tho they recommend using the small one if you don't need that whole context), and all can run on device super quick. The models have an MIT license, so use them as you will, and deliver incredible performance relative to their size on benchmarks. Phi-3 mini received an interesting split in the vibes: it was really good for reasoning tasks, but not very creative in its writing, so some folks dismissed it. But it's hard to dismiss these new releases, especially when the benchmarks are that great! LMSys just updated their arena to include a hard prompts category (X), which selects for complex, specific and knowledge-based prompts and scores the models on those. Phi-3 mini actually gets a big boost in ELO ranking when filtered on hard prompts and beats GPT-3.5

The top AI news from the past week, every ThursdAI

Wow, holy s**t, insane, overwhelming, incredible, the future is here!, "still not there", there are many more words to describe this past week. (TL;DR at the end of the blogpost.) I had a feeling it was going to be a big week, and the companies did NOT disappoint, so this is going to be a very big newsletter as well. As you may have read last week, I was very lucky to be in San Francisco the weekend before Google IO, to co-host a hackathon with the Meta Llama-3 team, and it was a blast; I will add my notes on that in This Week's Buzz section. Then on Monday, we all got to watch the crazy announcements from OpenAI, namely a new flagship model called GPT-4o (we were right, it previously was im-also-a-good-gpt2-chatbot) that's twice as fast, 50% cheaper (in English, significantly more so in other languages, more on that later) and is Omni (that's the o), which means it is end to end trained with voice, vision, and text on inputs, and can generate text, voice and images on the output. A true MMIO (multimodal on inputs and outputs, that's not the official term) is here, and it has some very very surprising capabilities that blew us all away, namely the ability to ask the model to "talk faster" or "more sarcasm in your voice" or "sing like a pirate". Though we didn't yet get that functionality with the GPT-4o model, it is absolutely and incredibly exciting. Oh, and it's available to everyone for free! That's GPT-4 level intelligence, for free, for everyone, without having to log in! What's also exciting was how immediate it was: apparently not only is the model itself faster (unclear if it's due to newer GPUs or distillation or some other crazy advancements or all of the above), but training an end to end omnimodel reduces the latency so much that it becomes an incredibly immediate conversation partner, one that you can interrupt, ask to recover from a mistake, and that can hold a conversation very very well. 
So well, that indeed it seemed like the Waifu future (digital girlfriends/wives) is very close for some folks who would want it. While we didn't get to try it (we got GPT-4o but not the new voice mode, as Sam confirmed), OpenAI released a bunch of videos of their employees chatting with Omni (that's my nickname, use it if you'd like), and many online highlighted how thirsty / flirty it sounded. I downloaded all the videos for an X thread and I named one girlfriend.mp4; well, just judge for yourself why. Ok, that's not all that OpenAI updated or shipped. They also updated the tokenizer, which is incredible news to folks all around, specifically the rest of the world. The new tokenizer reduces the previous "foreign language tax" by a LOT, making the model way way cheaper for the rest of the world as well. One last announcement from OpenAI was the desktop app experience, and this one I actually got to use a bit, and it's incredible. MacOS only for now, this app comes with a launcher shortcut (kind of like Raycast) that lets you talk to ChatGPT right then and there, without opening a new tab, without additional interruptions, and it can even understand what you see on the screen, help you understand code, or jokes, or look up information. Here's just one example I just had over at X. And sure, you could always do this with another tab, but the ability to do it without a context switch is a huge win. OpenAI had to do their demo 1 day before Google IO, but even during the excitement about Google IO, they announced that Ilya is not only alive, but is also departing from OpenAI, which was followed by an announcement from Jan Leike (who co-headed the superalignment team together with Ilya) that he left as well. This to me seemed like well-executed timing to dampen the Google news a bit. 
Google is BACK, backer than ever: Alex's Google IO recap

On Tuesday morning I showed up to Shoreline Amphitheatre in Mountain View, together with a creators/influencers delegation, as we all watched the incredible firehose of announcements that Google had prepared for us. TL;DR - Google is adding Gemini and AI into all its products across Workspace (Gmail, Chat, Docs) and into other cloud services like Photos, where you'll now be able to ask your photo library for specific moments. They introduced over 50 product updates, and I don't think it makes sense to cover all of them here, so I'll focus on what we do best. "Google will do the Googling for you." Gemini 1.5 Pro is now their flagship model (remember Ultra? where is that?

Papers Read on AI
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

Papers Read on AI

Play Episode Listen Later May 12, 2024 41:56


We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference through significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at an economical cost through sparse computation. Compared with DeepSeek 67B, DeepSeek-V2 achieves significantly stronger performance, and meanwhile saves 42.5% of training costs, reduces the KV cache by 93.3%, and boosts the maximum generation throughput to 5.76 times. We pretrain DeepSeek-V2 on a high-quality and multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock its potential. Evaluation results show that, even with only 21B activated parameters, DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. 2024: DeepSeek-AI https://arxiv.org/pdf/2405.04434
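To see why compressing the KV cache matters at a 128K context, here is a back-of-the-envelope sketch. The layer count, head dimensions and latent size below are made up for illustration, not DeepSeek-V2's actual configuration:

```python
def kv_cache_bytes(layers, seq_len, kv_heads, head_dim, bytes_per_val=2):
    """Standard attention KV cache: keys and values stored for every
    layer, head and position (fp16 -> 2 bytes per value)."""
    return 2 * layers * seq_len * kv_heads * head_dim * bytes_per_val

def latent_cache_bytes(layers, seq_len, latent_dim, bytes_per_val=2):
    """MLA-style cache: only a compressed latent vector is kept per
    layer and position, from which keys/values are reconstructed."""
    return layers * seq_len * latent_dim * bytes_per_val

# illustrative numbers for a long 128K-token context
full = kv_cache_bytes(layers=60, seq_len=128_000, kv_heads=32, head_dim=128)
latent = latent_cache_bytes(layers=60, seq_len=128_000, latent_dim=512)
print(f"full: {full / 2**30:.1f} GiB, latent: {latent / 2**30:.1f} GiB, "
      f"saved: {1 - latent / full:.1%}")
```

Even with these invented dimensions, caching one small latent vector instead of full keys and values per head cuts memory by over 90%, which is the same order of reduction the abstract reports.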

Make Money as a Life Coach
Ep #279: 156K in 12 Months to 128K in 3 Months by Result Focused Investing with Megan Wing

Make Money as a Life Coach

Play Episode Listen Later May 1, 2024 64:55


Megan Wing is a business coach for entrepreneurs and the creator of the Six Figure Systems framework. Coaching helped solidify and up-level her self-concept as a CEO, and she's learned valuable lessons along the way about what it means to be in business. She went from making $156K all of last year to making $128K by the end of this first quarter, and she's here to give you a masterclass in all things entrepreneurship.   We explore the importance of aggressively investing in your business and how she made hitting six figures statistically inevitable. Megan is sharing her thoughts on the core tenets of running a successful coaching business, the importance of having a community on your journey, and why tracking data is one of your biggest responsibilities as a coach.   If you want to start making serious money as a coach, you need to check out 2K for 2K. Click here to join: https://staceyboehman.com/2kfor2k!

El Mundo del Spectrum Podcast
12x05 Slow Glass - System 4 - Servicio Técnico - El Mundo del Spectrum Podcast

El Mundo del Spectrum Podcast

Play Episode Listen Later May 1, 2024 275:03


We return with an intense program you will enjoy from start to finish. Join us on a trip to the past in which we remember those budget games from System 4, interview the authors of Cyberbig (Slow Glass) and a very special technician from the Continente repair service, and premiere a new section called "Colour Clash". First we review the news, including the release of a documentary on the Sinclair C5, books by illustrious authors, the new Retrogamer, and the deaths of Enrique Ventura, Frederick David Thorpe and Miguel Durán, whom Rafa Corrales remembers with a few words. Next we premiere the "Colour Clash" section, run by our colleague Aitor Chávez, whose first episode is titled "Robocop, a newspaper stand and a bump on the bar". In this new section we hope you will be the protagonists, telling stories and anecdotes about the Spectrum. Don't hesitate to send us an email or a message and we will get in touch. We continue with the interview with Gabriel Cardel, a Philips television technician who boldly and audaciously took on repairing Spectrums for Continente, and who tells us about it accompanied by his son Artur, a colleague at El Mundo del Spectrum. Then comes the return of the "El Altavoz" section: Alejandro reads a message from Natxo Cruz, a listener who discovered us a year ago, has devoured all our programs since, and tells us about his experience with the Spectrum. We then tackle the main topic: "System 4, budget games for the Spectrum". We review the company's own catalogue of games and what it meant to be able to buy originals for 295, 395 or 595 pesetas. To finish, we invite you to lunch with our colleague Fede Jerez and two very special guests: Alberto Pérez Torres and Manuel Domínguez Zaragoza (Slow Glass). 
Ellos fueron los autores de Cyberbig y nos contarán infinidad de detalles de la época en una entrevista muy extensa, interesante y entrañable. Como debe ser se escucharán cubiertos, platos, tazas, viento y todo tipo de sonido ambiente que te permitirá sentirte como uno más en la mesa del restaurante. Este es el menú que te ofrecemos en El Mundo del Spectrum Podcast 12×05. Esperamos que te guste y que la espera haya merecido la pena. Dedicamos el programa a Tecnoretro, miembro de RetroMallorca. Programa con Jesús Martínez del Vas, Jesús Relinque (Pedja) y dirigido y presentado por Alejandro Ibáñez.

Beyond The Systems Podcast | Business Systems & Growth Strategies For Your Online Business

In this episode, I discuss my client's surprise $128,000 month and the behind-the-scenes factors that contributed to this success. The client had a team and processes in place, allowing her to focus on high-level tasks. I dive into the importance of setting up passive funnels and aligning them with the customer journey to support audience growth. A light launch strategy, focused on emails and YouTube, was implemented with careful planning. The key takeaway is the significance of consistency and long-term commitment to systems and processes for sustainable success.

Topics covered:
Having a team and processes in place allows entrepreneurs to focus on high-level tasks and avoid getting caught up in day-to-day operations.
Setting up passive funnels and aligning them with the customer journey can support audience growth and revenue generation.
A well-planned light launch strategy, focused on specific channels, can be effective in generating sales.
Consistency and long-term commitment to systems and processes are crucial for sustainable success.

Connect with Sam Whisnant:
Website: https://www.systemswithsam.com/services
Instagram: https://www.instagram.com/systemswithsam/

Graham Allen’s Dear America Podcast
EP 623 | THE RED WAVE IS FINALLY HERE!!! + Dems ARE Terrified!!! + BYE BYE Nikki!!!

Graham Allen’s Dear America Podcast

Play Episode Listen Later Mar 6, 2024 78:52


In an unprecedented Super Tuesday, Donald Trump goes 14-1! Nikki has decided to "suspend" her campaign but not to endorse Trump at the moment. (Imagine that) The Dems should be terrified about November! ► Today's Sponsors: Text 231-231 and use keyword GRAHAM to get a bottle of Nugenix Thermo X for FREE. Protect your savings with the precious metal IRA specialist: www.birchgold.com or text GRAHAM to 989898 ► Watch LIVE on Rumble: https://rumble.com/c/GrahamAllenOfficial ► Support freedom with 9/12 Merch: https://912united.com Learn more about your ad choices. Visit megaphone.fm/adchoices

Dave and Dujanovic
Walmart increasing store manager pay to $128K

Dave and Dujanovic

Play Episode Listen Later Jan 22, 2024 19:04


"Store Managers, We're Investing in You." In a press release, Walmart says it will pay store managers a new average salary of $128,000 a year. The raise kicks in Feb. 1, and Walmart notes that its pay structures had not been adjusted in more than a decade. Dave and Debbie discuss and take listener calls about Walmart's investment in its employees.

Dave and Dujanovic
Dave & Dujanovic Full Show January 22nd, 2024: Ron Desantis ends presidential Campaign

Dave and Dujanovic

Play Episode Listen Later Jan 22, 2024 119:10


Cold Plunging: How should this growing trend be regulated in Utah?
Utah's controversial social media law on hold
Oakland A's interested in new Daybreak baseball stadium
Walmart increasing store manager pay to $128K

China In Focus
Hong Kong Places $128k Bounty on U.S. Citizen

China In Focus

Play Episode Listen Later Dec 16, 2023 21:00


Hong Kong Places $128k Bounty on U.S. Citizen
Yellen: U.S. Aims to Repair Relationship with China
Trump: Would Renege on $3 Billion U.S. Pledge to Climate Fund
House Passes NDAA Despite Policy Objections
FDA Seizes Millions of Illegal E-Cigarettes
World Bank: China's Economy Will Slow in 2024
Putin Praises China Ties as Trade Hits $200 Billion
Beijing Wraps Up Trade Probe Ahead of Taiwan Election
Snowstorms Pummel Northern, Central China
500+ Injured in Beijing Subway Accident
China Sends COVID-19, U.S. Pays for Tests: Rep. Harshbarger

Highlights from The Pat Kenny Show
Buyers need to earn at least 128k to afford a new home in Dublin

Highlights from The Pat Kenny Show

Play Episode Listen Later Dec 7, 2023 18:40


Buyers need to earn at least €128k to afford a new home in Dublin. A new report by the Society of Chartered Surveyors Ireland has found that the national average cost of delivering a three-bedroom semi-detached house is €397k. Speaking to Pat in response to the findings of this report was Karl Deeter, CEO of OnlineApplications.com.

Leveraging AI
38 | 4 Concepts You Must Know in Order to Maximize the ROI of AI Implementation

Leveraging AI

Play Episode Listen Later Nov 14, 2023 35:39 Transcription Available


What if AI could 10x your business growth? On today's episode of Leveraging AI, Isar Meitis talks about the secrets to leveraging AI to take your business from surviving to thriving.

He discusses:
Stop chasing efficiency gains - focus on transformational outcomes with AI
Professions → Skills - how AI makes specialized skills accessible
Use your data and optimization to crush the competition
AI enables infinite scalability - break traditional business bottlenecks

Whether you're a startup or an enterprise, these practical AI strategies will help you tap into astounding growth opportunities.

AI news of the week:
China boldly claims it has a plan to mass-produce humanoid robots that can 'reshape the world' within 2 years
Elon Musk says AI will remove need for jobs and create 'universal high income,' but workers don't want to wait for robots to get financial relief
Unemployed man uses AI to apply for 5,000 jobs, gets 20 interviews
Microsoft's GitHub announces Copilot assistant that can learn about companies' private code
New emotional AI prompting method generates improved results
An AI just negotiated a contract for the first time ever - and no human was involved

The Steve Gruber Show
Chris Chmielenski, 128K Border Apprehensions in Feb, GOP Promised to Act on Border Security

The Steve Gruber Show

Play Episode Listen Later Mar 8, 2023 7:30


Chris Chmielenski is the NumbersUSA Vice-President. 128K Border Apprehensions in Feb, GOP Promised to Act on Border Security

Retro Gaming Discussion Show
318 - James Bond : License To Game

Retro Gaming Discussion Show

Play Episode Listen Later Mar 1, 2023 197:11


In our latest episode Kingy and Drisky don their tuxedos and sup their vodka Martinis, shaken not stirred, as they take a look through 40 years of James Bond video games for the 40th anniversary this March. Not only that: April also marks 70 years since James Bond was first introduced to the world in the book Casino Royale, and October last year was the 60th anniversary of the Bond films, so there are plenty of reasons to celebrate Bond. So sit back and enjoy as we take you through 40 years of great gaming history with Bond.

James Bond Blu Ray Collection: https://www.amazon.co.uk/James-Bond-Collection-1-24-Blu-ray/dp/B074T8XN1Q/ref=sr_1_8?crid=2V7LY2P5J0MWA&keywords=james+bond+box+set&qid=1676922538&sprefix=james+bond+%2Caps%2C100&sr=8-8

Here is the original tune the Bond theme was based on, from an abandoned Indian-inspired musical in 1950. Monty Norman, who was brought on to do the Dr. No music and wrote the Ursula Andress song in Dr. No, didn't have the sound they wanted, so they brought in John Barry at the last minute, who took the tune and jazzed it up: https://youtu.be/g6EuzGhIyRQ

1. Shaken But Not Stirred 1982 Spectrum
Play Online: Spectrum: https://zxart.ee/eng/software/game/arcade/action/shaken-but-not-stirred/shaken-but-not-stirred/#

2. James Bond 007 1983 VCS, C64
Play Online: Atari 2600 VCS: https://archive.org/details/atari_2600_james_bond_007_james_bond_agent_007_1983_parker_brothers_joe_gaucher_l

3. A View to a Kill 1985 (both PC text adventure and Domark)
C64: https://archive.org/details/View_to_a_Kill_A_1985_Domark_cr_DD
PC Mindscape Text Adventure: https://archive.org/details/a2woz_James_Bond_007_in_A_View_To_A_Kill_1985_Mindscape
Manuals for PC Game: https://www.mocagh.org/loadpage.php?getgame=viewtoakill-alt

4. Goldfinger 1986 (PC text adventure)
PC Mindscape Text Adventure: https://archive.org/details/msdos_James_Bond_007_-_Goldfinger_1986
Manual: https://www.mocagh.org/loadpage.php?getgame=goldfinger

5.
The Living Daylights 1987
Amstrad CPC: https://archive.org/details/007_The_Living_Daylights_1987_Domark

6. Live and Let Die 1988
C64: https://archive.org/details/Live_and_Let_Die_1988_Elite

7. Licence to Kill 1989
Spectrum: https://archive.org/details/zx_007_Licence_to_Kill_1989_Domark_a_128K

8. Spectrum +2 Action Pack 1989
Ad: ZX Spectrum +2 James Bond 007 Action Pack 1990 TV Commercial
Q Audio Tapes on YouTube: James Bond Action Pack - Desmond Llewellyn's Audio Briefing as Q
Q Audio Tapes in MP3 Format: https://archive.org/download/World_of_Spectrum_June_2017_Mirror/World%20of%20Spectrum%20June%202017%20Mirror.zip/World%20of%20Spectrum%20June%202017%20Mirror/sinclair/music/bonustracks/JamesBond007ActionPack.mp3.zip

9. The Spy Who Loved Me 1990
Amstrad CPC: https://archive.org/details/007_The_Spy_Who_Loved_Me_1990_Domark_cr_NPS_t_2_NPS
Amiga Manual: https://archive.org/details/TheSpyWhoLovedMe/page/n4/mode/1up

10. James Bond: The Stealth Affair 1990
PC: https://archive.org/details/msdos_James_Bond_007_-_The_Stealth_Affair_1990

11. Octopussy (unlicensed Slovakian game) 1992, translated and given a 128K mod in 2018
Spectrum: https://archive.org/details/zx_James_Bond_Octopussy_1992_Ultrasoft_sk_128K_incomplete
2018 128K Mod: https://vtrd.in/release.php?r=0ea1fbab0fdaf6563d85d2d4e7308a19&fbclid=IwAR1m-XGvc0BcogaUzLg9z2hZLtoEkvaLpA1j_7ggLYV7z4E7CSW7z3J_mtY

12. James Bond 007: The Duel (Mega Drive and Master System)
Vid: James Bond 007: The Duel (Sega Genesis/Mega Drive) - Full Walkthrough HD

13. GoldenEye 007 N64 1997
Buy Rare Replay digitally to own GoldenEye on Xbox (currently on sale as of 1st March): https://www.xbox.com/en-GB/games/store/rare-replay/BWXKD3FFMNP3
GoldenEye: Source: https://www.geshl2.com/
XBLA version and new release:
https://gamingretro.co.uk/how-to-play-goldeneye-007-xbox-360-xbla-game-on-windows-pc/
1964 emulator to play GoldenEye, Goldfinger 64 and Perfect Dark with upscaled visuals and optional mouse-and-keyboard support: https://github.com/Graslu/1964GEPD/releases/tag/latest?fbclid=IwAR32q74h3xKUIRUm5rHXJ_c3JBnMjAKMHgcwh4NeQa-vaRYXD1ikHZWLbtc
XBLA version never officially released: GoldenEye 007 XBLA - Longplay (4K 60FPS)

14. James Bond 007 1998 Game Boy
James Bond 007 (GB) - Level 1&2 (China/London) - Long Play

15. Tomorrow Never Dies 1999 PS1
Old UK gaming advert - Playstation James Bond Tomorrow Never Dies 1999

16. The World Is Not Enough 2000 N64 and PS1 (different games; the N64 version is better)

17. 007 Racing PS1

18. Agent Under Fire (GC, PS2, Xbox)

19. NightFire 2002 (GC, PS2, Xbox) - plus a terrible PC port and a different GBA game

20. Everything or Nothing (GC, PS2, Xbox) - plus a different GBA game

21. GoldenEye: Rogue Agent (GC, PS2, Xbox) - plus a DS version

22. From Russia with Love 2005 (GC, PS2, Xbox, PSP)

23. Quantum of Solace (PS3, 360, Wii, PC) - plus a different DS game

24. Blood Stone (PS3, 360, PC) - plus a different DS game

25. GoldenEye 007 2010 Wii and GoldenEye 007: Reloaded 2011 PS3, 360

26. 007 Legends 2012 (PS3, 360, PC, Wii U)

27. Goldfinger 64, a fan-made game patching the GoldenEye N64 ROM
Patched N64 ROM: https://drive.google.com/file/d/1-CDp1qcn55XUVWy8IK6arwuzMC3Y37xD/view?usp=sharing

LESSONS FROM TIK TOK
023 - How to Discover Your Purpose, Change your Habits, and Trust Intuition with Dr. Carolyn Kurle

LESSONS FROM TIK TOK

Play Episode Listen Later Jan 13, 2023 56:29


If you feel like you've lost focus in life, this episode, featuring Dr. Carolyn Kurle, is going to rock your world. Carolyn's forthcoming book, THE GUIDANCE GROOVE: Escape Unproductive Habits, Trust Your Intuition, and Be True [Releases February 2023], is all about trusting your intuition, going with your gut, and manifesting the best life possible. A Doctor of Biology, Dr. Kurle knows our instinct is to go with logic, but she also knows there's a voice inside you that knows exactly what to do when it's time to make a choice. Every single time. That is your authenticity, your truth, your spark, your own personal Guidance Groove. LEARN MORE AT: https://www.guidancegroove.com

IF YOU NEED A FRESH START, grab my $10 book with over 50 pages of MINDSET, WELLNESS, AND MANIFESTATION activities to help pivot your life today! https://www.kamelahurley.com/offers/DPjmvLyF/checkout

LEARN MORE ABOUT ME
If you want to learn more about me, reading, or my podcast, please use the links:
WEBSITE http://www.kamelahurley.com
TIK TOK with 128K+ https://www.tiktok.com/@kamelahurley
FACEBOOK GROUP with 15K+ Followers www.facebook.com/groups/148443738659491/
INSTAGRAM https://www.instagram.com/kamelahurley/

LIGHT SWITCH: TURN YOUR INTUITION ON PODCAST
SPOTIFY https://open.spotify.com/show/7oYeEdnUc8CKV5iSi3v2lQ
APPLE https://apple.co/3rePHeS

Before you go... I want you to know what a gift you are... how special you are... that you matter to this world... that there is purpose in your experiences, hardships, and/or wins! Sending you light, love, peace, protection, and abundance. YOU ARE THE MAIN CHARACTER IN YOUR LIFE, AND YOU GET TO WRITE THE STORY SO MANIFEST MIRACLES!!

Lynch and Taco
5:35 Idiotology December 7, 2022

Lynch and Taco

Play Episode Listen Later Dec 7, 2022 10:10


Office worker is getting paid $128K a year to just 'eat lunch and read newspapers', Philly gas station owner is so fed up with crime that he has hired heavily armed guards to protect his business, A bologna mascot upstages Santa every year at Canadian Christmas parade

Wholesale Hotline
$128k First Deal -- How to Generate Massive Profits in a Super Competitive Market | TTP Breakout

Wholesale Hotline

Play Episode Listen Later Aug 31, 2022 40:40


With its tropical breeze and booming metropolis, there is no doubt that Miami is at the top of the real estate market. With that comes great competition from seasoned real estate investors. But that didn't stop our guest Robin Perez from going down the rabbit hole and joining the real estate frenzy two years ago. He jumped into the market with a ton of confidence and was ready to do whatever it takes to "make it happen"... which he did! Robin joins Brent Daniels and shares the story of how he's been able to compete and climb the success ladder in this competitive market with the use of one unique strategy. If you like what you heard in today's episode, apply to Brent's TTP Cold Calling training program today to receive the systems and strategies you need to become a successful real estate wholesaler... just like Robin.

Show notes:
(0:46) Beginning of today's episode
(7:53) Keeping your pipeline of leads fresh
(14:14) Robin's advice to people who are struggling in real estate
(21:55) Concentrate on one market for the first 12 months
(23:07) Robin talks about his agent outreach strategy

Resources:
Want to learn more? Check out our TTP training program.
Try the 7 Levels Deep exercise to discover your why.
Check out Brent Daniels' Youtube channel.
To speak with Brent or one of our other expert coaches, call (281) 835-4201 or schedule here.
Please give us a rating and let us know how we are doing! ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ ☎️ Welcome to Wholesale Hotline & TTP Breakout