Podcasts about V3

  • 425PODCASTS
  • 900EPISODES
  • 46mAVG DURATION
  • 5WEEKLY NEW EPISODES
  • Sep 29, 2025LATEST

POPULARITY

20172018201920202021202220232024

Categories



Best podcasts about V3

Latest podcast episodes about V3

El podcast de Formación Ninja
Estudiar Una Oposición Nicho, Cómo Planificarse | Tutoría Formación Ninja

El podcast de Formación Ninja

Play Episode Listen Later Sep 29, 2025 79:23


¿Te sientes perdido porque “tienes mil planes a la vez”… pero en realidad no tienes ninguno? En este episodio desmontamos esa ansiedad y la convertimos en un plan que funciona: ritmos por tema, vueltas rápidas y foco brutal en lo que de verdad preguntan en el examen.¿Quieres prepararte con nosotros?https://formacion.ninja/?utm_source=podcastNuestro Canal de WhatsApp:https://whatsapp.com/channel/0029VaDKoSOCcW4tN3Cuh10QSi te ha gustado el vídeo, dale 5 estrellas

The top AI news from the past week, every ThursdAI

This is a free preview of a paid episode. To hear more, visit sub.thursdai.newsHola AI aficionados, it's yet another ThursdAI, and yet another week FULL of AI news, spanning Open Source LLMs, Multimodal video and audio creation and more! Shiptember as they call it does seem to deliver, and it was hard even for me to follow up on all the news, not to mention we had like 3-4 breaking news during the show today! This week was yet another Qwen-mas, with Alibaba absolutely dominating across open source, but also NVIDIA promising to invest up to $100 Billion into OpenAI. So let's dive right in! As a reminder, all the show notes are posted at the end of the article for your convenience. ThursdAI - Because weeks are getting denser, but we're still here, weekly, sending you the top AI content! Don't miss outTable of Contents* Open Source AI* Qwen3-VL Announcement (Qwen3-VL-235B-A22B-Thinking):* Qwen3-Omni-30B-A3B: end-to-end SOTA omni-modal AI unifying text, image, audio, and video* DeepSeek V3.1 Terminus: a surgical bugfix that matters for agents* Evals & Benchmarks: agents, deception, and code at scale* Big Companies, Bigger Bets!* OpenAI: ChatGPT Pulse: Proactive AI news cards for your day* XAI Grok 4 fast - 2M context, 40% fewer thinking tokens, shockingly cheap* Alibaba Qwen-Max and plans for scaling* This Week's Buzz: W&B Fully Connected is coming to London and Tokyo & Another hackathon in SF* Vision & Video: Wan 2.2 Animate, Kling 2.5, and Wan 4.5 preview* Moondream-3 Preview - Interview with co-founders Via & Jay* Wan open sourced Wan 2.2 Animate (aka “Wan Animate”): motion transfer and lip sync* Kling 2.5 Turbo: cinematic motion, cheaper and with audio* Wan 4.5 preview: native multimodality, 1080p 10s, and lip-synced speech* Voice & Audio* ThursdAI - Sep 25, 2025 - TL;DR & Show notesOpen Source AIThis was a Qwen-and-friends week. I joked on stream that I should just count how many times “Alibaba” appears in our show notes. It's a lot.Qwen3-VL Announcement (Qwen3-VL-235B-A22B-Thinking): (X, HF, Blog, Demo)Qwen 3 launched earlier as a text-only family; the vision-enabled variant just arrived, and it's not timid. The “thinking” version is effectively a reasoner with eyes, built on a 235B-parameter backbone with around 22B active (their mixture-of-experts trick). What jumped out is the breadth of evaluation coverage: MMU, video understanding (Video-MME, LVBench), 2D/3D grounding, doc VQA, chart/table reasoning—pages of it. They're showing wins against models like Gemini 2.5 Pro and GPT‑5 on some of those reports, and doc VQA is flirting with “nearly solved” territory in their numbers.Two caveats. First, whenever scores get that high on imperfect benchmarks, you should expect healthy skepticism; known label issues can inflate numbers. Second, the model is big. Incredible for server-side grounding and long-form reasoning with vision (they're talking about scaling context to 1M tokens for two-hour video and long PDFs), but not something you throw on a phone.Still, if your workload smells like “reasoning + grounding + long context,” Qwen 3 VL looks like one of the strongest open-weight choices right now.Qwen3-Omni-30B-A3B: end-to-end SOTA omni-modal AI unifying text, image, audio, and video (HF, GitHub, Qwen Chat, Demo, API)Omni is their end-to-end multimodal chat model that unites text, image, and audio—and crucially, it streams audio responses in real time while thinking separately in the background. Architecturally, it's a 30B MoE with around 3B active parameters at inference, which is the secret to why it feels snappy on consumer GPUs.In practice, that means you can talk to Omni, have it see what you see, and get sub-250 ms replies in nine speaker languages while it quietly plans. It claims to understand 119 languages. When I pushed it in multilingual conversational settings it still code-switched unexpectedly (Chinese suddenly appeared mid-flow), and it occasionally suffered the classic “stuck in thought” behavior we've been seeing in agentic voice modes across labs. But the responsiveness is real, and the footprint is exciting for local speech streaming scenarios. I wouldn't replace a top-tier text reasoner with this for hard problems, yet being able to keep speech native is a real UX upgrade.Qwen Image Edit, Qwen TTS Flash, and Qwen‑GuardQwen's image stack got a handy upgrade with multi-image reference editing for more consistent edits across shots—useful for brand assets and style-tight workflows. TTS Flash (API-only for now) is their fast speech synth line, and Q‑Guard is a new safety/moderation model from the same team. It's notable because Qwen hasn't really played in the moderation-model space before; historically Meta's Llama Guard led that conversation.DeepSeek V3.1 Terminus: a surgical bugfix that matters for agents (X, HF)DeepSeek whale resurfaced to push a small 0.1 update to V3.1 that reads like a “quality and stability” release—but those matter if you're building on top. It fixes a code-switching bug (the “sudden Chinese” syndrome you'll also see in some Qwen variants), improves tool-use and browser execution, and—importantly—makes agentic flows less likely to overthink and stall. On the numbers, Humanities Last Exam jumped from 15 to 21.7, while LiveCodeBench dipped slightly. That's the story here: they traded a few raw points on coding for more stable, less dithery behavior in end-to-end tasks. If you've invested in their tool harness, this may be a net win.Liquid Nanos: small models that extract like they're big (X, HF)Liquid Foundation Models released “Liquid Nanos,” a set of open models from roughly 350M to 2.6B parameters, including “extract” variants that pull structure (JSON/XML/YAML) from messy documents. The pitch is cost-efficiency with surprisingly competitive performance on information extraction tasks versus models 10× their size. If you're doing at-scale doc ingestion on CPUs or small GPUs, these look worth a try.Tiny IBM OCR model that blew up the charts (HF)We also saw a tiny IBM model (about 250M parameters) for image-to-text document parsing trending on Hugging Face. Run in 8-bit, it squeezes into roughly 250 MB, which means Raspberry Pi and “toaster” deployments suddenly get decent OCR/transcription against scanned docs. It's the kind of tiny-but-useful release that tends to quietly power entire products.Meta's 32B Code World Model (CWM) released for agentic code reasoning (X, HF)Nisten got really excited about this one, and once he explained it, I understood why. Meta released a 32B code world model that doesn't just generate code - it understands code the way a compiler does. It's thinking about state, types, and the actual execution context of your entire codebase.This isn't just another coding model - it's a fundamentally different approach that could change how all future coding models are built. Instead of treating code as fancy text completion, it's actually modeling the program from the ground up. If this works out, expect everyone to copy this approach.Quick note, this one was released with a research license only! Evals & Benchmarks: agents, deception, and code at scaleA big theme this week was “move beyond single-turn Q&A and test how these things behave in the wild.” with a bunch of new evals released. I wanted to cover them all in a separate segment. OpenAI's GDP Eval: “economically valuable tasks” as a bar (X, Blog)OpenAI introduced GDP Eval to measure model performance against real-world, economically valuable work. The design is closer to how I think about “AGI as useful work”: 44 occupations across nine sectors, with tasks judged against what an industry professional would produce.Two details stood out. First, OpenAI's own models didn't top the chart in their published screenshot—Anthropic's Claude Opus 4.1 led with roughly a 47.6% win rate against human professionals, while GPT‑5-high clocked in around 38%. Releasing a benchmark where you're not on top earns respect. Second, the tasks are legit. One example was a manufacturing engineer flow where the output required an overall design with an exploded view of components—the kind of deliverable a human would actually make.What I like here isn't the precise percent; it's the direction. If we anchor progress to tasks an economy cares about, we move past “trivia with citations” and toward “did this thing actually help do the work?”GAIA 2 (Meta Super Intelligence Labs + Hugging Face): agents that execute (X, HF)MSL and HF refreshed GAIA, the agent benchmark, with a thousand new human-authored scenarios that test execution, search, ambiguity handling, temporal reasoning, and adaptability—plus a smartphone-like execution environment. GPT‑5-high led across execution and search; Kimi's K2 was tops among open-weight entries. I like that GAIA 2 bakes in time and budget constraints and forces agents to chain steps, not just spew plans. We need more of these.Scale AI's “SWE-Bench Pro” for coding in the large (HF)Scale dropped a stronger coding benchmark focused on multi-file edits, 100+ line changes, and large dependency graphs. On the public set, GPT‑5 (not Codex) and Claude Opus 4.1 took the top two slots; on a commercial set, Opus edged ahead. The broader takeaway: the action has clearly moved to test-time compute, persistent memory, and program-synthesis outer loops to get through larger codebases with fewer invalid edits. This aligns with what we're seeing across ARC‑AGI and SWE‑bench Verified.The “Among Us” deception test (X)One more that's fun but not frivolous: a group benchmarked models on the social deception game Among Us. OpenAI's latest systems reportedly did the best job both lying convincingly and detecting others' lies. This line of work matters because social inference and adversarial reasoning show up in real agent deployments—security, procurement, negotiations, even internal assistant safety.Big Companies, Bigger Bets!Nvidia's $100B pledge to OpenAI for 10GW of computeLet's say that number again: one hundred billion dollars. Nvidia announced plans to invest up to $100B into OpenAI's infrastructure build-out, targeting roughly 10 gigawatts of compute and power. Jensen called it the biggest infrastructure project in history. Pair that with OpenAI's Stargate-related announcements—five new datacenters with Oracle and SoftBank and a flagship site in Abilene, Texas—and you get to wild territory fast.Internal notes circulating say OpenAI started the year around 230MW and could exit 2025 north of 2GW operational, while aiming at 20GW in the near term and a staggering 250GW by 2033. Even if those numbers shift, the directional picture is clear: the GPU supply and power curves are going vertical.Two reactions. First, yes, the “infinite money loop” memes wrote themselves—OpenAI spends on Nvidia GPUs, Nvidia invests in OpenAI, the market adds another $100B to Nvidia's cap for good measure. But second, the underlying demand is real. If we need 1–8 GPUs per “full-time agent” and there are 3+ billion working adults, we are orders of magnitude away from compute saturation. The power story is the real constraint—and that's now being tackled in parallel.OpenAI: ChatGPT Pulse: Proactive AI news cards for your day (X, OpenAI Blog)In a #BreakingNews segment, we got an update from OpenAI, that currently works only for Pro users but will come to everyone soon. Proactive AI, that learns from your chats, email and calendar and will show you a new “feed” of interesting things every morning based on your likes and feedback! Pulse marks OpenAI's first step toward an AI assistant that brings the right info before you ask, tuning itself with every thumbs-up, topic request, or app connection. I've tuned mine for today, we'll see what tomorrow brings! P.S - Huxe is a free app from the creators of NotebookLM (Ryza was on our podcast!) that does a similar thing, so if you don't have pro, check out Huxe, they just launched! XAI Grok 4 fast - 2M context, 40% fewer thinking tokens, shockingly cheap (X, Blog)xAI launched Grok‑4 Fast, and the name fits. Think “top-left” on the speed-to-cost chart: up to 2 million tokens of context, a reported 40% reduction in reasoning token usage, and a price tag that's roughly 1% of some frontier models on common workloads. On LiveCodeBench, Grok‑4 Fast even beat Grok‑4 itself. It's not the most capable brain on earth, but as a high-throughput assistant that can fan out web searches and stitch answers in something close to real time, it's compelling.Alibaba Qwen-Max and plans for scaling (X, Blog, API)Back in the Alibaba camp, they also released their flagship API model, Qwen 3 Max, and showed off their future roadmap. Qwen-max is over 1T parameters, MoE that gets 69.6 on Swe-bench verified and outperforms GPT-5 on LMArena! And their plan is simple: scale. They're planning to go from 1 million to 100 million token context windows and scale their models into the terabytes of parameters. It culminated in a hilarious moment on the show where we all put on sunglasses to salute a slide from their presentation that literally said, “Scaling is all you need.” AGI is coming, and it looks like Alibaba is one of the labs determined to scale their way there. Their release schedule lately (as documented by Swyx from Latent.space) is insane. This Week's Buzz: W&B Fully Connected is coming to London and Tokyo & Another hackathon in SFWeights & Biases (now part of the CoreWeave family) is bringing Fully Connected to London on Nov 4–5, with another event in Tokyo on Oct 31. If you're in Europe or Japan and want two days of dense talks and hands-on conversations with teams actually shipping agents, evals, and production ML, come hang out. Readers got a code on stream; if you need help getting a seat, ping me directly.Links: fullyconnected.comWe are also opening up registrations to our second WeaveHacks hackathon in SF, October 11-12, yours trully will be there, come hack with us on Self Improving agents! Register HEREVision & Video: Wan 2.2 Animate, Kling 2.5, and Wan 4.5 previewThis is the most exciting space in AI week-to-week for me right now. The progress is visible. Literally.Moondream-3 Preview - Interview with co-founders Via & JayWhile I've already reported on Moondream-3 in the last weeks newsletter, this week we got the pleasure of hosting Vik Korrapati and Jay Allen the co-founders of MoonDream to tell us all about it. Tune in for that conversation on the pod starting at 00:33:00Wan open sourced Wan 2.2 Animate (aka “Wan Animate”): motion transfer and lip sync Tongyi's Wan team shipped an open-source release that the community quickly dubbed “Wanimate.” It's a character-swap/motion transfer system: provide a single image for a character and a reference video (your own motion), and it maps your movement onto the character with surprisingly strong hair/cloth dynamics and lip sync. If you've used runway's Act One, you'll recognize the vibe—except this is open, and the fidelity is rising fast.The practical uses are broader than “make me a deepfake.” Think onboarding presenters with perfect backgrounds, branded avatars that reliably say what you need, or precise action blocking without guessing at how an AI will move your subject. You act it; it follows.Kling 2.5 Turbo: cinematic motion, cheaper and with audioKling quietly rolled out a 2.5 Turbo tier that's 30% cheaper and finally brings audio into the loop for more complete clips. Prompts adhere better, physics look more coherent (acrobatics stop breaking bones across frames), and the cinematic look has moved from “YouTube short” to “film-school final.” They seeded access to creators and re-shared the strongest results; the consistency is the headline. (Source X: @StevieMac03)I've chatted with my kiddos today over facetime, and they were building minecraft creepers. I took a screenshot, sent to Nano Banana to make their creepers into actual minecraft ones, and then with Kling, Animated the explosions for them. They LOVED it! Animations were clear, while VEO refused for me to even upload their images, Kling didn't care hahaWan 4.5 preview: native multimodality, 1080p 10s, and lip-synced speechWan also teased a 4.5 preview that unifies understanding and generation across text, image, video, and audio. The eye-catching bit: generate a 1080p, 10-second clip with synced speech from just a script. Or supply your own audio and have it lip-sync the shot. I ran my usual “interview a polar bear dressed like me” test and got one of the better results I've seen from any model. We're not at “dialogue scene” quality, but “talking character shot” is getting… good. The generation of audio (not only text + lipsync) is one of the best ones besides VEO, it's really great to see how strongly this improves, sad that this wasn't open sourced! And apparently it supports “draw text to animate” (Source: X) Voice & AudioSuno V5: we've entered the “I can't tell anymore” eraSuno calls V5 a redefinition of audio quality. I'll be honest, I'm at the edge of my subjective hearing on this. I've caught myself listening to Suno streams instead of Spotify and forgetting anything is synthetic. The vocals feel more human, the mixes cleaner, and the remastering path (including upgrading V4 tracks) is useful. The last 10% to “you fooled a producer” is going to be long, but the distance between V4 and V5 already makes me feel like I should re-cut our ThursdAI opener.MiMI Audio: a small omni-chat demo that hints at the floorWe tried a MiMI Audio demo live—a 7B-ish model with speech in/out. It was responsive but stumbled on singing and natural prosody. I'm leaving it in here because it's a good reminder that the open floor for “real-time voice” is rising quickly even for small models. And the moment you pipe a stronger text brain behind a capable, native speech front-end, the UX leap is immediate.Ok, another DENSE week that finishes up Shiptember, tons of open source, Qwen (Tongyi) shines, and video is getting so so good. This is all converging folks, and honestly, I'm just happy to be along for the ride! This week was also Rosh Hashanah, which is the Jewish new year, and I've shared on the pod that I've found my X post from 3 years ago, using the state of the art AI models of the time. WHAT A DIFFERENCE 3 years make, just take a look, I had to scale down the 4K one from this year just to fit into the pic! Shana Tova to everyone who's reading this, and we'll see you next week

Papo Solar
MLPE e AFCI: nova era em sistemas fotovoltaicos

Papo Solar

Play Episode Listen Later Sep 24, 2025 102:58


No episódio 149 do Podcast Papo Solar, recebemos dois especialistas da Advansol, Fernando Domingos, Country Manager, e Vinicius Goulart, Gerente técnico de vendas, para falar sobre a evolução da segurança nos sistemas fotovoltaicos do Brasil. Na conversa, os profissionais apresentaram de que forma as novas normas de segurança estimulam a adesão a novas tecnologias, como AFCI, Solução V3.0 e MLPE, que vem sendo um divisor de águas na geração solar. Com vasta experiência no setor de energia solar, Fernando e Vinicius destacaram os riscos e desafios dos sistemas fotovoltaicos tradicionais, o papel e as vantagens das novas tecnologias no mercado, apresentando um panorama claro sobre como segurança, eficiência e inteligência podem moldar o futuro do setor.Assista também Como funciona o Rapid Shutdown? Evite incêndios e entenda as novas regras no Brasilhttps://youtube.com/live/YTHjSwXWmFkCursos de energia solar gratuitos: https://cursos.canalsolar.com.br/aplicativo-canal-solar/#EnergiaSolar #MLPE #AFCI #SegurançaFotovoltaica #Inovação

Nächster Halt
#126: Verbesserung der Sicherheit im ÖPNV durch Technologie

Nächster Halt

Play Episode Listen Later Aug 27, 2025 25:00


Die Attraktivität des ÖPNV hängt maßgeblich von der persönlichen Sicherheit von Fahrgästen und des Personals ab. Doch sie berührt noch viele weitere Bereiche wie dem Schutz von Fahrzeugen oder der Infrastruktur. In dieser Folge sprechen wir mit Dr. Roxana Hess, Team Manager Research bei der INIT Group wie Technologie dazu beitragen kann, den ÖPNV noch sicherer zu machen – und gleichzeitig das Vertrauen bei Fahrgästen stärken kann. Im Mittelpunkt stehen Technologien wie Videoanalyse, Sensorik und Künstliche Intelligenz, die nicht nur Risiken verringern, sondern auch ein spürbar besseres Sicherheitsgefühl für alle schaffen können. Jetzt reinhören! Shownotes: Digitale Lerneinheit „Security im ÖPV“: https://www.vdv-akademie.de/lernprodukte/gesundheit-und-sicherheit/security-im-oepv/ VDV-Position „Sicherheit (Security) im öffentlichen Personenverkehr | Fakten, Mythen und Handlungsbedarf von Branche und Politik“: https://www.vdv.de/positionensuche.aspx?id=d27a4250-e555-440e-8fa5-9139e06d686b&mode=detail&coriander=V3_448be8f4-4279-446e-8065-1095d31952b7 VDV-Pressemitteilung „Sicherheit ist Versprechen: für Fahrgäste und Beschäftigte“: https://www.vdv.de/presse.aspx?id=419df165-08e7-4e08-b42e-09d8590f0d76&mode=detail&coriander=V3_d8773cda-743a-e00b-9f34-0d5287da7d30 Folge direkt herunterladen

The Defiant
The $10 Trillion Question: Crypto's Biggest Cycle Yet with Ellio Trades

The Defiant

Play Episode Listen Later Aug 25, 2025 45:53


In this episode of The Defiant Podcast, we sit down with Ellio Trades, Co-Founder of BlackHole, the fastest-growing decentralized exchange on Avalanche. Ellio shares his insights on the current state of the crypto market, including whether we're entering a new altcoin season or witnessing a slower, more sustainable growth cycle. He dives deep into the maturation of the crypto space, the role of institutional investors, and why this cycle could be the most significant in crypto history.Ellio also unpacks the innovative mechanics behind BlackHole, from its V3.3 DEX model to its focus on creating deep liquidity and sustainable revenue. He explains how BlackHole is reshaping the DeFi landscape and why Avalanche was the perfect ecosystem for its launch. Watch now to gain valuable insights on the future of blockchain, decentralized finance, and the next wave of crypto adoption.Chapters00:00 The $10 Trillion Question: Is Altcoin Season Here?00:04 Traditional Cycles vs. A New Crypto Paradigm00:42 The Maturation of DeFi and Institutional Adoption01:30 Introducing Ellio Trades and BlackHole02:33 Bitcoin, Ethereum, and the Shift to Regulated Markets03:45 Why Ethereum's Value Proposition is Finally Clicking05:18 The Role of Wall Street in Crypto's Next Big Cycle07:32 Balancing Supercycle Narratives with Traditional Cycles10:33 DeFi's Undervalued Potential and Institutional Interest12:23 Stablecoins and the Future of Blockchain Applications16:04 Why BlackHole Chose Avalanche for Its Launch18:42 BlackHole's Explosive Growth and Unique Features23:03 How BlackHole's V3.3 Model Creates Deep Liquidity30:28 Lessons from Past Projects and BlackHole's Innovations35:02 The Long-Term Vision for BlackHole and DeFi

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

A daily Chronicle of AI Innovations August 22nd 2025:Listen at https://podcasts.apple.com/us/podcast/ai-daily-rundown-aug-22-2025-google-analyzes-geminis/id1684415169?i=1000723151588Hello AI Unraveled Listeners,In today's AI News,

Kitesurf365 | a podcast for kitesurfers
EXCLUSIVE Megaloop List Revealed + Slingshot Machine V3 | The Megapod

Kitesurf365 | a podcast for kitesurfers

Play Episode Listen Later Aug 21, 2025 45:13


  On today's episode, we discuss the last six men selected from the video entries, and we hear from the six ladies who will compete in their first Red Bull Megaloop. We call Yücel Paralik and congratulate him on selection. We also hear from Jeremy Burlando regarding the Slingshot Machine V3, and Colin and Adrian talk about their WOO teams.   Slingshot machine V3   https://slingshotsports.com/collections/kites/products/machine-v3    Portrait:   https://portraitkite.com     https://www.fantasykite.com   Woo Sports:   https://woosports.com   Follow us:   https://www.instagram.com/portraitkite/   https://www.instagram.com/kitesurf365/

Henry Lake
What is "The Playbook" and why have they partnered with V3 Sports?

Henry Lake

Play Episode Listen Later Aug 19, 2025 12:03


Henry talks with the Founder and CEO of The Playbook, Dr. T M Robinson-Mosley about the reason for the app, their partnership with national sports leagues, why they wanted to be a part of the V3, their "21 Day Check In Challenge,"and more.

Our Sunday Messages
Sam Macdonald - August 17, 2025

Our Sunday Messages

Play Episode Listen Later Aug 17, 2025 45:45


Sam Macdonald - August 17th, 2025 - Fruit of the Spirit Who can name the fruits of the Spirit in order? ● Love ● Joy ● Peace ● Patience ● Kindness ● Goodness ● Faithfulness ● Gentleness ● Self-Control What does “Fruit of the Spirit” Mean? - God's attributes as shown in His followers - Actions that God has taken toward us - The evidence of God's work in someone's life John 15:1-5 - Jesus Speaks of the Fruit - What is “abiding in Christ”? - Galatians 2:20 “I have been crucified with Christ. It is no longer I who live, but Christ who lives in me. And the life I now live in the flesh I live by faith in the Son of God, who loved me and gave himself for me.” - “Whoever abides in Me and I in him, he it is that bears much fruit. - Christ's followers will bear much fruit - Followers of Christ - be encouraged that Christ does and will work in you - Non-Believers (or not-sure) - now is the time to repent and surrender to Christ. ---- Love ● The GOAT of all fruits (I Corinthians 13) ● All fruits after this are in light of love. ● The specific greek word used in Galatians 5:22 is “Agape” (total, sacrificial love) ● Love is complete commitment to something or someone. It is an act of the will. Joy ● Celebration and praise of our God and Saviour. ● Transcends circumstance. ● It is our strength (Neh 8:10) ● John 15:11 says that joy belongs to Christ and he desires for our joy to be full. Peace ● Contentment and completeness in Christ. ● John 14:27 - Peace comes from Christ ● Phil 4:6-7 Inward peace. passes understanding. ● Rom 12:18 Outward peace as a testimony of God's work in us Patience ● Perseverance ● Long-suffering ● Waiting on The Lord (Isaiah 40) Kindness ● Love put into tangible action. ● Love is an act of the will, Kindness is an act of the hands and feet. ● Where the rubber hits the road Goodness ● Dictionary.com - Goodness, morality, virtue refer to qualities of character or conduct that entitle the possessor to approval (of God) and esteem ● Desiring to please God. ● Living by God's terms and morals. ● Opposing evil. Faithfulness ● Unchangingness. ● Loyalty ● Truthfulness ● Not deviating from God's truth Gentleness ● Restraint of power ● Not weak but meek ● Mercy (withholding deserved punishment) ● Grace (granting undeserved kindness) Self-Control ● Opposite of impulsive ● Restraint of desires ● Resistance to temptation Love ● Romans 5:8 ● John 3:16 ● Too many to name :) Joy ● Hebrews 12:2 ● Psalm 104:31 ● John 15:11 Peace ● Romans 5:1 ● 2 Thess. 3:16 ● Phil 4:6-7 Patience ● Exodus 34:6 ● Numbers 14:18 ● 2 Peter 3:9 Kindness ● Romans 5:8 (Demonstrates) ● Ephesians 2:7 Goodness ● Psalm 34:8 ● Exodus 34:6 Faithfulness ● Lamantations 3:23 ● Psalm 36:5 ● Hebrews 13:8 Gentleness ● Matt 11:29 ● 2 Corinthians 10:1 Self-Control ● Hebrews 4:15 ● Matt 4 ● Luke 4 Application ● Ephesians 2:5-10 ○ We are created for good works in Christ. ● Romans 8:1-8 ○ We are set free through the law of the Spirit from the law of sin and death ○ V3-4 The righteous requirement of the law is fulfilled in those who walk according to the Spirit ● Jeremiah 31:31-34 ○ God's law is written on our hearts through the Spirit in the new covenant

Ethereum Daily - Crypto News Briefing
Tornado Cash Trial Ends With Mixed Verdict

Ethereum Daily - Crypto News Briefing

Play Episode Listen Later Aug 6, 2025 4:03


The US vs Storm trial ends with a mixed verdict. Aave releases the V3 developer toolkit. Pendle launches a funding rate trading protocol. And Cosmos Health announces its $300 million ETH reserve strategy. Read more: https://ethdaily.io/756 Disclaimer: Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.

feeder sound
premiere: Cher - 34 [Art House]

feeder sound

Play Episode Listen Later Jul 28, 2025 13:08


Under his moniker Cher, Romanian artist Traian Cherecheș presents C1M3R4_Analog cut_V3 on his freshly founded label Art House, an inspiring collection of six tracks that explore the essence of minimal music. Read more @ feeder.ro/2025/07/28/cher-c1m3r4-analog-cut-v3

Bird Camp
Field Armor, a new and improved dog vest, and camp talk with Jeff

Bird Camp

Play Episode Listen Later Jul 16, 2025 59:52


Jeff wanted to get the word out about the new V3 dog vest and catch up on this falls plans. Of course there was plenty of BirdCamp shop talk as well.The GoFundMe link mentioned is here. https://www.gofundme.com/f/7zmz6-support-rachels-battle-against-breast-cancer/cl/s?attribution_id=sl:71e0677b-3f2b-4b3f-884e-c3adb3d17873&lang=en_US&utm_campaign=fp_sharesheet&utm_content=amp13_t1-amp14_t2-amp15_t1&utm_medium=customer&utm_source=copy_link&v=amp14_t2Thank you to our sponsorsAspen Thicket Grouse Dogs aspenthicketgrousedogs.comPine Hill Gun Dogs phkscllc@gmail.comSecond Chance Bird dogs Field Armor fieldarmorusa.comWild Card Outfitters and Guide Service wildcardoutdoors.comPrairie ridge Farms prairieridgefarms.com

The Late Night Vision Show
Ep. 375 - AGM Rattler V3 TS50-640 **EXCLUSIVE REVIEW**

The Late Night Vision Show

Play Episode Listen Later Jul 10, 2025 39:08


In this episode, Jason and Hans dive into their review of the AGM Rattler V3 50-640 LRF. The V3 is the first compact Rattler model equipped with a built in LRF. And not only does it have 1,000  meter LRF, it's built into the lens so it is tucked away and adds no bulk or weight to the scope! The V3 also includes a ballistic calculator, a 640×512, sub-15 mK thermal sensor and huge 2560×2560 OLED display.  Tune in to find out if this scope is worth the price tag for serious night hunters and learn how it compares in value to other similar scopes. 

Faith Bible Church Menifee Sermon Podcast

1 Corinthians 11:27–34 (ESV) — 27 Whoever, therefore, eats the bread or drinks the cup of the Lord in an unworthy manner will be guilty concerning the body and blood of the Lord. 28 Let a person examine himself, then, and so eat of the bread and drink of the cup. 29 For anyone who eats and drinks without discerning the body eats and drinks judgment on himself. 30 That is why many of you are weak and ill, and some have died. 31 But if we judged ourselves truly, we would not be judged. 32 But when we are judged by the Lord, we are disciplined so thatwe may not be condemned along with the world. 33 So then, my brothers, when you come together to eat, wait for one another— 34 if anyone is hungry, let him eat at home—so that when you come together it will not be for judgment. About the other things I will give directions when I come. IN COMMUNION YOU ARE TO EXAMINE, REMEMBER,PROCLAIM AND ANTICIPATE THE GOSPEL OF CHRIST!  THE NECESSITY OF PERSONAL EXAMINATION v27-30 a)    TheUnworthy Manner-      “Now, if we would catch the meaning of this declaration, we must know what it is to eat unworthily. Some restrict it to the Corinthians, and the abuse that had crept in among them, but I am of opinion that Paul here, according to his usual manner, passed on from the particular case to a general statement, or from one instance to an entire class. There was one fault that prevailed among the Corinthians. He takes occasion from this to speak of every kind of faulty administration or reception of the Supper. “God,” says he, “will not allow this sacrament to be profaned without punishing it severely.” John Calvin Commentaries on the Epistles of Paul the Apostle to the Corinthians, vol. 1 pg 385.b)    The Worthy Manner -        (v24)Thanks, (v24-25) Remembrance, (v26) Proclamation and Anticipation (Ephesians 4:1-3)  c)     The General Principles-       (v27-29) Personal Examination (whoever… let a person… himself… then eat and drink…) -        (v27, 29) Guilt that leads to judgement or participation without meditation. (YERPA)d)    The Specific Judgement -        (v30)  Some are weak, ill and died.   PERSONAL EXAMINATION UNDER THE PATERNAL LOVE OF THE FATHER v31-32  General Principles:  a)    Your Freedom: Instruction In The Gospel (v31)a.    A clean conscience (Hebrews 10:22) b.     Full confession (1 John 1:7-9) c.      A true humility (1 timothy 1:12-17) d.     Informed progress (Ephesians 4:1-3)      b)    His Faithfulness: Discipline In The Gospel (v32)Hebrews 12:3-14a.     V3-4 Considering Christ b.     V5-10 Remember the love of the Fatherc.      V11 Discipline brings perishing pain,  and progressive paternal peace and perfection… d.     V12 -14 Therefore – take action… (12) up in hope, (13) forward in healing, (14) outward in holiness   PRACTICAL CONCLUSION OF EXAMINATION   v33-34 Specific commands To Corinth and practical applicationfor us: a)     (v33) Hopeful and Patient to serve others  b)    (v34) Humble and Prepared to serve others -        More specifics in patience

LAB: The Podcast
LAB the Podcast with Riley Cooper: “Behind the Scenes”

LAB: The Podcast

Play Episode Listen Later Jun 20, 2025 62:44


He's usually the one making others shine but today we flip the script. On this episode of LAB the Podcast, we sit down with Tampa native and Podcast Producer, Riley Cooper. Riley opens up about his hometown roots, his evolving walk with God, and the winding path that brought him to V3. We also talk about the deep impact the Wayfarer Podcast had on his life and how it became a turning point in his story. You won't want to miss this behind-the-scenes look at one of our own.Thank you for joining the conversation and embodying the life and beauty of the gospel. Don't forget to like, subscribe, and follow LAB the Podcast. Support / Sponsor@VUVIVOV3 | YouTube@labthepodcast | @vuvivo_v3 | @zachjelliott | @wayfarerpodcast Support the show

AI For Humans
OpenAI Prepares For Artificial Super Intelligence, Apple's Major AI Fail & New Insane 1X Robots

AI For Humans

Play Episode Listen Later Jun 12, 2025 45:51


OpenAI's Sam Altman drops o3-Pro & sees “The Gentle Singularity”, Ilya Sutskever prepares for super intelligence & Mark Zuckerberg is spending MEGA bucks on AI talent. WHAT GIVES? All of the major AI companies are not only preparing for AGI but for true “super intelligence” which is on the way, at least according to *them*. What does that mean for us? And how do we exactly prepare for it? Also, Apple's WWDC is a big AI letdown, Eleven Labs' new V3 model is AMAZING, Midjourney got sued and, oh yeah, those weird 1X Robotics androids are back and running through grassy fields. WHAT WILL HAPPEN WHEN AI IS SMARTER THAN US? ACTUALLY, IT PROB ALREADY IS. #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/   // Show Links /?   Ilya Sutsketver's Commencement Speech About AI https://youtu.be/zuZ2zaotrJs?si=U_vHVpFEyTRMWSNa Apple's Cringe Genmoji Video https://x.com/altryne/status/1932127782232076560 OpenAI's Sam Altman On Superintelligence “The Gentle Singularity” https://blog.samaltman.com/the-gentle-singularity The Secret Mathematicians Meeting Where The Tried To Outsmart AI https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/ O3-Pro Released  https://x.com/sama/status/1932532561080975797 The most expensive o3-Pro Hello https://x.com/Yuchenj_UW/status/1932544842405720540 Eleven Labs v3  https://x.com/elevenlabsio/status/1930689774278570003 o3 regular drops in price by 80% - cheaper than GPT-4o  https://x.com/edwinarbus/status/1932534578469654552 Open weights model taking a ‘little bit more time' https://x.com/sama/status/1932573231199707168 Meta Buys 49% of Scale AI + Alexandr Wang Comes In-House https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html Apple Underwhelms at WWDC Re AI https://www.cnbc.com/2025/06/09/apple-wwdc-underwhelms-on-ai-software-biggest-facelift-in-decade-.html BusinessWeek's Mark Gurman on WWDC https://x.com/markgurman/status/1932145561919991843 Joanna Stern Grills Apple https://youtu.be/NTLk53h7u_k?si=AvnxM9wefXl2Nyjn Midjourney Sued by Disney & Comcast https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/ 1x Robotic's Redwood https://x.com/1x_tech/status/1932474830840082498 https://www.1x.tech/discover/redwood-ai Redwood Mobility Video https://youtu.be/Dp6sqx9BGZs?si=UC09VxSx-PK77q-- Amazon Testing Humanoid Robots To Deliver Packages https://www.theinformation.com/articles/amazon-prepares-test-humanoid-robots-delivering-packages?rc=c3oojq&shared=736391f5cd5d0123 Autonomous Drone Beats Pilots For the First Time https://x.com/AISafetyMemes/status/1932465150151270644 Random GPT-4o Image Gen Pic https://www.reddit.com/r/ChatGPT/comments/1l7nnnz/what_do_you_get/?share_id=yWRAFxq3IMm9qBYxf-ZqR&utm_content=4&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1 https://x.com/AIForHumansShow/status/1932441561843093513 Jon Finger's Shoes to Cars With Luma's Modify Video https://x.com/mrjonfinger/status/1932529584442069392

In-Ear Insights from Trust Insights
In-Ear Insights: How Generative AI Reasoning Models Work

In-Ear Insights from Trust Insights

Play Episode Listen Later Jun 11, 2025


In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the Apple AI paper and critical lessons for effective prompting, plus a deep dive into reasoning models. You’ll learn what reasoning models are and why they sometimes struggle with complex tasks, especially when dealing with contradictory information. You’ll discover crucial insights about AI’s “stateless” nature, which means every prompt starts fresh and can lead to models getting confused. You’ll gain practical strategies for effective prompting, like starting new chats for different tasks and removing irrelevant information to improve AI output. You’ll understand why treating AI like a focused, smart intern will help you get the best results from your generative AI tools. Tune in to learn how to master your AI interactions! Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-how-generative-ai-reasoning-models-work.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week’s In Ear Insights, there is so much in the AI world to talk about. One of the things that came out recently that I think is worth discussing, because we can talk about the basics of good prompting as part of it, Katie, is a paper from Apple. Apple’s AI efforts themselves have stalled a bit, showing that reasoning models, when given very complex puzzles—logic-based puzzles or spatial-based puzzles, like moving blocks from stack to stack and getting them in the correct order—hit a wall after a while and then just collapse and can’t do anything. So, the interpretation of the paper is that there are limits to what reasoning models can do and that they can kind of confuse themselves. On LinkedIn and social media and stuff, Christopher S. Penn – 00:52 Of course, people have taken this to the illogical extreme, saying artificial intelligence is stupid, nobody should use it, or artificial general intelligence will never happen. None of that is within the paper. Apple was looking at a very specific, narrow band of reasoning, called deductive reasoning. So what I thought we’d talk about today is the paper itself to a degree—not a ton about it—and then what lessons we can learn from it that will make our own AI practices better. So to start off, when we talk about reasoning, Katie, particularly you as our human expert, what does reasoning mean to the human? Katie Robbert – 01:35 When I think, if you say, “Can you give me a reasonable answer?” or “What is your reason?” Thinking about the different ways that the word is casually thrown around for humans. The way that I think about it is, if you’re looking for a reasonable answer to something, then that means that you are putting the expectation on me that I have done some kind of due diligence and I have gathered some kind of data to then say, “This is the response that I’m going to give you, and here are the justifications as to why.” So I have some sort of a data-backed thinking in terms of why I’ve given you that information. When I think about a reasoning model, Katie Robbert – 02:24 Now, I am not the AI expert on the team, so this is just my, I’ll call it, amateurish understanding of these things. So, a reasoning model, I would imagine, is similar in that you give it a task and it’s, “Okay, I’m going to go ahead and see what I have in my bank of information for this task that you’re asking me about, and then I’m going to do my best to complete the task.” When I hear that there are limitations to reasoning models, I guess my first question for you, Chris, is if these are logic problems—complete this puzzle or unfurl this ball of yarn, kind of a thing, a complex thing that takes some focus. Katie Robbert – 03:13 It’s not that AI can’t do this; computers can do those things. So, I guess what I’m trying to ask is, why can’t these reasoning models do it if computers in general can do those things? Christopher S. Penn – 03:32 So you hit on a really important point. The tasks that are in this reasoning evaluation are deterministic tasks. There’s a right and wrong answer, and what they’re supposed to test is a model’s ability to think through. Can it get to that? So a reasoning model—I think this is a really great opportunity to discuss this. And for those who are listening, this will be available on our YouTube channel. A reasoning model is different from a regular model in that it thinks things through in sort of a first draft. So I’m showing DeepSeq. There’s a button here called DeepThink, which switches models from V3, which is a non-reasoning model, to a reasoning model. So watch what happens. I’m going to type in a very simple question: “Which came first, the chicken or the egg?” Katie Robbert – 04:22 And I like how you think that’s a simple question, but that’s been sort of the perplexing question for as long as humans have existed. Christopher S. Penn – 04:32 And what you see here is this little thinking box. This thinking box is the model attempting to solve the question first in a rough draft. And then, if I had closed up, it would say, “Here is the answer.” So, a reasoning model is essentially—we call it, I call it, a hidden first-draft model—where it tries to do a first draft, evaluates its own first draft, and then produces an answer. That’s really all it is. I mean, yes, there’s some mathematics going on behind the scenes that are probably not of use to folks listening to or watching the podcast. But at its core, this is what a reasoning model does. Christopher S. Penn – 05:11 Now, if I were to take the exact same prompt, start a new chat here, and instead of turning off the deep think, what you will see is that thinking box will no longer appear. It will just try to solve it as is. In OpenAI’s ecosystem—the ChatGPT ecosystem—when you pull down that drop-down of the 82 different models that you have a choice from, there are ones that are called non-reasoning models: GPT4O, GPT4.1. And then there are the reasoning models: 0304 mini, 04 mini high, etc. OpenAI has done a great job of making it as difficult as possible to understand which model you should use. But that’s reasoning versus non-reasoning. Google, very interestingly, has moved all of their models to reasoning. Christopher S. Penn – 05:58 So, no matter what version of Gemini you’re using, it is a reasoning model because Google’s opinion is that it creates a better response. So, Apple was specifically testing reasoning models because in most tests—if I go to one of my favorite websites, ArtificialAnalysis.ai, which sort of does a nice roundup of smart models—you’ll notice that reasoning models are here. And if you want to check this out and you’re listening, ArtificialAnalysis.ai is a great benchmark set that wraps up all the other benchmarks together. You can see that the leaderboards for all the major thinking tests are all reasoning models, because that ability for a model to talk things out by itself—really having a conversation with self—leads to much better results. This applies even for something as simple as a blog post, like, “Hey, let’s write a blog post about B2B marketing.” Christopher S. Penn – 06:49 Using a reasoning model will let the model basically do its own first draft, critique itself, and then produce a better result. So that’s what a reasoning model is, and why they’re so important. Katie Robbert – 07:02 But that didn’t really answer my question, though. I mean, I guess maybe it did. And I think this is where someone like me, who isn’t as technically inclined or isn’t in the weeds with this, is struggling to understand. So I understand what you’re saying in terms of what a reasoning model is. A reasoning model, for all intents and purposes, is basically a model that’s going to talk through its responses. I’ve seen this happen in Google Gemini. When I use it, it’s, “Okay, let me see. You’re asking me to do this. Let me see what I have in the memory banks. Do I have enough information? Let me go ahead and give it a shot to answer the question.” That’s basically the synopsis of what you’re going to get in a reasoning model. Katie Robbert – 07:48 But if computers—forget AI for a second—if calculations in general can solve those logic problems that are yes or no, very black and white, deterministic, as you’re saying, why wouldn’t a reasoning model be able to solve a puzzle that only has one answer? Christopher S. Penn – 08:09 For the same reason they can’t do math, because the type of puzzle they’re doing is a spatial reasoning puzzle which requires—it does have a right answer—but generative AI can’t actually think. It is a probabilistic model that predicts based on patterns it’s seen. It’s a pattern-matching model. It’s the world’s most complex next-word prediction machine. And just like mathematics, predicting, working out a spatial reasoning puzzle is not a word problem. You can’t talk it out. You have to be able to visualize in your head, map it—moving things from stack to stack—and then coming up with the right answers. Humans can do this because we have many different kinds of reasoning: spatial reasoning, musical reasoning, speech reasoning, writing reasoning, deductive and inductive and abductive reasoning. Christopher S. Penn – 09:03 And this particular test was testing two of those kinds of reasoning, one of which models can’t do because it’s saying, “Okay, I want a blender to fry my steak.” No matter how hard you try, that blender is never going to pan-fry a steak like a cast iron pan will. The model simply can’t do it. In the same way, it can’t do math. It tries to predict patterns based on what’s been trained on. But if you’ve come up with a novel test that the model has never seen before and is not in its training data, it cannot—it literally cannot—repeat that task because it is outside the domain of language, which is what it’s predicting on. Christopher S. Penn – 09:42 So it’s a deterministic task, but it’s a deterministic task outside of what the model can actually do and has never seen before. Katie Robbert – 09:50 So then, if I am following correctly—which, I’ll be honest, this is a hard one for me to follow the thread of thinking on—if Apple published a paper that large language models can’t do this theoretically, I mean, perhaps my assumption is incorrect. I would think that the minds at Apple would be smarter than collectively, Chris, you and I, and would know this information—that was the wrong task to match with a reasoning model. Therefore, let’s not publish a paper about it. That’s like saying, “I’m going to publish a headline saying that Katie can’t run a five-minute mile; therefore, she’s going to die tomorrow, she’s out of shape.” No, I can’t run a five-minute mile. That’s a fact. I’m not a runner. I’m not physically built for it. Katie Robbert – 10:45 But now you’re publishing some kind of information about it that’s completely fake and getting people in the running industry all kinds of hyped up about it. It’s irresponsible reporting. So, I guess that’s sort of my other question. If the big minds at Apple, who understand AI better than I ever hope to, know that this is the wrong task paired with the wrong model, why are they getting us all worked up about this thing by publishing a paper on it that sounds like it’s totally incorrect? Christopher S. Penn – 11:21 There are some very cynical hot takes on this, mainly that Apple’s own AI implementation was botched so badly that they look like a bunch of losers. We’ll leave that speculation to the speculators on LinkedIn. Fundamentally, if you read the paper—particularly the abstract—one of the things they were trying to test is, “Is it true?” They did not have proof that models couldn’t do this. Even though, yes, if you know language models, you would know this task is not well suited to it in the same way that they’re really not suited to geography. Ask them what the five nearest cities to Boston are, show them a map. They cannot figure that out in the same way that you and I use actual spatial reasoning. Christopher S. Penn – 12:03 They’re going to use other forms of essentially tokenization and prediction to try and get there. But it’s not the same and it won’t give the same answers that you or I will. It’s one of those areas where, yeah, these models are very sophisticated and have a ton of capabilities that you and I don’t have. But this particular test was on something that they can’t do. That’s asking them to do complex math. They cannot do it because it’s not within the capabilities. Katie Robbert – 12:31 But I guess that’s what I don’t understand. If Apple’s reputation aside, if the data scientists at that company knew—they already knew going in—it seems like a big fat waste of time because you already know the answer. You can position it, however, it’s scientific, it’s a hypothesis. We wanted to prove it wasn’t true. Okay, we know it’s not true. Why publish a paper on it and get people all riled up? If it is a PR play to try to save face, to be, “Well, it’s not our implementation that’s bad, it’s AI in general that’s poorly constructed.” Because I would imagine—again, this is a very naive perspective on it. Katie Robbert – 13:15 I don’t know if Apple was trying to create their own or if they were building on top of an existing model and their implementation and integration didn’t work. Therefore, now they’re trying to crap all over all of the other model makers. It seems like a big fat waste of time. When I—if I was the one who was looking at the budget—I’m, “Why do we publish that paper?” We already knew the answer. That was a waste of time and resources. What are we doing? I’m genuinely, again, maybe naive. I’m genuinely confused by this whole thing as to why it exists in the first place. Christopher S. Penn – 13:53 And we don’t have answers. No one from Apple has given us any. However, what I think is useful here for those of us who are working with AI every day is some of the lessons that we can learn from the paper. Number one: the paper, by the way, did not explain particularly well why it thinks models collapsed. It actually did, I think, a very poor job of that. If you’ve worked with generative AI models—particularly local models, which are models that you run on your computer—you might have a better idea of what happened, that these models just collapsed on these reasoning tasks. And it all comes down to one fundamental thing, which is: every time you have an interaction with an AI model, these models are called stateless. They remember nothing. They remember absolutely nothing. Christopher S. Penn – 14:44 So every time you prompt a model, it’s starting over from scratch. I’ll give you an example. We’ll start here. We’ll say, “What’s the best way to cook a steak?” Very simple question. And it’s going to spit out a bunch of text behind the scenes. And I’m showing my screen here for those who are listening. You can see the actual prompt appearing in the text, and then it is generating lots of answers. I’m going to stop that there just for a moment. And now I’m going to ask the same question: “Which came first, the chicken or the egg?” Christopher S. Penn – 15:34 The history of the steak question is also part of the prompt. So, I’ve changed conversation. You and I, in a chat or a text—group text, whatever—we would just look at the most recent interactions. AI doesn’t do that. It takes into account everything that is in the conversation. So, the reason why these models collapsed on these tasks is because they were trying to solve it. And when they’re thinking aloud, remember that first draft we showed? All of the first draft language becomes part of the next prompt. So if I said to you, Katie, “Let me give you some directions on how to get to my house.” First, you’re gonna take a right, then you take a left, and then you’re gonna go straight for two miles, and take a right, and then. Christopher S. Penn – 16:12 Oh, wait, no—actually, no, there’s a gas station. Left. No, take a left there. No, take a right there, and then go another two miles. If I give you those instructions, which are full of all these back twists and turns and contradictions, you’re, “Dude, I’m not coming over.” Katie Robbert – 16:26 Yeah, I’m not leaving my house for that. Christopher S. Penn – 16:29 Exactly. Katie Robbert – 16:29 Absolutely not. Christopher S. Penn – 16:31 Absolutely. And that’s what happens when these reasoning models try to reason things out. They fill up their chat with so many contradicting answers as they try to solve the problem that on the next turn, guess what? They have to reprocess everything they’ve talked about. And so they just get lost. Because they’re reading the whole conversation every time as though it was a new conversation. They’re, “I don’t know what’s going on.” You said, “Go left,” but they said, “Go right.” And so they get lost. So here’s the key thing to remember when you’re working with any generative AI tool: you want to keep as much relevant stuff in the conversation as possible and remove or eliminate irrelevant stuff. Christopher S. Penn – 17:16 So it’s a really bad idea, for example, to have a chat where you’re saying, “Let’s write a blog post about B2B marketing.” And then say, “Oh, I need to come up with an ideal customer profile.” Because all the stuff that was in the first part about your B2B marketing blog post is now in the conversation about the ICP. And so you’re polluting it with a less relevant piece of text. So, there are a couple rules. Number one: try to keep each chat distinct to a specific task. I’m writing a blog post in the chat. Oh, I want to work on an ICP. Start a new chat. Start a new chat. And two: if you have a tool that allows you to do it, never say, “Forget what I said previously. And do this instead.” It doesn’t work. Instead, delete if you can, the stuff that was wrong so that it’s not in the conversation history anymore. Katie Robbert – 18:05 So, basically, you have to put blinders on your horse to keep it from getting distracted. Christopher S. Penn – 18:09 Exactly. Katie Robbert – 18:13 Why isn’t this more common knowledge in terms of how to use generative AI correctly or a reasoning model versus a non-reasoning model? I mean, again, I look at it from a perspective of someone who’s barely scratching the surface of keeping up with what’s happening, and it feels—I understand when people say it feels overwhelming. I feel like I’m falling behind. I get that because yes, there’s a lot that I can do and teach and educate about generative AI, but when you start to get into this kind of minutiae—if someone opened up their ChatGPT account and said, “Which model should I use?”—I would probably look like a deer in headlights. I’d be, “I don’t know.” I’d probably. Katie Robbert – 19:04 What I would probably do is buy myself some time and start with, “What’s the problem you’re trying to solve? What is it you’re trying to do?” while in the background, I’m Googling for it because I feel this changes so quickly that unless you’re a power user, you have no idea. It tells you at a basic level: “Good for writing, great for quick coding.” But O3 uses advanced reasoning. That doesn’t tell me what I need to know. O4 mini high—by the way, they need to get a brand specialist in there. Great at coding and visual learning. But GPT 4.1 is also great for coding. Christopher S. Penn – 19:56 Yes, of all the major providers, OpenAI is the most incoherent. Katie Robbert – 20:00 It’s making my eye twitch looking at this. And I’m, “I just want the model to interpret the really weird dream I had last night. Which one am I supposed to pick?” Christopher S. Penn – 20:10 Exactly. So, to your answer, why isn’t this more common? It’s because this is the experience almost everybody has with generative AI. What they don’t experience is this: where you’re looking at the underpinnings. You’ve opened up the hood, and you’re looking under the hood and going, “Oh, that’s what’s going on inside.” And because no one except for the nerds have this experience—which is the bare metal looking behind the scenes—you don’t understand the mechanism of why something works. And because of that, you don’t know how to tune it for maximum performance, and you don’t know these relatively straightforward concepts that are hidden because the tech providers, somewhat sensibly, have put away all the complexity that you might want to use to tune it. Christopher S. Penn – 21:06 They just want people to use it and not get overwhelmed by an interface that looks like a 747 cockpit. That oversimplification makes these tools harder to use to get great results out of, because you don’t know when you’re doing something that is running contrary to what the tool can actually do, like saying, “Forget previous instructions, do this now.” Yes, the reasoning models can try and accommodate that, but at the end of the day, it’s still in the chat, it’s still in the memory, which means that every time that you add a new line to the chat, it’s having to reprocess the entire thing. So, I understand from a user experience why they’ve oversimplified it, but they’ve also done an absolutely horrible job of documenting best practices. They’ve also done a horrible job of naming these things. Christopher S. Penn – 21:57 Ironically, of all those model names, O3 is the best model to use. Be, “What about 04? That’s a number higher.” No, it’s not as good. “Let’s use 4.” I saw somebody saying, “GPT 401 is a bigger number than 03.” So 4:1 is a better model. No, it’s not. Katie Robbert – 22:15 But that’s the thing. To someone who isn’t on the OpenAI team, we don’t know that. It’s giving me flashbacks and PTSD from when I used to manage a software development team, which I’ve talked about many times. And one of the unimportant, important arguments we used to have all the time was version numbers. So, every time we released a new version of the product we were building, we would do a version number along with release notes. And the release notes, for those who don’t know, were basically the quick: “Here’s what happened, here’s what’s new in this version.” And I gave them a very clear map of version numbers to use. Every time we do a release, the number would increase by whatever thing, so it would go sequentially. Katie Robbert – 23:11 What ended up happening, unsurprisingly, is that they didn’t listen to me and they released whatever number the software randomly kicked out. Where I was, “Okay, so version 1 is the CD-ROM. Version 2 is the desktop version. Versions 3 and 4 are the online versions that don’t have an additional software component. But yet, within those, okay, so CD-ROM, if it’s version one, okay, update version 1.2, and so on and so forth.” There was a whole reasoning to these number systems, and they were, “Okay, great, so version 0.05697Q.” And I was, “What does that even mean?” And they were, “Oh, well, that’s just what the system spit out.” I’m, “That’s not helpful.” And they weren’t thinking about it from the end user perspective, which is why I was there. Katie Robbert – 24:04 And to them that was a waste of time. They’re, “Oh, well, no one’s ever going to look at those version numbers. Nobody cares. They don’t need to understand them.” But what we’re seeing now is, yeah, people do. Now we need to understand what those model numbers mean. And so to a casual user—really, anyone, quite honestly—a bigger number means a newer model. Therefore, that must be the best one. That’s not an irrational way to be looking at those model numbers. So why are we the ones who are wrong? I’m getting very fired up about this because I’m frustrated, because they’re making it so hard for me to understand as a user. Therefore, I’m frustrated. And they are the ones who are making me feel like I’m falling behind even though I’m not. They’re just making it impossible to understand. Christopher S. Penn – 24:59 Yes. And that, because technical people are making products without consulting a product manager or UI/UX designer—literally anybody who can make a product accessible to the marketplace. A lot of these companies are just releasing bare metal engines and then expecting you to figure out the rest of the car. That’s fundamentally what’s happening. And that’s one of the reasons I think I wanted to talk through this stuff about the Apple paper today on the show. Because once we understand how reasoning models actually work—that they’re doing their own first drafts and the fundamental mechanisms behind the scenes—the reasoning model is not architecturally substantially different from a non-reasoning model. They’re all just word-prediction machines at the end of the day. Christopher S. Penn – 25:46 And so, if we take the four key lessons from this episode, these are the things that will help: delete irrelevant stuff whenever you can. Start over frequently. So, start a new chat frequently, do one task at a time, and then start a new chat. Don’t keep a long-running chat of everything. And there is no such thing as, “Pay no attention to the previous stuff,” because we all know it’s always in the conversation, and the whole thing is always being repeated. So if you follow those basic rules, plus in general, use a reasoning model unless you have a specific reason not to—because they’re generally better, which is what we saw with the ArtificialAnalysis.ai data—those five things will help you get better performance out of any AI tool. Katie Robbert – 26:38 Ironically, I feel the more AI evolves, the more you have to think about your interactions with humans. So, for example, if I’m talking to you, Chris, and I say, “Here are the five things I’m thinking about, but here’s the one thing I want you to focus on.” You’re, “What about the other four things?” Because maybe the other four things are of more interest to you than the one thing. And how often do we see this trope in movies where someone says, “Okay, there’s a guy over there.” “Don’t look. I said, “Don’t look.”” Don’t call attention to it if you don’t want someone to look at the thing. I feel more and more we are just—we need to know how to deal with humans. Katie Robbert – 27:22 Therefore, we can deal with AI because AI being built by humans is becoming easily distracted. So, don’t call attention to the shiny object and say, “Hey, see the shiny object right here? Don’t look at it.” What is the old, telling someone, “Don’t think of purple cows.” Christopher S. Penn – 27:41 Exactly. Katie Robbert – 27:41 And all. Christopher S. Penn – 27:42 You don’t think. Katie Robbert – 27:43 Yeah. That’s all I can think of now. And I’ve totally lost the plot of what you were actually talking about. If you don’t want your AI to be distracted, like you’re human, then don’t distract it. Put the blinders on. Christopher S. Penn – 27:57 Exactly. We say this, we’ve said this in our courses and our livestreams and podcasts and everything. Treat these things like the world’s smartest, most forgetful interns. Katie Robbert – 28:06 You would never easily distract it. Christopher S. Penn – 28:09 Yes. And an intern with ADHD. You would never give an intern 22 tasks at the same time. That’s just a recipe for disaster. You say, “Here’s the one task I want you to do. Here’s all the information you need to do it. I’m not going to give you anything that doesn’t relate to this task.” Go and do this task. And you will have success with the human and you will have success with the machine. Katie Robbert – 28:30 It’s like when I ask you to answer two questions and you only answer one, and I have to go back and re-ask the first question. It’s very much like dealing with people. In order to get good results, you have to meet the person where they are. So, if you’re getting frustrated with the other person, you need to look at what you’re doing and saying, “Am I overcomplicating it? Am I giving them more than they can handle?” And the same is true of machines. I think our expectation of what machines can do is wildly overestimated at this stage. Christopher S. Penn – 29:03 It definitely is. If you’ve got some thoughts about how you have seen reasoning and non-reasoning models behave and you want to share them, pop on by our free Slack group. Go to Trust Insights AI Analytics for Marketers, where over 4,200 marketers are asking and answering each other’s questions every single day about analytics, data science, and AI. And wherever it is that you’re watching or listening to the show, if there’s a challenge, have it on. Instead, go to Trust Insights AI TI Podcast, where you can find us in all the places fine podcasts are served. Thanks for tuning in and we’ll talk to you on the next one. Katie Robbert – 29:39 Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Katie Robbert – 30:32 Trust Insights also offers expert guidance on social media analytics, marketing technology, and Martech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMOs or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights Podcast, the Inbox Insights newsletter, the “So What?” Livestream webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations. Katie Robbert – 31:37 Data storytelling. This commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.

Choses à Savoir TECH
Musk a des plans démentiels pour SpaceX ?

Choses à Savoir TECH

Play Episode Listen Later Jun 2, 2025 2:24


Elon Musk l'assure : fini les distractions, l'heure est venue de se recentrer pleinement sur ses entreprises, et surtout sur SpaceX. Officiellement en retrait des questions gouvernementales, le milliardaire américain veut accélérer la cadence et faire entrer sa société — déjà valorisée à près de 350 milliards de dollars — dans une nouvelle ère.Première priorité : Starlink, la constellation de satellites internet. Musk promet une montée en puissance spectaculaire avec la fabrication de 5 000 satellites V3 par an, puis 10 000 à terme. Leurs performances seront démultipliées : des vitesses de téléchargement jusqu'à 1 térabit par seconde, soit dix fois plus que les modèles actuels. Mais qui dit puissance dit taille : chaque satellite aura la taille… d'un Boeing 737. Impossible à lancer avec une Falcon 9 : seul Starship, la méga-fusée maison, pourra les mettre en orbite. Et c'est justement sur Starship que se concentrent les efforts. Malgré plusieurs essais infructueux, Musk garde le cap. Il promet une avancée majeure dans les prochains mois : la récupération du second étage du vaisseau, après le succès partiel du booster Super Heavy. Une capacité cruciale pour rendre Starship entièrement réutilisable. Objectif final ? Réutiliser une fusée en seulement une heure, avec un retour d'orbite en 5 à 6 minutes, un ravitaillement express de 30 minutes… et un nouveau décollage dans la foulée.Autre défi technique : le transfert d'ergols en orbite, prévu pour 2026, indispensable pour viser la Lune ou Mars. Car la mission lunaire Artemis, dans laquelle Starship joue le rôle d'alunisseur, est toujours fixée à 2027… pour l'instant. Mais Elon Musk regarde plus loin. Mars reste son obsession. Il prévoit d'y envoyer des milliers de Starship, chargés de matériel, d'infrastructures, et bientôt… d'humains. Le grand plan ? Une production de masse, avec 1 000 Starship par an, et un premier envoi de cinq fusées dès 2026, contenant des robots humanoïdes Optimus, développés par Tesla. Un rêve fou ? Peut-être. Mais si l'on se fie à l'obsession et aux moyens déployés, l'ère spatiale façon Musk est bel et bien lancée. Hébergé par Acast. Visitez acast.com/privacy pour plus d'informations.

For America, is big or open best for AI models?

Play Episode Listen Later May 30, 2025 39:02


Since the launch of Project Stargate by OpenAI and the debut of DeepSeek's V3 model, there has been a raging debate in global AI circles: what's the balance between openness and scale when it comes to the competition for the frontiers of AI performance? More compute has traditionally led to better models, but V3 showed that it was possible to rapidly improve a model with less compute. At risk in the debate is nothing less than American dominance in the AI race.Jared Dunnmon is highly concerned about the trajectory. He recently wrote “The Real Threat of Chinese AI” for Foreign Affairs, and across multiple years at the Defense Department's DIU office, he has focused on ensuring long-term American supremacy in the critical technologies underpinning AI. That's led to a complex thicket of policy challenges, from how open is “open-source” and “open-weights” to the energy needs of data centers as well as the censorship latent in every Chinese AI model.Joining host Danny Crichton and Riskgaming director of programming Laurence Pevsner, the trio talk about the scale of Stargate versus the efficiency of V3, the security models of open versus closed models and which to trust, how the world can better benchmark the performance of different models, and finally, what the U.S. must do to continue to compete in AI in the years ahead.

Ethereum Daily - Crypto News Briefing
Succinct Introduces SP1 Hypercube

Ethereum Daily - Crypto News Briefing

Play Episode Listen Later May 21, 2025 3:34


Succinct introduces SP1 Hypercube for real-time Ethereum proving. Lido releases its V3 whitepaper. And Untron V2 goes live on the Superchain. Read more: https://ethdaily.io/706 Disclaimer: Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.  

Gospel Life Bible Church
May 18th, 2025 - 2 Timothy 4:1-8 - The Truth Changes Lives (feat. Chris Riggs)

Gospel Life Bible Church

Play Episode Listen Later May 18, 2025 51:28


May 18th, 2025 - 2 Timothy 4:1-8 - The Truth Changes Lives (feat. Chris Riggs)1) Teach the truth (V1&2)2) We have a tendency to walk away from the truth (V3&4)3) We will be rewarded for enduring in the truth (5-8)

Choses à Savoir HISTOIRE
Pourquoi la forteresse de Mimoyecques a-t-elle menacé Londres ?

Choses à Savoir HISTOIRE

Play Episode Listen Later May 11, 2025 2:22


La forteresse de Mimoyecques, située dans le Pas-de-Calais, fut construite par l'Allemagne nazie durant la Seconde Guerre mondiale dans le but de mener une attaque massive contre Londres. Ce site souterrain, dissimulé dans une colline près de la Manche, devait abriter une arme aussi redoutable que révolutionnaire : le canon V3. Contrairement aux V1 (missiles volants) et V2 (premiers missiles balistiques), le V3 était un supercanon conçu pour frapper la capitale britannique à très longue distance, sans possibilité de riposte.L'objectif stratégique de la forteresse était clair : infliger à Londres des bombardements constants, à raison de plusieurs centaines d'obus par jour, dans l'espoir de briser le moral de la population et de forcer le Royaume-Uni à capituler. Pour cela, les ingénieurs allemands développèrent un système complexe de canons à chambres multiples. Le principe consistait à utiliser une série de charges explosives réparties le long du tube du canon, qui s'enclenchaient en séquence pour accélérer progressivement un projectile de 140 kg. La portée estimée atteignait 165 kilomètres — suffisante pour toucher le cœur de Londres depuis Mimoyecques.Le site fut choisi pour sa proximité avec la côte anglaise et pour ses caractéristiques géologiques favorables : le sous-sol crayeux permettait le creusement de galeries profondes, à l'abri des bombardements. Plusieurs galeries inclinées furent creusées pour accueillir les tubes du V3, avec un réseau logistique impressionnant de bunkers, de casemates et de voies ferrées souterraines.Mais le projet prit du retard en raison de difficultés techniques. Les premiers tests révélèrent des problèmes de stabilité et de précision. Surtout, les Alliés furent rapidement alertés du danger que représentait Mimoyecques grâce à des photos aériennes et des informations fournies par la Résistance française. La Royal Air Force lança plusieurs bombardements en 1944, dont l'un particulièrement efficace le 6 juillet, utilisant les bombes "Tallboy", capables de pénétrer profondément dans le sol. Une frappe frappa directement un puits de lancement et tua de nombreux ouvriers allemands, compromettant gravement le projet.L'invasion de la Normandie, en juin 1944, scella définitivement le sort de Mimoyecques. Avant même d'être opérationnel, le site fut abandonné. Le V3 ne tirera jamais sur Londres.En résumé, la forteresse de Mimoyecques a menacé Londres car elle représentait une base de lancement pour une arme conçue spécifiquement pour bombarder la ville de manière continue. Elle incarne une des tentatives les plus ambitieuses de la guerre psychologique et technologique menée par le régime nazi. Hébergé par Acast. Visitez acast.com/privacy pour plus d'informations.

Tabletop Tommies
Ep.81 Welsh Nationals & V3 Meta | Bolt Action Podcast

Tabletop Tommies

Play Episode Listen Later May 11, 2025 83:49 Transcription Available


In this special two-year anniversary episode of Tabletop Tommies, Jonny and Phil return to their roots by revisiting the Welsh Nationals once more. Join them as they delve into the current state of the meta, particularly the dominance of armored warfare in V3 of the game. With five intense rounds behind them, they share insights from their games and what this means for future competitive play. The duo reflects on the effectiveness of different strategies, highlighting the shift towards tank-centric tactics and armored transports. Are they truly the key to victory, or is there room for other play styles? Jonny and Phil discuss their personal experiences, including compelling battles and tactical decisions, offering listeners a detailed analysis of the competitive scene. Tune in for a comprehensive breakdown of nations represented, player strategies, and what the results from Welsh Nationals suggest about the evolving landscape of the game. Whether you're a seasoned player or new to the competitive scene, this episode is packed with valuable insights and light-hearted banter.   Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/

Pharmacy Friends
A look at next-gen oncology

Pharmacy Friends

Play Episode Listen Later Apr 30, 2025 50:54


In this episode, you'll hear about the latest developments in tailoring cancer treatments to individual patients using Precision Oncology.  Two thought leaders, Simone Ndujiuba, a Clinical Oncology Pharmacist at Prime Therapeutics, and Karan Cushman, Head of Brand Experience and host of The Precision Medicine Podcast for Trapelo Health, discuss real-world research that is paving the way for Prime and our partners to help providers reduce turnaround times so patients can start treatment as soon as possible.  Join your host Maryam Tabatabai as they dig into this evolving topic of precision oncology. www.primetherapeuitics.com ⁠Chapters⁠Defining precision medicine (08:50)Evaluating real-world operational process of biomarker testing (14:36)Turnaround times are crucial (17:40)A patients view into the importance of time (24:39)Technology and process aid in time and process (29:30)Helping bridge knowledge gaps for providers and payers (33:55) The focus is on Precision Oncology right now (37:00)Precision medicine in other disease categories (40:09)Future of precision oncology is bright (42:07) References Singh, B.P., et al. (2019). Molecular profiling (MP) for malignancies: Knowledge gaps and variable practice patterns among United States oncologists (Onc). American Society of Clinical Oncology. https://meetings. asco.org/abstracts-presentations/173392 Evangelist, M.C., et al. (2023). Contemporary biomarker testing rates in both early and advanced NSCLC: Results from the MYLUNG pragmatic study. Journal of Clinical Oncology, 41(Supplement 16). https://doi.org/10.1200/JCO.2023.41.16_suppl.9109. Ossowski, S., et al. (2022). Improving time to molecular testing results in patients with newly diagnosed, metastatic non-small cell lung cancer. Journal of Clinical Oncology, 18(11). https://doi.org/10.1200/OP.22.00260 Naithani N, Atal AT, Tilak TVSVGK, et al. Precision medicine: Uses and challenges. Med J Armed Forces India. 2021 Jul;77(3):258-265. doi: 10.1016/j.mjafi.2021.06.020.  Jørgensen JT. Twenty Years with Personalized Medicine: Past, Present, and Future of Individualized Pharmacotherapy. Oncologist. 2019 Jul;24(7):e432-e440. doi: 10.1634/theoncologist.2019-0054.  MedlinePlus. What is genetic testing? Retrieved on April 21, 2025 from https://medlineplus.gov/genetics/understanding/testing/genetictesting/. MedlinePlus. What is pharmacogenetic testing? Retrieved on April 21, 2025 from https://medlineplus.gov/lab-tests/pharmacogenetic-tests/#:~:text=Pharmacogenetics%20(also%20called%20pharmacogenomics)%20is,your%20height%20and%20eye%20color.  Riely GJ, Wood DE, Aisner DL, et al. National Cancer Comprehensive Network (NCCN) clinical practice guidelines: non-small cell lung cancer, V3.2005. Retrieved April 21, 2025 from https://www.nccn.org/professionals/physician_gls/pdf/nscl.pdf.  Benson AB, Venook AP, Adam M, et al. National Cancer Comprehensive Network (NCCN) clinical practice guidelines: colon cancer, V3.2025. Retrieved April 21, 2025 from https://www.nccn.org/professionals/physician_gls/pdf/colon.pdf. Rosenberg PS, Miranda-Filho A. Cancer Incidence Trends in Successive Social Generations in the US. JAMA Netw Open. 2024 Jun 3;7(6):e2415731. doi: 10.1001/jamanetworkopen.2024.15731. PMID: 38857048; PMCID: PMC11165384. Smeltzer MP, Wynes MW, Lantuejoul S, et al. The International Association for the Study of Lung Cancer Global Survey on Molecular Testing in Lung Cancer. J Thorac Oncol. 2020 Sep;15(9):1434-1448. doi: 10.1016/j.jtho.2020.05.002.The views and opinions expressed by the guest featured on this podcast are their own and do not necessarily reflect the official policy or position of Prime Therapeutics LLC, its hosts, or its affiliates. The guest's appearance on this podcast does not imply an endorsement of their views, products, or services by Prime Therapeutics LLC. All content provided is for informational purposes only and should not be construed as professional advice.

The Maker’s Quest
Shop Improvements

The Maker’s Quest

Play Episode Listen Later Apr 20, 2025 86:34


We thought it would be an excellent opportunity to look back at 2024—our favorite shop upgrades, biggest projects, and lessons learned—and then peek ahead at what's in store for 2025. Listen Waiting for upload, please check back in a few minutes Watch on YouTube Waiting for upload, please check back in a few minutes Hosted by Brian Benham Portfolio: https://www.benhamdesignconcepts.com/ Brian Benham on BlueSky: https://bsky.app/profile/benhamdesignconcepts.com YouTube: https://www.youtube.com/channel/UCXO8f1IIliMKKlu5PgSpodQ Greg Porter https://skyscraperguitars.com/ Greg On Instagram: https://www.instagram.com/gregsgaragekc/ YouTube: https://www.youtube.com/c/SkyscraperGuitars  YouTube: https://www.youtube.com/c/GregsGarage   Show Notes  Reflecting on 2024 and Looking Ahead to 2025: Shop Upgrades, Projects, and Goals Shop Upgrades That Made a Difference in 2024 Organization & Tool Storage One of the biggest game-changers for both of us was improving shop organization. A mechanic once said, "Don't put it down—put it away." That mindset has helped keep tools in their proper places, eliminating the frustration of searching for misplaced items. - Brian's Upgrade: A high-quality toolbox (not just a basic Harbor Freight or Home Depot option) made a massive difference. A well-organized toolbox reflects a well-organized workflow. - Greg's Upgrade: Adding Husky cabinets under his table saw extension improved storage and accessibility. The Incra Miter Gauge Brian recommended the Incra Miter Gauge, and it quickly became one of Greg's most-used tools in 2024. - Why It's Great: - Eliminates play in the miter slot for precise, repeatable cuts. - Features an integrated stop block system (similar to high-end aftermarket options). - Fine-adjustment capabilities make it perfect for exact angles. Greg admits he was skeptical at first, preferring crosscut sleds, but after a year of use, he hasn't touched his sled since. The Black Box Vacuum Pump for CNC Workholding Greg's Black Box vacuum pump transformed his CNC workflow. - The Problem: Workholding on a CNC can be a nightmare—tabs, screws, and clamps often lead to failed cuts. - The Solution: The vacuum pump holds sheets firmly in place, reducing material waste and improving efficiency. - Success rate went from ~75% to 98%. - Added automation: The CNC now turns the pump on/off automatically via relay control. The Track Saw Revolution Greg was a longtime skeptic of track saws, preferring a circular saw and straightedge. But after breaking down hundreds of sheets of MDF, he caved and bought a Ridgid cordless track saw. - Why It Won Him Over: - Faster, more accurate breakdown of sheet goods. - Paired with an MFT-style workbench (from Fred Sexton of Bristol Artisan Co.) and Bora Speed Horses, creating a portable, efficient cutting station. - No more wrestling full sheets—everything gets broken down outside before entering the shop. The Festool Debate Brian and Greg had a fun back-and-forth about Festool. - Pros: - Industry-leading dust collection (great for job sites and clean shops). - The Domino joiner is a game-changer for furniture makers. - Cons: - High price tag. - Some tools may not justify the cost for hobbyists or those who don't need ultra-portability. Packout Systems & Tool Storage Both Brian and Greg explored different modular storage systems (Milwaukee Packout, Klein, etc.). - Greg's Pick: Klein Tool Cases—expensive but rugged, with clear lids and customizable bins. - Brian's Experience: Packout systems are great for contractors but may be overkill for shop-only use. Harbor Freight's Improvement Greg noted that Harbor Freight's quality has significantly improved over the years. - Icon Tools Line: Their ratcheting wrenches and socket sets now rival mid-tier brands like Husky and Craftsman. - Toolboxes: No longer the flimsy junk of the past—now a solid budget option.    Notable Projects from 2024 Brian's Big Builds - Las Vegas Casino Project: A massive, high-profile installation that pushed his team's limits. - Red Rocks Amphitheater Work: A challenging but rewarding project (technically late 2023, but close enough!). Lesson Learned: Installation is just as critical as fabrication. Even the best-built pieces can fail if not installed correctly. Greg's Product Expansion When a competitor in the guitar-making jigs and tools space went out of business, Greg saw an opportunity. - Redesigned & Released Over 20 New Products, including: - Side benders (for shaping guitar sides). - Outside molds & cutaway forms (previously unavailable). - Mortise & tenon jigs (V3 design, improved from older versions). - Backward Compatibility: Ensured his new tools worked with older systems, earning gratitude from customers.   Looking Ahead to 2025 Greg's Goals: Build His First Commissioned Guitar – Learning from luthier Robbie O'Brien to refine construction techniques. Expand Skyscraper Guitars – Transition from a one-man operation to a scalable business with employees. Finish the Porsche 356 Project – After a busy 2024, he's eager to get back to this passion build.   Brian's Plans: - Grow His YouTube Presence – Shifting focus to more educational content for aspiring woodworkers. - Streamline Production – Finding ways to balance custom work with repeatable, profitable projects.  Final Thoughts 2024 was a year of tool upgrades, shop efficiency, and big projects. For 2025, the focus shifts to growth, refinement, and new challenges.   What were your biggest shop upgrades or projects in 2024? What are you looking forward to in 2025? Let us know in the comments!    

Machine Learning Street Talk
Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Machine Learning Street Talk

Play Episode Listen Later Apr 2, 2025 96:28


Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility. SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***Eiso Kant:https://x.com/eisokanthttps://poolside.ai/TRANSCRIPT:https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0TOC:1. Foundation Models and AI Strategy [00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development [00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision [00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs2. Reinforcement Learning and Model Economics [00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches [00:22:06] 2.2 Model Economics and Experimental Optimization3. Enterprise AI Implementation [00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure [00:26:00] 3.2 Enterprise-First Business Model and Market Focus [00:27:05] 3.3 Foundation Models and AGI Development Approach [00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements4. LLM Architecture and Performance [00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization [00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs [00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons [00:43:26] 4.4 Balancing Creativity and Determinism in AI Models [00:50:01] 4.5 AI-Assisted Software Development Evolution5. AI Systems Engineering and Scalability [00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges [00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends [01:01:25] 5.3 Distributed Systems and Engineering Complexity [01:01:50] 5.4 GenAI Architecture and Scalability Patterns [01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation6. AI Safety and Future Capabilities [01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches [01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems [01:16:27] 6.3 AI vs Human Capabilities in Software Development [01:33:45] 6.4 Enterprise Deployment and Security ArchitectureCORE REFS (see shownotes for URLs/more refs):[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (Key paper on LLM scaling)[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)

Solana Weekly
Solana Weekly #108 - Parcl Transforms Real Estate Exposure With Gus

Solana Weekly

Play Episode Listen Later Mar 26, 2025 62:08


In this episode of Solana Weekly, host Thomas sits down with Gus Marquez from Parcl to explore how they're revolutionizing real estate investment on the Solana blockchain.Episode Highlights:Discover how Parcl creates synthetic exposure to real estate returns through data-driven indexes without tokenizing physical propertiesLearn about Parcl Labs, which indexes every home in the U.S. and provides institutional-grade dataExplore the inefficiencies in traditional real estate markets and how Parcl addresses themUnderstand the advantages: 18 basis points transaction costs vs. 2-5% for physical real estate and leverage up to 50xHear why Solana was the perfect blockchain for Parcl's visionAbout the Guest:Gus Marquez is part of the team at Parcl, working to make real estate investment more accessible and efficient by bringing it on-chain. Parcl allows users to long or short specific real estate markets with none of the maintenance headaches of physical ownership.Key Moments:The founders conceived Parcl during COVID while observing migration trends, inspired by the lack of tools to short real estate markets. After several iterations, the current V3 platform offers sophisticated risk management and daily price updates based on extensive data aggregation.Whether you're saving for a home while tracking market returns, hedging property value for retirement, or seeking investment diversification without property management headaches, Parcl offers a compelling solution for both retail and institutional investors.Visit parcl.co to learn more, and look for parcllabs.com launching soon with institutional-quality real estate reports.This episode is for informational purposes only and does not constitute financial or investment advice.More at solanaweekly.fun/episodes Get full access to The Dramas of Thomas Bahamas at thomasbahamas.substack.com/subscribe

Tabletop Tommies
Ep.76 Armies of the Netherlands V3 | Bolt Action Podcast

Tabletop Tommies

Play Episode Listen Later Mar 23, 2025 35:51 Transcription Available


Welcome to another exciting episode of Tabletop Tommies, where Jonny and Phil delve into the final installment (for now) of the Armies Of... series, focusing on the Netherlands. In this episode, our hosts explore the unique and quirky characteristics of the Dutch army in V3, comparing them to previous versions while discussing their potential in tabletop warfare. With expectations high, Jonny and Phil break down what makes the Netherlands stand out, from their artillery strategies to their special rules, revealing how these elements combine to create a more flavourful force. The discussion also covers the challenges and advantages of using the Dutch army, providing listeners with tactical insights that could redefine their gaming experience. Join the Tabletop Tommies as they uncover whether this minor nation can indeed hold its own on the battlefield or even punch above its weight.   Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/

LAB: The Podcast
David Zvonař

LAB: The Podcast

Play Episode Listen Later Mar 14, 2025 50:07


Artist David Zvonař joins LAB the Podcast to share a glimpse into his story and for a conversation on photography, beauty and his time in Brno shooting V3's Sehnsucht Symphony recording.Coming soon: Sehnsucht Film Documentary and Sehnsucht Photobook! Visit: DavidZvonar.comVisit: https://vuvivo.com/Support / Sponsor: https://vuvivo.com/supportSupport the show

Positively Living
What to Do When Everything In Life Is Urgent [Re-release]

Positively Living

Play Episode Listen Later Mar 10, 2025 15:51


Text your thoughts and questions!Do you ever look at your to-do list and feel overwhelmed by the never-ending list of things that require your attention? Do you struggle to visualize which items take priority so you just end up doing nothing? You're not alone. This is one of the most common reasons clients come to work with me. This week, episode 252 of the Positively LivingⓇ Podcast is about what to do when everything in life is urgent!In this episode of the Positively LivingⓇ Podcast, I share why prioritization is crucial for maintaining balance and achieving meaningful progress and give you actionable steps to take right now to transform your approach to getting things done.I cover the following topics:Psychological barriers that keep people in a loop of reactivity instead of strategic action.Common mistakes people make when trying to manage their tasks.Proactive prioritization techniques to consider, including one of my favorites. How to own your choices, no matter the outcome. It's time to take intentional, purposeful action. Start by decluttering your to-do list by strategically evaluating your tasks. Remember, when you don't make a choice, the choice is made for you. Prioritize intentionally and reclaim control of your time and energy.Thank you for listening! If you enjoyed this episode, take a screenshot of the episode to post in your stories and tag me!  And don't forget to follow, rate, and review the podcast and tell me your key takeaways!Learn more about Positively LivingⓇ and Lisa at https://positivelyproductive.com/podcast/Could you use some support? The Quickstart Coaching session is a way to get to know your productivity path, fast! A speed-round strategy session is perfect for a quick win and to see what coaching can do, the Quickstart will encourage and inspire you to take intentional, effective action! Go to https://www.positivelyproductive.com/plpquick for a special listener discount!CONNECT WITH LISA ZAWROTNY:FacebookInstagramResourcesWork with Lisa! LINKS MENTIONED IN THIS EPISODE:(Find links to books/gear on the Positively Productive Resources Page.)Ep 53: How To Tell If I'm Codependent with Mallory JacksonEp 116: The Most Important Boundary for People PleasersEp 232: How to Prioritize Personal Time by Setting BoundariesEp 235: When You Must Say No for a Less Stressful LifeDance Song Playlist V1, V2, V3

La Ventana
La Ventana a las 16h | Entrevista a Jaume Plensa

La Ventana

Play Episode Listen Later Mar 6, 2025 34:47


Madrid acoge hasta el próximo domingo la 44ª edición de Arco, la feria internacional de arte contemporáneo, un evento clave para el sector que reúne a artistas de renombre de todo el mundo. Uno de los protagonistas de esta edición es el escultor catalán Jaume Plensa, quien presenta en el stand del diario 'El País' su obra 'Entre sueños V3.0'. Este conjunto de esculturas, compuesto por ocho cabezas de alabastro con los ojos cerrados, invitan a la reflexión sobre la inmigración y sus implicaciones en la sociedad contemporánea. Hablamos en 'La Ventana' con el artista. 

La Ventana
La Ventana a las 16h | Jaume Plensa y el cumpleaños de James Rhodes

La Ventana

Play Episode Listen Later Mar 6, 2025 47:43


Edición de La Ventana a las 16h del jueves 6 de marzo.Madrid acoge hasta el próximo domingo la 44ª edición de Arco, la feria internacional de arte contemporáneo, un evento clave para el sector que reúne a artistas de renombre de todo el mundo. Uno de los protagonistas de esta edición es el escultor catalán Jaume Plensa, quien presenta en el stand del diario 'El País' su obra 'Entre sueños V3.0'. Este conjunto de esculturas, compuesto por ocho cabezas de alabastro con los ojos cerrados, invitan a la reflexión sobre la inmigración y sus implicaciones en la sociedad contemporánea. Hablamos en 'La Ventana' con el artista. Felicitamos al pianista James Rhodes y le invitamos a responder el clásico test de preguntas. 

Discover Daily by Perplexity
Apple 'Air' Product Teased, DeepSeek's Theoretical 545% Margin, and Massive Gold Hydrogen Reserves Located

Discover Daily by Perplexity

Play Episode Listen Later Mar 5, 2025 10:31 Transcription Available


We're experimenting and would love to hear from you!In this episode of 'Discover Daily', we begin with a tease from Apple CEO Tim Cook. His message on X that  "there's something in the air" has sparked speculation about new MacBook Air models featuring the M4 chip. These potential upgrades include a 25% boost in multi-core CPU performance, enhanced AI capabilities, and improved features like a 12MP Center Stage camera and Wi-Fi 6E support. Apple's shift to a more subtle announcement strategy marks a departure from their traditional product launch approach.We also delve into the world of AI economics with Chinese startup DeepSeek's claim of a theoretical 545% cost-profit margin for its AI models. While this figure is based on calculations involving their V3 and R1 inference systems, real-world factors significantly reduce actual revenue. DeepSeek's aggressive pricing strategy and low development costs have sparked debate within the tech community and impacted AI-related stocks.The episode's main focus is the discovery of vast "gold hydrogen" reserves beneath 30 U.S. states, as revealed by a groundbreaking USGS map. This natural hydrogen, formed through a process called serpentinization in geological formations known as rift-inversion orogens, could revolutionize clean energy production. The abundance and widespread distribution of these reserves may accelerate the transition to sustainable energy sources, potentially reshaping the global energy landscape and creating new economic opportunities in regions with significant deposits.From Perplexity's Discover Feed:https://www.perplexity.ai/page/apple-air-product-teased-QhTieZlcTwWodiMLzGzP3ghttps://www.perplexity.ai/page/deepseek-s-theoretical-545-mar-_vk4xxCjSt.tLxQJCoU2sghttps://www.perplexity.ai/page/massive-gold-hydrogen-reserves-kRgxDixrTJCI1W17S2zcbw**Introducing Perplexity Deep Research:**https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram Threads X (Twitter) YouTube Linkedin

The Tech Addicts Podcast
The Powershot of Love

The Tech Addicts Podcast

Play Episode Listen Later Mar 2, 2025 70:30


Gareth and Ted battle hangovers to chat about the Canon Powershot V1 and V3, alongside an aluminium vinyl player, Anbernic's new ROM delivery app, and upcoming tablet from Oppo and Pixel Penis. With Gareth Myles and Ted Salmon Join us on Mewe RSS Link: https://techaddicts.libsyn.com/rss iTunes | YouTube Music | Stitcher | Tunein | Spotify  Amazon | Pocket Casts | Castbox | PodHubUK Feedback, Fallout and Contributions JTManuel Wow! I never thought I'd get a mention. Thank you guys. I've been listening to the both of you since forever (Ted from PSC, Gareth since Mobile Tech Addicts) and have yet to be disappointed. I too am like Ted. I'm currently an RGN working in a private care home here in Barrow-in-Furness and have been a tech enthusiast since I got my Atari 800XL. I then moved on to the NES (not SNES) then all iterations of Gameboy. And you guys are all so relatable. I am always looking forward to both PSC and Tech Addicts for my daily walk to work. Keep up the great work and cheers from the North West! @CheapRetroGaming Thanks so much for sharing this interview, I've only watched a few videos from Slopes Game Room but I've enjoyed what I've seen thus far. For the podcast/interview here, I liked the different stories of what Daniel had gone through, such as almost getting scammed by the other channel, but thankfully avoiding that. I had also never heard of his epic Amico video that he had produced either. I hope to check it out later! Years ago I was really interested in that system because I liked the idea of the unique controller and the family friendly games, but of course, I have no interest in getting it now. It's sad how all that has panned out. Thanks again for your interview! News A hard mistake to make: Pixel Emergency SOS accidentally shares someone's nudes Canon PowerShot V1 (£785) looks like a Sony ZV-1 II (£799) beating compact vlogging camera Anbernic Update - Netflix of Retro games - Alt link This wild turntable plays vinyl without a tonearm and is a solid lump of aluminium - Also an AC/DC Pro-Ject Turntable Oppo Pad 4 Pro to debut with Snapdragon 8 Elite in April Banters: Knocking out a Quick Bant YouTube Premium Lite plan YouTube's Ghost Town Bargain Basement: Best UK deals and tech on sale we have spotted Ali Foote on UseNiy Rechargeable Lithium Batteries AA 8-Pack with Charging-Storage Box £14.99 from £26.99 Lexar NQ100 2.5” SATA III (6Gb/s) 240GB SSD - £13.99 Lenovo Tab Plus £189 from £289 UGREEN USB-C Charger 65W Fast Charger Plug - £39.09 SanDisk Ultra 1.5TB microSD Card £114.50 from £148.99 Crucial T500 2TB SSD PCIe Gen4 NVMe M.2 Internal Gaming SSD - £99.95 1More HQ31 ANC Headphones with 90 hour battery, £59.99 from £79.99/£69.99 Main Show URL: http://www.techaddicts.uk | PodHubUK Contact:: gareth@techaddicts.uk | @techaddictsuk Gareth - @garethmyles | Mastodon | Blusky | garethmyles.com | Gareth's Ko-Fi Ted - tedsalmon.com | Ted's PayPal | Mastodon | Ted's AmazonYouTube: Tech Addicts

LAB: The Podcast
Timothy Paul Schmalz

LAB: The Podcast

Play Episode Listen Later Feb 28, 2025 64:32


Renowned Sculptor Timothy Schmalz joins LAB the Podcast for a conversation on beauty, faith and the powerful role of public art. The Portico, in downtown Tampa, is home to Timothy's moving “Homeless Jesus.” Join us for the conversation and if you are in Tampa, find your way to the Portico to encounter Timothy's work. Timothy Paul Schmalz Learn more about VU VI VO: https://vuvivo.com/Support the work of V3: https://vuvivo.com/supportSupport the show

Out of Spec Podcast
Inside Tesla's V4 Supercharger Cabinet! Size, Specs, And What's New!

Out of Spec Podcast

Play Episode Listen Later Feb 28, 2025 8:51


Tesla is rolling out true V4 Supercharger cabinets, bringing 1,000V and up to 500kW charging, a massive leap over V3. These upgraded cabinets will enabling faster charging for high-voltage EVs like the Lucid Gravity, those Hyundai/Kia E-GMP cars, the Porsche Taycan, and even the Tesla Cybertruck. Let's talk about it.Shoutout to our sponsors for more information find their links below:- Fort Collins Kia: Visit focokia.com for full details. Disclaimer: *Delivery covers up to $1,000.Find us on all of these places:YouTube: https://www.youtube.com/outofspecpodcastApple Podcasts: https://podcasts.apple.com/us/podcast/out-of-spec-podcast/id1576636119Spotify: https://open.spotify.com/show/0tKIQfKL9oaHc1DLOTWvbdAmazon: https://music.amazon.com/podcasts/473692b9-05b9-41f9-9b38-9f86fbdabee7/OUT-OF-SPEC-PODCASTFor further inquiries please email podcast@outofspecstudios.com#tesla #supercharging #teslav4 Hosted on Acast. See acast.com/privacy for more information.

Tabletop Tommies
Ep.72 Armies of Partisans V3 | Bolt Action Podcast

Tabletop Tommies

Play Episode Listen Later Feb 23, 2025 37:30 Transcription Available


In this episode of Tabletop Tommies, Jonny and Phil delve into the fascinating world of partisan armies in Bolt Action. As they navigate through the unique rules and strategies that define these guerrilla forces, listeners will gain insights into the tactical evolution from V2 to V3. The conversation highlights the intriguing special rules of the partisans, such as infiltration and the dearly missed hidden bomb rule, while also discussing new additions like the home country rule. Through their analysis, Jonny and Phil offer potential strategies for adapting to changes in V3, especially when facing formidable opponents like the Finns. Join us for an engaging discussion on how to optimize your partisan army, learn about the historical context, and explore some creative army building ideas. From utilizing captured vehicles to expanding your force with cavalry, this episode provides essential tips for both new and seasoned Bolt Action players.   Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/

Focus economia
Draghi: l'Ue rischia isolamento, agire come fossimo un unico Stato

Focus economia

Play Episode Listen Later Feb 18, 2025


Il rapporto Draghi «è stato pubblicato in settembre» oggi «cinque mesi dopo» emerge che «ciò che è nel rapporto è ancora più urgente di quanto fosse cinque mesi fa». «C è una situazione molto difficile. Ora abbiamo i nostri valori. Abbiamo differenze di opinioni. Ma non è il momento di sottolineare queste le differenze ora, è il momento di sottolineare il fatto che dobbiamo lavorare insieme, sottolineare ciò che ci accomuna e ciò che credo ci accomuna sono i valori fondanti dell Unione Europea. E dobbiamo sperare e dobbiamo lavorare per questo» Lo ha detto Mario Draghi, ex presidente della Bce e autore del rapporto sulla Competitività Ue, intervenendo a una seduta del Parlamento europeo a Bruxelles. L'Unione europea deve attrezzarsi a far fronte a novità nei cambiamenti economici e politici globali. Ed «è sempre più chiaro che dobbiamo agire sempre di più come se fossimo un unico stato. La complessità della risposta politica che coinvolge ricerca, industria, commercio e finanza richiederà un livello di coordinamento senza precedenti tra tutti gli attori: governi e parlamenti nazionali, Commissione e Parlamento europeo» ha spiegato Draghi. Il commento è di Adriana Cerretelli, editorialista Sole 24 Ore Bruxelles.Raddoppio del traforo del Monte Bianco, "no" del ministro franceseParigi non vuole il raddoppio del traforo del Monte Bianco. Il ministro dei Trasporti transalpino Philippe Tabarot ha espresso di fatto un parere negativo in una lettera, datata 14 febbraio ma pubblicata ieri, in cui scrive che "La posizione della Francia , espressa regolarmente nel quadro della commissione intergovernativa del tunnel del Monte Bianco, non è cambiata". Nessuno sviluppo, dunque. Lo ribadisce il ministro dopo che il deputato dell'Alta Savoia all'Assemblea nazionale, Xavier Roseren, aveva chiesto di assumere una posizione definitiva sul tema. La decisione asseconda le volontà soprattutto della valle dell'Arve, da Chamonix-Mont-Blanc in giù, dove i tir sono un problema molto più sentito rispetto alla Valle d'Aosta e dove i livelli di traffico e inquinamento sono ritenuti da anni insostenibili. Interviene Francesco Turcato, presidente Confindustria Valle d'Aosta.Poste in Tim, filosofia industriale, faro su sinergie La prima mossa nel risiko delle tlc l'hanno fatta Poste e Cdp. I cda dei due gruppi nel week end hanno dato il via libera allo scambio azionario: Poste ha acquistato il 9,81% circa di Tim da Cassa Depositi e Prestiti e al contempo l'intera sua partecipazione in Nexi (pari al 3,78% circa) è passata a Cdp che così si rafforza nella 'pay tech' salendo al 18,25 per cento. Il corrispettivo per l'acquisto delle azioni di Tim sarà riconosciuto "in parte mediante i proventi derivanti dal trasferimento da Poste Italiane a Cassa Depositi e Prestiti della partecipazione in Nexi e in parte mediante cassa disponibile", appena sotto i 180 milioni di euro (valorizzando quindi Tim approssimativamente di 0,26/0,27 euro per azione). E' la prima tessera di un domino, alla quale Poste, che diventa il secondo azionista, guarda con un approccio industriale, che apre un ampio spazio di accordi commerciali e sinergie. Tra Tim e Poste, annuncia subito la società guidata da Matteo del Del Fante "è in fase avanzata la negoziazione per la fornitura di servizi per l'accesso di Postepay all'infrastruttura di rete mobile di Tim", l'ingresso nel capitale infatti "abilita l'evoluzione dei rapporti commerciali tra Tim e Poste Italiane" spiega il cda in una nota. Per la Cassa invece il focus è tutto puntato su Nexi, di cui è azionista dalla nascita: «Il Gruppo Cdp aumenta la propria quota in Nexi dall attuale 14,46% al 18,25% complessivo - spiega Cdp in una nota -, rafforzando così il sostegno alla strategia industriale di un azienda protagonista in Europa nell infrastruttura dei pagamenti digitali, che sin dalla sua nascita quattro anni fa ha avuto Cassa al suo fianco». Ne abbiamo parlato con Laura Serafini, Il Sole24Ore.Elon Musk presenta Grok-3 e rinviglorisce la rivalità con Sam AltmanElon Musk rilancia sull intelligenza artificiale e con la sua startup xAI ha presentato nelle scorse ore il modello Grok-3 aggiornato: una versione della tecnologia chatbot che secondo il miliardario è «la AI più intelligente della Terra». In una diretta streaming la società ha affermato che, in base a parametri matematici, scientifici e di codifica, Grok-3 «batte Google Gemini di Alphabet, il modello V3 di DeepSeek, Claude di Anthropic e GPT-4o di OpenAI». Grok-3 ha una potenza di calcolo «più che decuplicata» rispetto al suo predecessore e ha completato il pre-training all inizio di gennaio, ha detto Musk in una presentazione insieme a tre ingegneri di xAI. Approfondiamo con Enrico Pagliarini, Radio24.

李自然说
DeepSeek深度解读|走私显卡蒸馏OpenAI,天才少女550万美元打落美帝万亿市值?

李自然说

Play Episode Listen Later Feb 8, 2025 71:11


最近Deepseek火了,不仅让英伟达市值大跌,引发全球关注,还在多国应用榜上夺冠。但随之而来的是质疑和争议:有人说它是套壳,有人声称找到了代码证据。美国政府甚至想制裁它,限制芯片出口。各种说法满天飞,真假难辨。今天聊聊这家公司究竟有多厉害,它的技术原理是什么,美国对中国AI行业的限制会带来什么影响?-时间线-02:01 中国科技崛起的象征,美国围堵的挑战!05:55 Deepseek和其优秀的AI团队11:50 深度学习中的蒸馏技术17:43 Deepseek与OpenAI的关联性23:41 Deepseek的API问题29:38 大模型应用层崛起35:31 中国半导体行业的挑战与机遇41:28 开源战略:Deepseek在西方世界的讨论与影响力47:26 AI公司的谣言与事实53:21 V3模型的工程优化成就与成本节约01:05:15 深度学习与人工智能-互动方式-李自然个人微信:liziran5460

Waking Up With AI
DeepSeek Rising

Waking Up With AI

Play Episode Listen Later Feb 6, 2025 24:50


In this episode of “Waking Up With AI,” Katherine Forrest delves into the groundbreaking advancements of AI newcomer DeepSeek's R1 and V3 models. She explores how this Chinese tech company is challenging the status quo and making waves in the AI space. ## Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence

LAB: The Podcast
Wendy Kieffer

LAB: The Podcast

Play Episode Listen Later Jan 31, 2025 44:45


V3 Conservatory Poet Wendy Kieffer joins LAB the Podcast to share and discuss Christian Wiman's poem, “Prayer.” “Prayer” was the right poem as we continue our conversation highlighting the work to fight human trafficking and care for survivors through V3's LAB Initiative. Learn more about the work of V3.Support the show

Bikes & Big Ideas
Ministry Cycles on Testing, Production Challenges, & the Psalm 150 V3

Bikes & Big Ideas

Play Episode Listen Later Jan 23, 2025 61:13


When we last spoke with Chris Currie — the man behind Ministry Cycles and the striking Psalm 150 frame — he had just sent a prototype frame off for lab testing, hoping to move into production if all went to plan. Unfortunately, things didn't work out that way, but Chris made some design changes and is still working toward offering frames for sale.With the latest V3 frame off for testing, it was a good time to check back in with Chris to hear all about what's happened over the last two years to get here; what goes into lab testing & why it's important; what he'd do differently with the benefit of hindsight; and a whole lot more.RELATED LINKS:Ministry Cycles on Suspension Design, Machining Frames, & Launching a Bike Company (Ep.157)BLISTER+ Get Yourself CoveredJoin Us! Blister Summit 2025TOPICS & TIMES:The Psalm 150 (2:56)Lab testing the earlier prototypes (4:51)What goes into lab testing? (8:42)The limitations of computer modeling & importance of physical testing (11:49)Refinements of the V3 frame (18:42)The pros and cons of various construction methods (26:13)Bike industry struggles going into 2025 (35:34)20/20 hindsight & the path to the V3 frame (43:18)Welded front triangle versions (49:29)CHECK OUT OUR OTHER PODCASTS:Blister CinematicCRAFTEDGEAR:30Blister PodcastOff The Couch Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Sponsorships and applications for the AI Engineer Summit in NYC are live! (Speaker CFPs have closed) If you are building AI agents or leading teams of AI Engineers, this will be the single highest-signal conference of the year for you.Right after Christmas, the Chinese Whale Bros ended 2024 by dropping the last big model launch of the year: DeepSeek v3. Right now on LM Arena, DeepSeek v3 has a score of 1319, right under the full o1 model, Gemini 2, and 4o latest. This makes it the best open weights model in the world in January 2025.There has been a big recent trend in Chinese labs releasing very large open weights models, with TenCent releasing Hunyuan-Large in November and Hailuo releasing MiniMax-Text this week, both over 400B in size. However these extra-large language models are very difficult to serve.Baseten was the first of the Inference neocloud startups to get DeepSeek V3 online, because of their H200 clusters, their close collaboration with the DeepSeek team and early support of SGLang, a relatively new VLLM alternative that is also used at frontier labs like X.ai. Each H200 has 141 GB of VRAM with 4.8 TB per second of bandwidth, meaning that you can use 8 H200's in a node to inference DeepSeek v3 in FP8, taking into account KV Cache needs. We have been close to Baseten since Sarah Guo introduced Amir Haghighat to swyx, and they supported the very first Latent Space Demo Day in San Francisco, which was effectively the trial run for swyx and Alessio to work together! Since then, Philip Kiely also led a well attended workshop on TensorRT LLM at the 2024 World's Fair. We worked with him to get two of their best representatives, Amir and Lead Model Performance Engineer Yineng Zhang, to discuss DeepSeek, SGLang, and everything they have learned running Mission Critical Inference workloads at scale for some of the largest AI products in the world.The Three Pillars of Mission Critical InferenceWe initially planned to focus the conversation on SGLang, but Amir and Yineng were quick to correct us that the choice of inference framework is only the simplest, first choice of 3 things you need for production inference at scale:“I think it takes three things, and each of them individually is necessary but not sufficient: * Performance at the model level: how fast are you running this one model running on a single GPU, let's say. The framework that you use there can, can matter. The techniques that you use there can matter. The MLA technique, for example, that Yineng mentioned, or the CUDA kernels that are being used. But there's also techniques being used at a higher level, things like speculative decoding with draft models or with Medusa heads. And these are implemented in the different frameworks, or you can even implement it yourself, but they're not necessarily tied to a single framework. But using speculative decoding gets you massive upside when it comes to being able to handle high throughput. But that's not enough. Invariably, that one model running on a single GPU, let's say, is going to get too much traffic that it cannot handle.* Horizontal scaling at the cluster/region level: And at that point, you need to horizontally scale it. That's not an ML problem. That's not a PyTorch problem. That's an infrastructure problem. How quickly do you go from, a single replica of that model to 5, to 10, to 100. And so that's the second, that's the second pillar that is necessary for running these machine critical inference workloads.And what does it take to do that? It takes, some people are like, Oh, You just need Kubernetes and Kubernetes has an autoscaler and that just works. That doesn't work for, for these kinds of mission critical inference workloads. And you end up catching yourself wanting to bit by bit to rebuild those infrastructure pieces from scratch. This has been our experience. * And then going even a layer beyond that, Kubernetes runs in a single. cluster. It's a single cluster. It's a single region tied to a single region. And when it comes to inference workloads and needing GPUs more and more, you know, we're seeing this that you cannot meet the demand inside of a single region. A single cloud's a single region. In other words, a single model might want to horizontally scale up to 200 replicas, each of which is, let's say, 2H100s or 4H100s or even a full node, you run into limits of the capacity inside of that one region. And what we had to build to get around that was the ability to have a single model have replicas across different regions. So, you know, there are models on Baseten today that have 50 replicas in GCP East and, 80 replicas in AWS West and Oracle in London, etc.* Developer experience for Compound AI Systems: The final one is wrapping the power of the first two pillars in a very good developer experience to be able to afford certain workflows like the ones that I mentioned, around multi step, multi model inference workloads, because more and more we're seeing that the market is moving towards those that the needs are generally in these sort of more complex workflows. We think they said it very well.Show Notes* Amir Haghighat, Co-Founder, Baseten* Yineng Zhang, Lead Software Engineer, Model Performance, BasetenFull YouTube EpisodePlease like and subscribe!Timestamps* 00:00 Introduction and Latest AI Model Launch* 00:11 DeepSeek v3: Specifications and Achievements* 03:10 Latent Space Podcast: Special Guests Introduction* 04:12 DeepSeek v3: Technical Insights* 11:14 Quantization and Model Performance* 16:19 MOE Models: Trends and Challenges* 18:53 Baseten's Inference Service and Pricing* 31:13 Optimization for DeepSeek* 31:45 Three Pillars of Mission Critical Inference Workloads* 32:39 Scaling Beyond Single GPU* 33:09 Challenges with Kubernetes and Infrastructure* 33:40 Multi-Region Scaling Solutions* 35:34 SG Lang: A New Framework* 38:52 Key Techniques Behind SG Lang* 48:27 Speculative Decoding and Performance* 49:54 Future of Fine-Tuning and RLHF* 01:00:28 Baseten's V3 and Industry TrendsBaseten's previous TensorRT LLM workshop: Get full access to Latent Space at www.latent.space/subscribe

LAB: The Podcast
Christina Kruse

LAB: The Podcast

Play Episode Listen Later Jan 17, 2025 37:46


LAB Initiative Director, Christina Kruse joins LAB the Podcast to celebrate the generosity of Buddy Brew on the back side of 2024 Freedom Roast sales. We talk Human Trafficking Awareness month and V3's 2025 efforts to care for survivors and fight human trafficking.Freedom RoastLearn more about the work of V3Support the show

FYI - For Your Innovation
Our Economic Growth Predictions | The Brainstorm EP 73

FYI - For Your Innovation

Play Episode Listen Later Jan 8, 2025 37:37


Are we on the verge of an economic transformation? This week, Autonomous Technology and Robotics Director of Research Sam Korus and Associate Portfolio Manager Nick Grous are joined by ARK Chief Futurist Brett Winton to discuss ambitious projections for global GDP growth, driven by technological advancements and innovations such as Robotaxis and AI. They explore the historical context of economic growth, the potential for significant productivity increases, and the implications for different regions, particularly the U.S. and Europe. The conversation then shifts to SpaceX's advancements in satellite technology, highlighting the impressive capabilities of the new V3 satellites and their potential to revolutionize global connectivity.If you know ARK, then you probably know about our long-term research projections, like estimating where we will be 5-10 years from now! But just because we are long-term investors, doesn't mean we don't have strong views and opinions on breaking news. In fact, we discuss and debate this every day. So now we're sharing some of these internal discussions with you in our new video series, “The Brainstorm”, a co-production from ARK and Public.com. Tune in every week as we react to the latest in innovation. Here and there we'll be joined by special guests, but ultimately this is our chance to join the conversation and share ARK's quick takes on what's going on in tech today.Key Points From This Episode:Technological advancements are expected to drive significant economic transformation.Historical context shows that periods of growth are often followed by technological infusions.SpaceX's new V3 satellites will dramatically increase bandwidth and reduce costs.For more updates on Public.com:Website: https://public.com/YouTube: @publicinvestTwitter: https://twitter.com/public