Podcasts about Daxter

  • 246 podcasts
  • 322 episodes
  • 1h 22m average duration
  • 1 episode every other week
  • Latest: May 15, 2025
Daxter

POPULARITY

Chart: 2017–2024



Latest podcast episodes about Daxter

Galinha Viajante
GV#192: Top Dúzia - Os Melhores Collectathons 3D (part. Marko do Vai Logar Hoje)


May 15, 2025 · 93:59


Another Top Dozen on Galinha Viajante! Leon and Samuca bring in Marko from Vai Logar Hoje to pick the twelve best 3D collectathons of all time! From Mario 64 to Astro Bot, old classics and new releases face off in the Galinha ring, where twelve games enter and only one game leaves!

SUPPORT GALINHA VIAJANTE: catarse.me/galinhaviajante

LINKS: Catarse | Youtube | Instagram | Bluesky
Contact: cast@galinhaviajante.com.br
Website: galinhaviajante.com.br

GAMES MENTIONED: Super Mario 64, Lego Lord of the Rings, Psychonauts 2, Astro Bot, Super Mario Odyssey, Spyro 2: Ripto's Rage, Jak and Daxter: The Precursor Legacy, Rayman 2: The Great Escape, Donkey Kong 64, Banjo-Kazooie, Penny's Big Breakaway, Kirby and the Forgotten Land, Banjo-Tooie, A Hat in Time

SOUNDTRACK: Astro, Goal (Astro Bot); Fossil Falls (Super Mario Odyssey); Jig's Up Penny (Penny's Big Breakaway); Mountain Pass (Jak and Daxter); The Fairy Glade, Final Battle (Rayman 2); Glimmer (Spyro Reignited Trilogy); Let's Get Ready To Rumble (Space Jam); Midside Notes (Martin Landstrom); Battle On The Jazzy Bridge (Quasar)

00:00:00 - Episode opening
00:02:55 - Top Dozen: 3D collectathons
01:28:55 - Episode closing

The Galinha airs every week thanks to the Escudeiros da Galinha Viajante! Support our project on Catarse too and join the Escudaria!

Hosted and produced by Leon Cleveland and Samuel R. Auras.
Contact: cast@galinhaviajante.com.br

Support the show

Cannabis School
Joints to Jack & Daxter: A Sesh Full of Smoke, Games, and Good Vibes


Apr 17, 2025 · 36:20


Welcome back to The Sesh – where the joints are harsh, the nostalgia is real, and the tangents are deliciously random. In this episode, Brandon and Jesse spark up and dive into everything from smoking styles to gaming gripes to sitcom soul food. We talk about:

Into the Aether
Jak to Jak (feat. Xenoblade Chronicles X, Jak and Daxter)


Mar 26, 2025 · 97:58


This ain't your granddad's Jak. This Jak squiggles.

Discussed: The two types of podcasts, Assassin's Creed: Shadows, Xenoblade Chronicles X: Definitive Edition, how to keep people locked into Nintendo Online, Switch 2 GameCube speculation, Jak and Daxter: The Precursor Legacy, Daxter, Jak II, the Sly Cooper series, Jak and Daxter: The Lost Frontier for the PS2, Pokemon Lazarus demo, Valkyria Chronicles 4, Civilization VII, Nintendo Switch 1 direct

---

Find us everywhere: https://intothecast.online
Buy some NEW merch if you'd like: https://shop.intothecast.online
Join the Patreon: https://www.patreon.com/intothecast

---

Follow Stephen Hilger: https://stephenhilgerart.com/
Follow Brendon Bigley: https://bsky.app/profile/bb.wavelengths.online
Produced by AJ Fillari: https://bsky.app/profile/ajfillari.bsky.social

---

Season 7 cover art by Scout Wilkinson: https://scoutwilkinson.myportfolio.com/
Theme song by Will LaPorte: https://ghostdown.online/

---

Timecodes:
(00:00) - Intro
(01:11) - Assassin's Creed Shadows | One thing
(03:14) - Xenoblade Chronicles X Definitive Edition | Setting in stone the kind of podcast we are
(57:29) - Jak and Daxter | Big news
(01:12:53) - Jak 2 | The squandification of video games
(01:29:00) - Wrapping up

---

Thanks to all of our amazing patrons, including our Eternal Gratitude members: Zachary D, IanfaceMcGee, Matt H, Clayton M, Chris Y, w0nderbrad, Shawn L, Cody R, Zach R, Federico V, Logan H, Alan R, Slink, mattjanzz, Deacon, Grok, Corey Z, Directional Joy, Susan H, Olivia K, Dan S, Isaac S, Will C, Jim W, Evan B, David H, min2, Aaron G, V, Erik M, Brady H, Joshua J, Tony L, Danny K, Seth M, Adam B, Justin K, Andy H, Demo, Parker E, Maxwell L, Spiritofthunder, Jason W, Jason T, Corey T, Minnow Eats Whale, Caleb W, fingerbelly, Jesse W, Mike T, Codes, Wesley, Erik B, mebezac, Sergio L, ninjadeathdog, Rory B, A42PoundMoose, Andrew, Justin M, Peter, Stellar.Bees, Brendan K, Scott R, wreckx, Noah O, Michael G, Arcturus, Chris R, hepahe, Cory F, Chase A, LoveDies, Nick Q, Wes K, Chris M, RB, Michaela W, Adam F, Scott H, Alexander SP, Therese K, jgprinters, Jessica B, Murray, David P, Jason K, Bede R, Kamrin H, Kyle S, Philip

★ Support this podcast on Patreon ★

Learn more about your ad choices. Visit podcastchoices.com/adchoices

RaczejKonsolowo Gamecast
Ten o taktycznym bębenku


Mar 5, 2025 · 106:39


Behind the mics this episode are Tomek and Norbert, so you know what to expect: a trip back to the past! This time it's not the PS3 but handheld consoles that take the lead, thanks to the first Jak and Daxter and Patapon. In between, an "important" indie, Celeste. Have fun!

(00:00:00) - START
(00:00:54) - Warm-up
(00:12:03) - Jak and Daxter: The Precursor Legacy (PSV)
(00:39:38) - Celeste (PS4)
(01:09:24) - Patapon (PSP)
(01:39:34) - What else are we playing?

------------

Intro/outro music by Carter Harrell
Background track(s) by Noir Et Blanc Vie

Lithium-ion Rocks!
Power Metals: Fast-Tracking the Next High-Grade Cesium Mine w/ Haydn Daxter and Dr. Nigel Brand


Feb 17, 2025 · 24:18


In this episode, Haydn and Nigel discuss Power Metals' (TSXV: PWM | OTCQB: PWRMF) rare high-grade cesium discoveries, specifically the Case Lake project. They delve into the costs, timelines, and permitting processes compared to other critical minerals like lithium. The conversation includes updates on cesium carbonate market size (2,200 tons per year) and OPEX costs, exploration updates like cesium oxide intercepts greater than 20% in phase three drilling, and test work progress at SGS and Nagrom. The discussion also covers structural mapping exercises, upcoming exploration plans starting in March, and engagement with Canaccord for strategic guidance. They aim to fast-track mining operations and place Power Metals among the top four producers of high-grade cesium, with milestones such as the mineral resource estimate due by the end of Q1 and the PEA by the end of Q2.

CHAPTERS

GigaBoots Podcasts
Comatose Sony Shows Brain Activity | Big Think Dimension #310


Feb 14, 2025 · 256:02


Follow us on BlueSky! https://bsky.app/profile/gigaboots.com Game of the Year FINALE: https://youtu.be/MPt5p744dAc Podlord Song: https://youtu.be/jdkTdaNJsvs Industry Burning Down Song: https://youtu.be/6XJmalxng0Q Become a podlord or normal patron today! http://www.patreon.com/GBPodcasts RSS Feed: https://gbpods.podbean.com/ Kris' BlueSky: https://bsky.app/profile/kriswolfheart.bsky.social Dr. Aggro's BlueSky: https://bsky.app/profile/draggro.bsky.social Bob's BlueSky: https://bsky.app/profile/gigabob.bsky.social GB Main Patreon: http://www.patreon.com/gigaboots GB Fan Discord: https://discord.gg/XAGcxBk #StateofPlay #BigThinkDimension #Saros Tags: gigaboots,Valve bans ads,pirate warriors 4,Glover,Bobby Kotick was a friend of Jeffrey Epstein,Astro Bot Speedrun Levels,Death Stranding 2 on the beach sxsw,Overwatch 2 lootboxes,PS+,Twisted Metal Season 2,Peacock,MinnMax,Ready at Dawn,Daxter,Jak & Daxter,The Order 1886,Naughty Dog,League of Legends,WB Games Division,Monolith Productions,Shadow of War,Wonder Woman,Gotham Knights,Unity layoffs,Crytek Layoffs,Saros,Shinobi,Sonic Racing Crossworlds

Into the Aether
The Best of JD on UMD (feat. Daxter, Citizen Sleeper 2: Starward Vector, Ruffy and the Riverside, Dragon Quest VIII)


Feb 12, 2025 · 97:48


You like my UMD cellar? Thank you. This whole wall is just season 1 of Scrubs.

Discussed: How Meta Killed Ready At Dawn by MinnMax on YouTube, Daxter, The Wall by Pink Floyd, Citizen Sleeper 2: Starward Vector, Ruffy and the Riverside, Kingdom Hearts Birth by Sleep, Dragon Quest VIII

The Best Way to Play PSP Games in 2025: https://youtu.be/JZIKhcZjnGw

---

Find us everywhere: https://intothecast.online
Buy some NEW merch if you'd like: https://shop.intothecast.online
Join the Patreon: https://www.patreon.com/intothecast

---

Follow Stephen Hilger: https://stephenhilgerart.com/
Follow Brendon Bigley: https://bsky.app/profile/bb.wavelengths.online
Produced by AJ Fillari: https://bsky.app/profile/ajfillari.bsky.social

---

Season 7 cover art by Scout Wilkinson: https://scoutwilkinson.myportfolio.com/
Theme song by Will LaPorte: https://ghostdown.online/

---

Timecodes:
(00:00) - Intro
(00:25) - A callback shoutout
(06:55) - Daxter
(28:38) - Citizen Sleeper 2: Starward Vector
(52:39) - Ruffy and the Riverside
(01:02:28) - Birth by Sleep
(01:05:09) - Dragon Quest VIII
(01:29:02) - Wrapping up

---

Thanks to all of our amazing patrons, including our Eternal Gratitude members: Zachary D, IanfaceMcGee, Matt H, Clayton M, Chris Y, w0nderbrad, Shawn L, Cody R, Zach R, Federico V, Logan H, Alan R, Slink, mattjanzz, Deacon, Grok, Corey Z, Directional Joy, Susan H, Olivia K, Dan S, Isaac S, Will C, Jim W, Evan B, David H, min2, Aaron G, V, Erik M, Brady H, Joshua J, Tony L, Danny K, Seth M, Adam B, Justin K, Andy H, Demo, Parker E, Maxwell L, Spiritofthunder, Jason W, Jason T, Corey T, Minnow Eats Whale, Caleb W, fingerbelly, Jesse W, Mike T, Codes, Wesley, Erik B, mebezac, Sergio L, ninjadeathdog, Rory B, A42PoundMoose, Andrew, Justin M, Peter, Stellar.Bees, Brendan K, Scott R, wreckx, Noah O, Michael G, Arcturus, Chris R, hepahe, Cory F, Chase A, LoveDies, Nick Q, Wes K, Chris M, RB, Michaela W, Adam F, Scott H, Alexander SP, Therese K, jgprinters, Jessica B, Murray, David P, Jason K, Bede R, Kamrin H, Kyle S, Philip

★ Support this podcast on Patreon ★

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: "Pydantic is all you need" and "Pydantic is STILL all you need". Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full-stack AI engineer platform, with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provide standard definitions for tracking performance, like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just as web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to DataFusion for their backend. We spent some time on the importance of picking open source tools you understand and can actually contribute to upstream, rather than the more popular ones; listen in around 43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.

They define an "Agent" as a container with a system prompt, tools, a structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

"We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood."

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or "waiting days" between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird, because obviously I didn't create Pydantic originally for uses in AI; it predates LLMs. But we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name, because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around exactly when you convert and when you don't are complicated, but because of that, it's more than just validation. Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library.
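A minimal sketch of the behavior Samuel is describing, using the Pydantic v2 API (the model and field names here are invented for illustration):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class User(BaseModel):
    id: int    # type hints define the schema
    name: str

# Lax mode (the default) coerces the string "123" into the integer 123
print(User(id="123", name="Samuel"))  # id=123 name='Samuel'

class StrictUser(BaseModel):
    model_config = ConfigDict(strict=True)  # strict mode disables coercion
    id: int
    name: str

try:
    StrictUser(id="123", name="Samuel")
except ValidationError as exc:
    print(exc)  # id: Input should be a valid integer

# The same model also generates JSON Schema, which is what lets it act as
# one source of truth for structured outputs and tool definitions,
# as the conversation turns to next
print(User.model_json_schema())
```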
It uses type hints for the most part, and obviously does all the other stuff you want, like serialization, on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output discussion in open source that people were talking about, or was it just random?

Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along, and like the first I ever heard of him was over a weekend when I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long before version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate the JSON schema for you, it can be one source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land. Every now and then there is a new sort of in-vogue validation library that takes over for quite a few years, and then maybe something else comes along. Is Pydantic done, like the core of Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 did, as in v2 was the massive rewrite into Rust, but also fixed all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type. We reckon that can give us another three to five times speed-up. That's probably the biggest thing. Also, changing how easy it is to basically extend Pydantic and define how particular types, like for example NumPy arrays, are validated and serialized. But there's also stuff going on. For example, jiter, the JSON library in Rust that does the JSON parsing, has a SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization, for when you just want to put the data into a database and probably load it again from Pydantic. So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising?
And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is: Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then at the beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but, and I can't say which one, one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of model performance, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency, inside requests was actually Pydantic, which shows how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies. In answer to your question about how we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building good general-purpose observability inside Logfire and making Pydantic good for general-purpose use cases. And the AI has kind of come to us. Not that we want to get away from it, but the appetite, both in Pydantic and in Logfire, to go and build with AI is enormous, because it kind of makes sense, right? If you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say, globally. Obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out, so much space to do things better in the ecosystem, in a way that, like, to go and implement a database that's better than Postgres is a Sisyphean task. Whereas building tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, you released Pydantic AI recently, which is, you know, an agent framework. And early on, I would say everybody, you know, Langchain and others, gave Pydantic kind of first-class support; a lot of these frameworks were trying to use you to be better. What was the decision behind "we should do our own framework"? Were there any design decisions that you disagreed with, any workloads that you think people didn't support well?

Samuel [00:08:05]: It wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah.
I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would basically skip some stuff.

Samuel [00:08:42]: I'm shocked by the quality of some of the agent frameworks that have come out recently from well-respected names, which just seems to be opportunism, and I have little time for that. But like the early ones, I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think the gap we saw, and the thing we were frustrated by, was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say, has a lot of generics, and it's probably easier to use if you've written a bit of Rust and you really understand generics. We're not claiming that that makes it the easiest thing to use in all cases, but we think it makes it good for production applications in big systems, where type checking is a no-brainer in Python. But there are also a bunch of things we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run as part of tests, and every single print output within an example is checked during tests. So it will always be up to date. And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but are surprisingly not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but weirdly they're not followed by some of the other libraries.

Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the LLM-calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks. What does Pydantic AI do?

Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks. But I will tell you, when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and research and review all of the other things. I kind of work out what I want, and I go and build it, and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite, not that we would want to go and rename them to agent-lite, but the point is you probably build them together to build something most people will call an agent. So an agent in our case has, you know, things like a prompt, like a system prompt, and some tools, and a structured return type if you want it. That covers the vast majority of cases. There are situations where you want to go further, the most complex workflows, where you want graphs, and I resisted graphs for quite a while.
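That agent-as-container definition maps pretty directly onto code. A minimal sketch, assuming Pydantic AI's documented API of the time (the model string, prompt, result type, and toy tool are all illustrative, and keyword names may differ between releases):

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityLocation(BaseModel):
    city: str
    country: str

# An agent bundles an LLM, a system prompt, optional tools,
# and a structured return type
agent = Agent(
    "openai:gpt-4o",
    result_type=CityLocation,
    system_prompt="Extract the city and country from the text.",
)

@agent.tool_plain
def country_capital(country: str) -> str:
    """A toy tool the model may choose to call (hypothetical lookup)."""
    return {"France": "Paris"}.get(country, "unknown")

result = agent.run_sync("Where were the 2012 Olympics held?")
print(result.data)  # e.g. city='London' country='United Kingdom'
```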
I was sort of the opinion that you didn't need them and you could use standard Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to: yeah, I can totally see why graphs are useful. But then we have the problem that by default, they're not type safe. Because if you have an add_edge-style method where you give the names of two different edges, there's no type checking, right? And not all the graph libraries are AI-specific. So there's a graph library that does basic runtime type checking, ironically using Pydantic, to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using, normally, dataclasses to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there's masses of use cases for them, both in GenAI and other development. But also, software's all going to have to interact with GenAI, right? It's going to be like web. There'll no longer be a web department in a company; it's just that all the developers are building for web, building with databases. The same is going to be true for GenAI.

Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, functions, tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are the structures in your mind?

Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning. That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower-level tool that allows you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you, because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows, where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my "yeah, but you could do that in standard flow control in Python" became a less and less compelling argument to me, because I've maintained those systems that end up with spaghetti code. And I could see the appeal of this structured way of defining the workflow of my code. And it's really neat that just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it.
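Here is roughly what that looks like in code: a condensed sketch, assuming the pydantic_graph API (BaseNode, End, Graph, mermaid_code) roughly as documented at the time, with a toy counting graph standing in for a real workflow:

```python
from __future__ import annotations

from dataclasses import dataclass

from pydantic_graph import BaseNode, End, Graph, GraphRunContext

@dataclass
class DivisibleBy5(BaseNode[None, None, int]):
    foo: int

    # The return type hint is what the library introspects to build the
    # graph's edges, which is what makes the graph type-safe by construction
    async def run(self, ctx: GraphRunContext) -> Increment | End[int]:
        if self.foo % 5 == 0:
            return End(self.foo)
        return Increment(self.foo)

@dataclass
class Increment(BaseNode):
    foo: int

    async def run(self, ctx: GraphRunContext) -> DivisibleBy5:
        return DivisibleBy5(self.foo + 1)

fives_graph = Graph(nodes=[DivisibleBy5, Increment])
result = fives_graph.run_sync(DivisibleBy5(4))
print(result.output)  # 5

# ...and the mermaid diagram Swyx mentions falls out of the same type hints
print(fives_graph.mermaid_code(start_node=DivisibleBy5))
```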
I think the question always is... I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is a little bit of a nice compromise. It looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean. And that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at lots of the graph examples that, like, Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of, like, the workflow for managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calling another function. And some of those lines are "wait six days for the customer to print their piece of paper and put it in the post." And if you're writing your demo project or your proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now how do we manage that concept, to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph, and it continues to run. So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said? Or is this just like a theoretical thing we can get to someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now, and obviously Instructor, but now you're going into sort of the more like orchestrated things like Airflow, Prefect, Daxter, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right yet, at least.
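The run loop Samuel describes is short enough to sketch. This is an illustration of the idea only, not the library's actual internals:

```python
class End:
    """Marker for a finished run (stand-in for the library's End type)."""
    def __init__(self, value):
        self.value = value

async def run_graph(node, ctx):
    # call a node, get a node back, call that node...
    while not isinstance(node, End):
        # persisting `node` at this point is what would let a run pause
        # and resume days later, or continue on another machine
        node = await node.run(ctx)
    return node.value  # ...if you get an End, you're done
```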
We're, you know, we're just building a Python library. And what's crazy about our graph implementation is, sure, there's a bit of magic in introspecting the return type, you know, extracting things from unions, stuff like that. But the actual calls, as I say, are literally call a function, get back a thing, and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.

Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think, just generally, also having been a workflow engine investor and participant in this space, it's a big space. Everyone needs different functions. The one thing that I would say is, as a library, you don't have that much control over the infrastructure. I do like the idea that each new agent, or whatever unit of work you call that, should spin up in sort of isolated boundaries. Whereas with yours, I think everything runs in the same process. But you ideally want it to sort of spin out its own little container of things.

Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in, in theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now, because I'm super excited about that as a compute level for some of this stuff, where, exactly what you're saying, basically, you can run everything as an individual worker function and distribute it. And it's resilient to failure, et cetera, et cetera.

Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.

Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will get there soon.

Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python fully supported? I actually wasn't fully aware of what the status of that thing is.

Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser, is supported now by Cloudflare. They're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want to have a difference between them; you basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out.
But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing: working out how to get Python running on Cloudflare's network.

Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. So maybe there's a way that you'd build... you have just a different build of Pydantic, and that ships with whatever your distro for Cloudflare Workers is.

Samuel [00:20:36]: Yes, that's exactly it. So Pyodide has builds for Pydantic Core and for things like NumPy, and basically all of the popular binary libraries. And you're doing exactly that, right? You're using Rust to compile to WebAssembly, and then you're calling that shared library from Python. And it's unbelievably complicated, but it works. Okay.

Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI Swarm would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?

Samuel [00:21:21]: Yeah, roughly. Okay.

Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well.

Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically asking how we can give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my "just call the next agent with Python code" was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's that that led us to get to graphs. Yeah.

Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is, I think Anthropic did a very good public service, and also a kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.

Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.

Swyx [00:22:26]: Tell me if you're not. Yeah, I mean, that was the first, I think, authoritative view of what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.

Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But it's only been a couple of weeks. And of course, the point is that because they're relatively unopinionated about what you can go and do with them, names don't suit them. You can go and do lots of things with them, but they don't have the structure to go and have specific names, as much as perhaps some other systems do.
I think what our agents are, which have a name and I can't remember what it is, but this basic system of decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit, is one form of graph. Which, as I say, our agents are effectively one implementation of, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.

Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.

Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. But yeah.

Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of or care. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.

Samuel [00:24:24]: I haven't. I probably should. I should probably do it, because I should probably get better at selling to enterprises. But no, I don't. Not right now.

Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control, and maybe more observability, if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.

Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But even if you have the observability through Logfire, so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But what you need to be able to do is effectively iterate through these runs, so that you can have your own control flow, where you're like, okay, we've gone too far. And that's where one of the neat things about our graph implementation comes in: you can basically call next in a loop rather than just running the full graph, and therefore you have this opportunity to break out of it. But yeah, basically, it's the same point, which is that if you have too big a unit of work, whether or not it involves GenAI (though it's particularly problematic in GenAI), you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.

Swyx [00:25:39]: I'll drop one thing on this. We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run.
So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do a chain of thought with graphs, and you could manually orchestrate a nice little graph that does reflect, think about whether you need more inference-time compute (you know, that's the hot term now), and then think again and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.

Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it wasn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents, and we don't really need any of these abstraction layers. We can just give our model, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs: all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes which they take.

Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Claude to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.

Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.

Swyx [00:28:28]: I agree with that.

Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: we run, on every commit, at least every commit to main, tests against the live models. Not lots of tests, but a handful of them. Oh, okay. And we had a point last week where, yeah, GLA is a little bit better. GLA1 was failing every single run; one of its tests would fail. And I think we might even have commented out that one at the moment. So all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.

Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and then, you know, this can be agree-to-disagree: this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing, and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck, because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?

Samuel [00:29:49]: And I think if a company who are well known, who are respected by everyone, had come along and done this at the right time (maybe we should have done it a year and a half ago) and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI's API; OpenAI's API is the one to do. So DeepSeek support that. Grok with a K support that. Ollama also does it. I mean, if there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.

Swyx [00:30:57]: There are different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there are also Vertex and Bedrock, which, to one extent or another, effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know about it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of, you know, routing models and picking models, is evals. How do you actually figure out which one you should be using? I know you have, first of all, very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip.
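As a concrete sketch of the mocking support Alessio is referring to here, and which the conversation elaborates on next, assuming pydantic_ai's testing helpers (TestModel and Agent.override) roughly as documented; the agent and the assertion are illustrative:

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o", system_prompt="Be concise.")

def test_agent_without_the_network():
    # TestModel synthesizes a plausible response from the agent's own
    # schema in pure Python, so no real LLM is ever called
    with agent.override(model=TestModel()):
        result = agent.run_sync("What is Daxter?")
    assert result.data is not None
```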
I think you also have this TestModel, where, just through Python, you try to figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get. On the evals, I think, watch this space. I think it's something that, again, I was somewhat cynical about for some time. I still have my cynicism about some of the... well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in Logfire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your doc that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have Logfire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And yeah, as people think about evals, even, what are the right things to measure? What are the right number of samples that you need to actually start making decisions?

Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact "how many examples do you need," for example, that's a much harder question to answer, because it's deep within how models operate. In terms of Logfire, one of the reasons we built Logfire the way we have, where we allow you to write SQL directly against your data and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of Logfire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL, connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of testing of what's possible by basically writing SQL directly against Logfire, as any user could. I think the other really interesting bit that's going on in observability is OpenTelemetry centralizing around semantic attributes for GenAI. It's a relatively new project.
A lot of it's still being added at the moment. But basically the idea is that it unifies how both SDKs and/or agent frameworks send observability data to any OpenTelemetry endpoint. And so, again, having that unification allows us to go and basically compare different libraries, compare different models, much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is, I suspect, making Pydantic AI the first agent framework that implements those semantic attributes properly. Because, again, we control it, and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability. With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow. And, you know, they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTEL ties into the AI workflows? There's kind of like the question of, is a trace or a span an LLM call? Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, that I think may have now been merged, from someone at IBM, talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's by any means the common use case. But I suppose it's fine for it to be there. The majority of the stuff in OTEL is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet implemented, is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTEL is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space, and in exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability, traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would. It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of data is going to be sent.
And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud-hosted but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context; you're just storing everything. And then you're going to offer self-hosting for the platform, basically?

Samuel [00:38:23]: Yeah. So we have scrubbing roughly equivalent to what the other observability platforms have: if we see 'password' as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting that we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And this is, I think, the first time that most of a workload's performance depends on a third party. If you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that. Here you're going to have spans that maybe take a long time because the GLA API is not working or because OpenAI is overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel, though. We send information at the beginning of a span as well as when it finishes; by default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't place them on the page until you get the top-level span. With standard OTel, you can't show anything until those requests are finished. When requests take a few hundred milliseconds, it doesn't really matter, but when you're doing GenAI calls, or running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. So we do a bunch of slightly complex stuff to basically send data about a span as it starts.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. How do you see the future of these kinds of tools? Why does everybody want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen, it's going to live inside OTel, and we might help with it. But we're a tiny team; we don't have time to go and do all of that work. So OpenLLMetry: interesting project.
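An aside before Samuel's answer continues: the start-of-span trick he just described is worth a sketch, because by default an OTel span is only exported when it finishes, so a long job is invisible until it ends. The following is an illustrative simplification of the general idea, not LogFire's actual mechanism (theirs is more involved): emit a tiny marker span immediately so a UI has something to show.

```python
# Illustrative sketch of "send data when the span starts". NOT LogFire's
# implementation; it just shows why a marker helps, given that standard
# OTel exporters only see a span once it has finished.
from opentelemetry import trace

tracer = trace.get_tracer("pending-span-sketch")

def visible_long_task(name: str):
    # Emit a short-lived "pending" marker span right away; it finishes
    # (and therefore exports) immediately, so a dashboard can show that
    # the work has started.
    with tracer.start_as_current_span(f"{name} [pending]"):
        pass
    # The real span still exports only when the work completes.
    return tracer.start_as_current_span(name)

with visible_long_task("30-minute batch job"):
    ...  # long-running GenAI calls happen here
```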
But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens to the agent frameworks, and what data you basically need at the framework level to get the context, is kind of unclear; I don't think we know the answer yet. I guess this is kind of semi-public, because I was on the OpenTelemetry call last week talking about GenAI, and there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of LangChain, where it's not natively implemented, and obviously they're having quite a tough time. I hadn't really realized this before, but it struck me how lucky we are to primarily be instrumenting our own agent framework, where we have the control, rather than trying to go and instrument other people's.

Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.

Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there are some unknowns around how you send the messages that go back and forth, which is kind of the most important part, the most important thing of all, and that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation of. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.

Swyx [00:42:25]: I'm just browsing through the specification, and I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens, and obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future.

Samuel [00:42:54]: Yeah. I mean, that's what's neat about OTel: you can always go and send another attribute, and that's fine. It's just that there are a bunch that are agreed on. But I would say, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.

Swyx [00:43:14]: Or you fall behind because you've got other things going on.

Samuel [00:43:17]: Yeah, yeah. That's fair.

Swyx [00:43:19]: Any other observations about building LogFire, actually? Let's talk about this. So you announced LogFire. I was only familiar with LogFire because of your Series A announcement; I actually thought you were making a separate company. I remember some amount of confusion when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has two products, an open source thing and an observability thing, correct? Yeah. I was just curious about any learnings building LogFire.
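Before the conversation moves to LogFire's internals, the draft attributes Swyx was just browsing are easy to picture in code. A hedged sketch: the gen_ai.* names below follow the conventions as they stood at the time and have churned since (the token-usage names in particular later changed), and the myapp.* reasoning-token attribute is invented here to show Samuel's "you can always send another attribute" point, since the spec had no such field.

```python
# Hedged sketch of instrumenting one LLM call with the draft
# OpenTelemetry GenAI semantic conventions. The gen_ai.* names are taken
# from the evolving spec and may differ in current releases; the myapp.*
# attribute is made up to show spec-plus-custom usage.
from opentelemetry import trace

tracer = trace.get_tracer("genai-conventions-sketch")

with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.request.top_p", 0.9)
    # ... the actual model call happens here ...
    span.set_attribute("gen_ai.usage.prompt_tokens", 120)
    span.set_attribute("gen_ai.usage.completion_tokens", 45)
    # Not in the spec: OTel happily carries application-defined
    # attributes, e.g. reasoning tokens the spec didn't yet cover.
    span.set_attribute("myapp.gen_ai.usage.reasoning_tokens", 80)
```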
So classic question is, do you use ClickHouse? Is that the standard persistence layer? Any learnings doing that?

Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension for analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, well, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized sooner that Timescale wasn't right; ClickHouse and Timescale both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.

Swyx [00:44:28]: Okay. So, as a database nerd, I have to double-click on this, right? ClickHouse is supposed to be the ideal backend for anything like this, and then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect, because Timescale is an extension on top of Postgres, not super meant for high-volume logging. Tell us about those decisions.

Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday, said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built their proper JSON support. But back when we were trying to use it, a year or a bit more ago, everything had to be a map, and maps are a pain when you're trying to look up JSON-type data. And obviously all these attributes, everything we're talking about in terms of the GenAI stuff: you can choose to make them top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile, and that was a problem with ClickHouse. Also, ClickHouse had some really ugly edge cases. By default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. I complained about that a lot, and then they changed it to raise an error and say you have to have the same unit. Then I complained a bit more, and as I understand it now, they convert between units. But stuff like that, when a lot of what you're doing is comparing the durations of spans, was really painful. Also things like: you can't subtract two datetimes to get an interval; you have to use the date-sub function. The fundamental thing is that because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than it does if you're building a platform on top, where your developers write the SQL once and, once it's written and working, you don't mind too much. I think that's one of the fundamental differences. The other problem that I have with ClickHouse and, in fact, Timescale is the ultimate architecture, the Snowflake-style architecture of binary data in object storage queried with some kind of cache nearby. They both have it, but it's closed source, and you only get it if you go and use their hosted versions.
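Before Samuel's answer continues, the SQL ergonomics he is complaining about are easier to see as queries. These are hedged reconstructions from his description of older ClickHouse behavior; as he says, some of this has since been fixed, so treat the dialect details as assumptions rather than current documentation.

```python
# Hedged reconstructions of the ClickHouse quirks described above,
# written as SQL strings. They reflect older ClickHouse behavior as
# recounted in the conversation and may be fixed in current releases.

# Interval comparison once looked only at the number, not the unit,
# so this could effectively evaluate "2 > 1" and come back true:
interval_quirk = "SELECT INTERVAL 2 NANOSECOND > INTERVAL 1 SECOND"

# Subtracting two DateTimes directly wasn't the way to get a duration;
# you reach for a function (dateDiff here) to compare span lengths:
span_durations = """
SELECT dateDiff('second', start_time, end_time) AS duration_s
FROM spans
ORDER BY duration_s DESC
"""
```

Small frictions like these matter more than usual when, as Samuel notes, the people writing the SQL are end users rather than your own developers.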
And so even if we had got through all the problems with Timescale or ClickHouse, we would end up in a place where they would want to be taking their 80% margin, and that would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us, as a team with a lot of Rust expertise, DataFusion, which is implemented in Rust, is something we can literally dive into and change. So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string-contains. It's just Rust code, and I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, that's something we needed, and I was able to go and implement it in a weekend using the JSON parser that we built for Pydantic Core. So DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. We can go and implement stuff on top of it in a way that would be much harder in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++, but as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really asking, what should I pick so that I can contribute the most back to it? But you obviously have an open-source-first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better startup, faster moving and headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community; Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff. But my last question on LogFire is really that at some point you run out of community goodwill, the "oh, I use Pydantic, I love Pydantic, I'm going to use LogFire" effect. Then you start entering the territory of the Datadogs, the Sentrys, and the Honeycombs. So where are you really going to spike here? What's the differentiator?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud-first. The same is going to happen to GenAI. So whether you're trying to compete with Datadog or with Arize and LangSmith, you've got to do first-class, general-purpose observability with first-class support for AI.
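Rewinding to the database story for a moment before Samuel's answer continues: the "toolbox to build a database with, not a database" point is visible even without writing Rust, because DataFusion ships Python bindings (the datafusion package) that embed the whole query engine in-process. A minimal sketch, assuming the bindings' documented API; spans.csv is a hypothetical file.

```python
# A minimal sketch of DataFusion as an embeddable query engine, using
# its Python bindings (`pip install datafusion`). API names assume the
# documented bindings; "spans.csv" is a hypothetical file.
from datafusion import SessionContext

ctx = SessionContext()
ctx.register_csv("spans", "spans.csv")  # register a table from a file

# Run SQL in-process -- no server, no cluster -- and collect the result.
df = ctx.sql(
    "SELECT trace_id, count(*) AS span_count "
    "FROM spans GROUP BY trace_id ORDER BY span_count DESC"
)
print(df.to_pandas())
```

The design choice this illustrates: because the engine is a library rather than a service, a team can patch a slow kernel or bolt on JSON support themselves, which is exactly the agency Samuel describes.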
And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction, and to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't had. For all I'm a fan of Datadog and what they've done: if you search 'Datadog logging Python' and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well, but there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where you can only use, say, the one-year-old version of it as open source. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source-available, just meant that we avoided that whole subject matter. I think the other way to look at it is that, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic; Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not trying to walk the line of it's open source, but really we've made it hard to deploy, so you probably want to pay us. We're trying to be straight that it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So I saw this new thing, I don't know if it's a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs; I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. It doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day on what you can spend on it. The other thing we wanted to b

Into the Aether
PNG Face (feat. Kingdom Hearts Birth by Sleep Final Mix, the PSP, Ridge Racer)

Into the Aether

Play Episode Listen Later Feb 5, 2025 102:21


Honestly, no joke I write here would be funnier than the end of this episode.

Discussed: Trust falls, Twilight, Kingdom Hearts Birth by Sleep Final Mix on the PS4, a little bit of PSP stuff, Flip and Scratch are back, Ninja Gaiden 2 Black, Eternal Strands, Citizen Sleeper 2: Starward Vector, Civilization VII, favorite 90s Disney movies, Home ImPSProvement, the Playstation TV, PNG Face, Ridge Racer, Ridge Racer 2, Gran Turismo on the PSP, Riiiiidge Raceeeer, Daxter, Secret Agent Clank, Jak and Daxter The Lost Frontier, wrapping up, stick and hoop, craps

---

Find us everywhere: https://intothecast.online

Buy some NEW merch if you'd like: https://shop.intothecast.online

Join the Patreon: https://www.patreon.com/intothecast

---

Follow Stephen Hilger: https://stephenhilgerart.com/

Follow Brendon Bigley: https://bsky.app/profile/bb.wavelengths.online

Produced by AJ Fillari: https://bsky.app/profile/ajfillari.bsky.social

---

Season 7 cover art by Scout Wilkinson: https://scoutwilkinson.myportfolio.com/

Theme song by Will LaPorte: https://ghostdown.online/

---

Timecodes:

(00:00) - Intro
(00:54) - An update for Brendon
(03:25) - Kingdom Hearts Birth by Sleep Final Mix on the PS4
(43:18) - Four* games that Brendon is dying to play
(46:52) - 90s Disney movies
(49:26) - Hardware chat
(01:08:10) - The Sony AVT
(01:08:34) - Ridge Racer
(01:33:08) - Wrapping up

---

Thanks to all of our amazing patrons including our Eternal Gratitude members:

Zachary D
IanfaceMcGee
Matt H
Clayton M
Chris Y
w0nderbrad
Shawn L
Cody R
Zach R
Federico V
Logan H
Alan R
Slink
mattjanzz
Deacon
Grok
Corey Z
Directional Joy
Susan H
Olivia K
Dan S
Isaac S
Will C
Jim W
Evan B
David H
min2
Aaron G
V
Erik M
Brady H
Joshua J
Tony L
Danny K
Seth M
Adam B
Justin K
Andy H
Demo
Parker E
Maxwell L
Spiritofthunder
Jason W
Jason T
Corey T
Minnow Eats Whale
Caleb W
fingerbelly
Jesse W
Mike T
Codes
Wesley
Erik B
mebezac
Sergio L
ninjadeathdog
Rory B
A42PoundMoose
Andrew
Justin M
Peter
Stellar.Bees
Brendan K
Scott R
wreckx
Noah O
Michael G
Arcturus
Chris R
hepahe
Cory F
Chase A
LoveDies
Nick Q
Wes K
Chris M
RB
Michaela W
Adam F
Scott H
Alexander SP
Therese K
jgprinters
Jessica B
Murray
David P
Jason K
Bede R
Kamrin H
Kyle S
Philip

★ Support this podcast on Patreon ★

Hardcore Gaming 101
Jak II (and Doctor Hauzer!)

Hardcore Gaming 101

Play Episode Listen Later Jan 28, 2025 140:15


Join the HG101 gang as they discuss and rank Naughty Dog's slightly darker, slightly better (?) Jak and Daxter sequel. Then stick around for Doctor Hauzer, the Japan-only survival horror game for the 3DO! This weekend's Patreon Bonus Get episode will be AUTOBAHN TOKIO — a 3DO arcade racer with some seriously confusing geographical implications! Donate at Patreon to get this bonus content and much, much more! Follow the show on Bluesky to get the latest and straightest dope. Check out what games we've already ranked on the Big Damn List, then nominate a game of your own via five-star review on Apple Podcasts! Take a screenshot and show it to us on our Discord server! Intro music by NORM. 2024 © Hardcore Gaming 101, all rights reserved. No portion of this or any other Hardcore Gaming 101 ("HG101") content/data shall be included, referenced, or otherwise used in any model, resource, or collection of data.

PS THIS IS AWESOME!
361 - The Future of PlayStation

PS THIS IS AWESOME!

Play Episode Listen Later Jan 22, 2025 79:26


PS This is Awesome: Episode 361

Fred is almost finished with Final Fantasy XVI, while Jake just wrapped up Trepang 2! Now he's on the hunt for his next big AAA adventure. If you have suggestions, let him know!

In this episode, we're diving into The Future of PlayStation. We reflect on how Sony has shaped the gaming landscape over the years, from introducing 3D graphics and CD-quality sound in 1994 to championing indie developers and story-driven single-player games. We explore key innovations like the DualShock controller, Blu-ray integration, and the rise of online services with the PS4.

As for the future, what's next for PlayStation? Cloud gaming is gaining ground, but with live service games faltering, Sony might need to rethink its strategy. Could we see a move toward Early Access, more PC integration, or a resurgence of multiplayer modes in single-player games? Plus, we speculate about which franchises and studios, like Naughty Dog and Uncharted, should lead the charge.

In news, the rumored State of Play in February has us wondering what surprises are in store. Meanwhile, Arken Age is out now on PSVR2, earning high praise for its combat despite some story critiques. The Until Dawn movie just dropped its first trailer, and it looks promising! On a bittersweet note, Sony canceled Bluepoint's live-service God of War project and Bend Studio's new title, but both studios remain intact.

Finally, PlayStation Blog shared its top 10 platformers available on PS Plus, including classics like Jak and Daxter, Ape Escape, and Celeste. There's a lot to unpack, so hit play and join us for the conversation!

By joining our Patreon community for ONLY $1.00 per month, you'll also enjoy these exclusive benefits:

Early Access: Be the first to listen to our episodes as soon as they're ready. Get ahead of the game and dive into the latest news, reviews, and discussions.

Personalized Shoutout: As a token of our gratitude for your support, we'll give you a special shout out during one of our podcast episodes, acknowledging your contribution and dedication to our show.

Custom Die-Cut Vinyl Sticker: Receive an exclusive custom die-cut vinyl sticker featuring our podcast's unique design. Showcase your support with this limited-edition collectible.

Your support goes a long way in helping us continue to create the content you love. It's a simple and direct way to show your appreciation for our podcast.

To become a patron and unlock these exciting benefits, visit www.patreon.com/psthisisawesome today. Your support keeps us going and ensures that we can keep delivering top-notch PlayStation content.

Please, if you enjoyed the content, or even if you didn't quite enjoy this one, we encourage you to come back. We try to offer something for everybody. Please share with your friends and help us spread the show as we try to build a bigger community here! As always, you can support our show at our Patreon Page. Thanks for listening.

http://www.patreon.com/psthisisawesome

0:00 - INTRO
9:50 - GAMES WE'RE PLAYING
16:31 - LISTENER FEEDBACK
25:50 - FUTURE OF PLAYSTATION
57:00 - BEND AND BLUEPOINT CANCELED GAMES
1:06:25 - NEXT STATE OF PLAY INCOMING?
1:10:00 - UNTIL DAWN TRAILER REACTION
1:13:41 - ARKEN AGE COMING TO VR2
1:14:50 - TOP 10 PS PLUS PLATFORMERS

Support PS This is Awesome!

Hosted on Acast. See acast.com/privacy for more information.

GlitchCube
Ep. 222: Rapping Up The Year With Dragon Age: The Veilguard

GlitchCube

Play Episode Listen Later Dec 30, 2024 37:32


Hey there Cubies! We are closing out 2024 with some great games, some from the backlog and some to prepare for our final game of the year episode. Games like Atlyss, Jak & Daxter, Call of Duty: Black Ops, and the surprisingly good Dragon Age: The Veilguard.

------------------

Hosts: Christian & Chris

Don't forget to leave us a cheeky review and we will read it on the show. Tell your friends that they need some GlitchCube in their life.

Breaking Change
v26 - Luigi's Mansion

Breaking Change

Play Episode Listen Later Dec 14, 2024 204:11


I'd write more here, but I've got places to be. Becky, Jeremy, and I are going to engage in some holiday festivities. We have a couple gingerbread houses to make and a tree to trim. And no nog to speak of. Really, that's all you get by way of show notes this time; deal with it. Send your complaints to podcast@searls.co and they will be read on air.

Some bullet points below the fold:

• My 90-minute, outdated guide to setting up a Mac
• Aaron's puns, ranked
• Jim Carrey is 62 and can't even retire
• I bought my 8 year old a switch and didn't realize how much games cost
• Teen creates memecoin, dumps it, earns $50,000
• Startup will brick $800 emotional support robot for kids without refunds
• Install the Mozi app (manifesto here | app here)
• Vision Pro getting PSVR2 controllers
• The 2024 Game Awards news roundup
• Intergalactic: The Heretic Prophet looks badass, but is it too inclusive for The Gamers?
• We don't talk about Luigi
• An invisible desktop app for cheating on technical interviews (HN comments)
• Sora is out, but it's not good yet
• Indiana Jones and the Great Circle is out, and it is good yet
• Emudeck is so great it shouldn't be legal, and some people probably think it isn't
• Pikmin
• Stay tuned to my YouTube channel for upcoming LIVE streams

Transcript:

[00:00:00] Thank you. [00:00:29] Good morning, internet. [00:00:32] I started speaking before I realized, as an asynchronous audio production, it's actually pretty unlikely that it's the morning where you are. [00:00:43] Although, if it is the morning, coincidentally, please feel free to be creeped out, check over your shoulder. [00:00:51] Today was, I woke up with Vim and Vigor this morning, super excited to take on the day, thinking maybe I've got what it takes to record an audio production today. [00:01:07] And then we have an elderly coffee pot. [00:01:11] I don't want to completely put the blame on it because we were using it wrong for several years. [00:01:24] And it's a long story that I will shorten to say, any piece of consumer electronics or appliances in America, the half-life keeps decreasing. [00:01:37] And so when I say elderly coffee pot, I mean that we bought this coffee pot post-COVID. [00:01:42] And it's already feeling like, oh, we should probably get a new coffee pot, huh? [00:01:45] What happens is, from time to time, heat will build up in the grounds dingus. [00:01:55] I'm just realizing now that I'm like, you know, I'm not a coffee engineer. [00:01:58] Some of you are. [00:02:00] But, you know, of course, we all know that the dingus is connected to the water spigot, which is above the craft. [00:02:09] And what happens, as far as I can tell, is once in a while, you get all that hot water and grounds swirling around. [00:02:20] And if it clogs at all, like if it doesn't release just so, the whole little undercarriage, again, this is a technical term, just stay with me. [00:02:30] And we'll pop forward like three millimeters, which is just enough for the water to kind of miss its target on the craft and then spray all who's he what's it's, as well as for the spigot to start just kind of like splurring, you know, this water coffee slurry everywhere. [00:02:49] And so I went after, you know, but then you still get the triumphant ding dong sound that the coffee is ready. [00:02:56] So I walked over to the coffee expecting like, yes, it's the best, best way to start my day or whatever. [00:03:06] Pull out the coffee. [00:03:07] And the pot is too light. [00:03:10] And I had a familiarity of like what that means.
[00:03:13] It means like there is water somewhere. [00:03:17] And it's not in this pot. [00:03:19] And so it's just like, you know, this big, big machine we actually have we've put because of our Mr. [00:03:26] Coffee's, you know, elderly onset incontinence. [00:03:33] We have we have put the entire coffee pot on a tray, like a rimmed silicone tray that you would use for like, I guess, a dog feeding bowl, right? [00:03:45] A dog, you know, messily eats food and slaps water around and stuff. [00:03:49] And you don't want it all over your hardwood. [00:03:50] Like you'd put this underneath that and it would catch some of the water. [00:03:53] So we I spent the first 30 minutes of my waking life today getting my hopes up that I was going to have coffee, followed by, you know, painstakingly carrying this entire cradle of of of coffee pot full of hot brown liquid. [00:04:10] That would stay in all of my clothes and, you know, get on the cabinets and stuff with a silicone underbelly thing. [00:04:18] And just kind of like, you know, we've got one of those big we're very fortunate to have one of those big farmers, farmer house, farmhouse. [00:04:25] I never know what to call it. [00:04:27] Steel, basically a double wide sink. [00:04:30] So what's nice about a double wide sink is that if you've got a problem in your kitchen and you're only a few steps away, whether it's the coffee pot part of the kitchen or the fridge or the freezer or the God forbid, the range or the oven, you can just sort of strategically hurl whatever it is you're holding just about into the into the sink. [00:04:51] And then once it hits the sink, it's, you know, the the the potential damage is limited. [00:04:57] So I gently hurled my coffee apparatus. [00:05:02] Is that the plural of apparatus? [00:05:04] One wonders into the into the into the sink and then spent the next 20 minutes, you know, scrubbing them and all to make another pot. [00:05:13] And Becky, of course, walks down the minute that the second pot is about to be finished. [00:05:18] And I'm like, I've already seen some shit and I'm going to go record a podcast now. [00:05:22] And that swallow you just heard was me having a sip of coffee that was not disgusting, but not great. [00:05:31] But I'll take it over where I was an hour ago. [00:05:39] Thank you for for subscribing as a as a true believer in breaking change. [00:05:47] We're coming up on one year now. [00:05:49] It's hard to believe that it's already been a year, not because this has been a lot of work or a big accomplishment, but just because the the the agony of existence seems to accelerate as you get older. [00:06:03] It's one of the few kindnesses in life and so as we whipsaw around the sun yet again, we're about to do that. [00:06:11] This is the 26th edition version 26 of the podcast. [00:06:17] I've got two names here to release titles and I haven't picked one yet. [00:06:22] So as a special. [00:06:24] Nearing the end of the year treat. [00:06:29] I'm going to pitch them both to you now, right? [00:06:31] So so we're in this together. [00:06:33] I like to think this is a highly collaborative one person show. [00:06:37] Version 26 rich nanotexture. [00:06:42] And that's a nod to the MacBook Pro has a nanotexture anti-glare screen coding option. [00:06:52] It's a reference to the rich Corinthian leather that was actually it's a Chrysler reference. [00:06:58] It's a made up thing. [00:06:59] There is no such thing as Corinthian leather, but like that's what they called their their seating. 
[00:07:03] And Steve Jobs referenced that as being the inspiration for I think it was the iPad calendar app. [00:07:13] With the rich Corinthian leather up at the top during the era of skeuomorphic designs back in 2010, 2009, maybe I can't remember exactly when they I think it's 2010 when he had his famous actually leather chair demonstration of the iPad. [00:07:28] Maybe the reason that that stood out to me was the car reference because it is it is an upsell. [00:07:34] The nanotexture $150 if you want to have a don't call it matte finish. [00:07:41] The other one, so that's option one, rich nanotexture. [00:07:46] And I didn't love it because I couldn't get texture. [00:07:49] I couldn't get the same Corinthian, right? [00:07:53] Like you want that bite, the multisyllabic bite that adds the extra, you know, the gravitas of a luxury good. [00:08:04] Yeah, texture just didn't have it for me. [00:08:06] But then if you change that word, it doesn't make sense. [00:08:08] So I mean, the other option two that came to mind version 26 don't don't by the way, don't think I'm going to edit this in post and fix it. [00:08:19] I will not. [00:08:20] I will ultimately land on one of these and that will be the title that you saw on your podcast player. [00:08:25] Or maybe some third thing will come to mind and then this conversation will be moot. [00:08:29] I do not think of this collaborative exercise. [00:08:32] Just imagine it's a it's a it's a quantum collaboration. [00:08:37] So by observing it, that's you actually took part. [00:08:41] You opened your podcast player and then the yeah, the entangled, you know, bits just they coalesced around one of these two names or some third name. [00:08:58] It's all just statistics version 26 Luigi's Mansion, which is a nod to two things at once. [00:09:05] I'm going to talk a little bit about GameCube, but also I'll probably not escape mentioning Luigi Manjoni Manjoni man. [00:09:15] You know, I haven't been watching the news. [00:09:17] I don't know how to pronounce his name, but it looks enough like mansion that I was like, oh, man. [00:09:21] I bet you there's a Nintendo PR guy whose day just got fucking ruined by the fella who is a overnight folk hero. [00:09:30] More attractive than most assassins, I would say. [00:09:35] Great hair. [00:09:36] Good skin. [00:09:37] Apparently, skincare Reddit is all about this fella who murdered in cold blood the CEO of UnitedHealthcare. [00:09:45] If you haven't caught the news, if you're even less online than I am. [00:09:51] And yeah, so I'm trying to decide. [00:09:53] I think Luigi's Mansion is probably going to win. [00:09:56] It's more timely. [00:09:57] It's the first time the name Luigi has come up in the last year. [00:10:00] And I may have mentioned nanotexture before when discussing Apple's very compromised studio display. [00:10:11] So I'm leaning Luigi's Mansion, but, you know, don't tempt me. [00:10:15] I might switch. [00:10:18] I'm going to just keep drinking coffee because I got to power through this. [00:10:21] Let's talk about some life stuff. [00:10:24] I so when we last talked that way back in the heady days of version 25, I had just gotten off a plane from Japan. [00:10:34] I was still a little bit jet lagged. [00:10:36] I recorded later in the evening. [00:10:38] I was tired. [00:10:39] You know, I was still overcoming. [00:10:41] I listened to the episode, realized I was overcoming a cold. [00:10:44] You know, then Becky shortly thereafter, after recording, she developed a pretty bad cough. 
[00:10:51] And so we've both been sleeping relatively poorly. [00:10:53] And I can't complain about this cough because her having a cough for four nights is nothing like me snoring on and off for over a year. [00:11:02] And I think the fact that her cough is consistent is actually a kindness compared to the sporadic nature of my snoring, where it's like I might go a week without it. [00:11:11] And then all of a sudden there's like, bam. [00:11:14] So she doesn't, you know, it's like sneaks up on her and that's not fair. [00:11:17] So so she's got a cough and I haven't been sleeping particularly well. [00:11:20] Maybe that's it. [00:11:22] I also, you know, I wanted to dry out because I was living on shoe highs, you know, canned cocktails in Japan for way too long. [00:11:30] Just drinking, you know, five whole dollars of alcohol every day, which is an irresponsible amount of alcohol. [00:11:36] It turns out. [00:11:40] Yeah, that's one nice thing about living in Orlando and theme park Orlando is that the average price of a cocktail here is seriously $20. [00:11:49] I think it is. [00:11:51] I am delighted and surprised when I find a cocktail under $20. [00:11:55] That's any good. [00:11:55] In fact, the four seasons right around the corner, their lobby bar has a some of the best bartenders in the state of Florida. [00:12:05] Like they went all kinds of awards. [00:12:06] And so when you say a lobby bar, you think it sucks. [00:12:09] But it's actually it's like it's a it's a restaurant with a room if you're ever around and they still do a happy hour with like $4. [00:12:18] It was $4 beers. [00:12:19] I think they finally increased to $5 beers draft beer. [00:12:23] And it's all craft. [00:12:25] You know, it's all fancy people stuff. [00:12:27] And they do it's I think it's $10 margaritas, French 75s, and they got some other happy hour cocktail. [00:12:37] It was highballs for a while. [00:12:39] Whiskey highballs was like probably centauri toki or something. [00:12:43] I gotta say like that $10 margarita. [00:12:47] They'll throw some jalapeno in there if you want some tahini rim, you know, they do it up. [00:12:52] They do it well. [00:12:54] But that might be the cheapest cocktail I've had in all of Orlando is at the Four Seasons. [00:13:01] Famous for that TikTok meme of the Four Seasons baby, if you're a TikTok person. [00:13:06] Anyway, all that all all this drinking talk back to the point. [00:13:11] I've been not drinking for a week. [00:13:12] And I, you know, I'm back to tracking my nutrients every day. [00:13:17] The things that I consume and adding up all of the protein and carbohydrate and realizing [00:13:21] if you don't drink, it's actually really easy to blow past one's protein goals. [00:13:25] And so I had one day where I had like 240 grams of protein, which is [00:13:28] enough protein that you'll feel it the next morning if you're not used to it. [00:13:34] And I still was losing weight. [00:13:38] I lost like five or six pounds in the last week. [00:13:43] And to the point where it was like, you know, I was feeling a little lightheaded, [00:13:47] a little bit woozy because I wasn't drinking enough is the takeaway. [00:13:52] So so thank God we got to go to a Christmas party last night. [00:13:57] It was it was great Gatsby themed. [00:13:58] And I dressed up like a man who wanted to do the bare minimum to not get made fun of at the party. [00:14:05] So I had some some suspenders on instead of a belt, which was the first time I ever put on suspenders. 
[00:14:13] They were not period appropriate suspenders simply because they had the, you know, the [00:14:18] little class B dues instead of how they had some other system for I don't I don't fucking know. [00:14:25] Like I, I had chat GPT basically helped me through this. [00:14:28] And it's like, hey, you want these kinds of suspenders? [00:14:30] I'm like, that sounds like an ordeal. [00:14:31] How about I just get some universal one size fits all fit and clip them in? [00:14:36] I also had a clip on bow tie. [00:14:37] So that worked. [00:14:39] When you think clip on bow tie, I guess I'd never used one before, but like it, I always [00:14:45] assumed it would just be like, you know, like a barrette clip that would go in front of the [00:14:49] front button and look silly for that reason. [00:14:51] And maybe that's how they used to be. [00:14:53] But it seems these days, if you want to spend $3 on a fancy clip on bow tie with a nice texturing, [00:14:58] I'll say, uh, it's just pre it's a pre tied bow with a still wraps around your neck. [00:15:04] It's just, it has a class mechanism, which seems smart to me, right? [00:15:08] I don't know what. [00:15:09] Look, if you're really into men's fashion, uh, there's this weird intersection or this tension [00:15:19] between I'm a manly man who, who ties my own shoes and, you know, kills my own dinner and [00:15:25] stuff. [00:15:25] And I, I, for fuck's sake, tie my own bow tie from scratch every day. [00:15:29] Right? [00:15:29] Like there's a toxically masculine approach to bow ties, but at the same time, it is such [00:15:35] a foofy accoutrement. [00:15:37] It's like an ascot, um, that the idea of like a manly man, like a man trying to demonstrate [00:15:43] his manliness by the fact that he doesn't use a clip on bow tie, uh, came to mind yesterday [00:15:50] when I was, uh, struggling even with the clasping kind. [00:15:54] I was like, man, I wish I could just get this to anyway. [00:15:58] Um, I had a vest at a gray vest. [00:16:03] This is all brand new territory for me. [00:16:05] Uh, yeah, I, I've, I've leaned pretty hard into the t-shirt and shorts and or jeans life [00:16:10] for so long. [00:16:12] Uh, the, the fella in front of us when we, when we were checking in, cause they took little [00:16:16] photos of you, uh, all of the women had the same exact flapper dress from Amazon, you know, [00:16:22] with the, the, the, the hairband thing with the, you know, fake, the polyester peacock tail. [00:16:28] Becky's looked the best. [00:16:29] I'm not gonna, I'm not even lying. [00:16:32] Uh, uh, her dress actually fit. [00:16:35] He had some, uh, very ill fitting flapper costumes that these women couldn't even move in. [00:16:40] Um, it was interesting. [00:16:42] Uh, but the, the fella in front of us at check-in was wearing a, a, a full blown, you know, tuxedo [00:16:48] get up that he brought from home. [00:16:50] And he was talking about, Oh yeah, well he's got two of them and his wife, you know, ribbed [00:16:54] him a little bit that he could only fit in one. [00:16:55] I was like, man, owning a tuxedo, that's nuts. [00:16:58] Like, and then it like turns out he's like got all these suits and these fancy clothes and [00:17:02] he's an older gentleman. [00:17:05] Uh, but my entire career only the first few years did I have to think about what I was [00:17:10] wearing and, and it never really got beyond pleated, you know, khakis and a starched shirt. [00:17:18] And, and I had, I had to wear a suit maybe on two sales calls. 
[00:17:22] Um, and they were always the sales calls that were just, uh, there were certain sales demos [00:17:30] when I was a, a, a baby consultant, these really complex bids. [00:17:39] I remember we were at cook County once, uh, uh, the, the county that wraps Chicago and it [00:17:44] has a lot of functions and facilities that operate at the county level. [00:17:48] So, but of course we're in Chicago in some, you know, uh, dystopian office building. [00:17:54] That's very Gothic, I should say. [00:17:57] And the, the solution that we were selling was a response to a bid around some kind of [00:18:05] document, electronic document ingestion and, and, and routing solution. [00:18:09] And so what, what that meant was it was like a 12 person team. [00:18:14] It was a big project working on this pitch. [00:18:18] And most of the work and most of the money came from the software side at the end of the [00:18:23] process. [00:18:23] It's like, you're going to get IBM file net and you're going to get all these different, [00:18:26] uh, enterprise tools. [00:18:28] And we're going to integrate, uh, with all your systems and, and build these custom integrations [00:18:32] that you've asked for here and here and here. [00:18:33] But the, the, the hard part is the human logistics of how do you get all of their paper documents [00:18:41] into the system. [00:18:42] Uh, and that was my job was I had to get paper and then scan it, uh, with a production, big [00:18:50] Kodak funkin fucking scanner. [00:18:52] Uh, and then use, what was it? [00:18:54] Kofax capture or something like a, like an OCR tool of the era. [00:18:59] And the thing about it is that scanning is not, was not ever a science and neither is [00:19:07] OCR, the OCR stuff and OCR stands for optical character recognition. [00:19:10] So you'd have a form and you'd write on the form, like, you know, uh, uh, uh, uh, some, [00:19:15] some demo address and name and all this. [00:19:19] I spent. [00:19:22] So like the people doing the software, like they, they could just like click a button and [00:19:26] like, they could even just use fakery, right? [00:19:29] Like, Oh, the API is not really there, but I'll always return this particular, like, let's [00:19:33] call it an XML soap message. [00:19:34] And so the, the software guys clocked in, clocked out, got back to their billable work. [00:19:39] I, because the stakes were so high in this particular, uh, and I'm here right now explaining [00:19:46] all of this nonsense because I had to wear a suit and that was also really bad, but I [00:19:51] was in Chicago late at night with a group of like, at that point it was like 9 PM and it [00:19:54] was just me and two partners. [00:19:56] Cause the partners had a sickness called avoid family, stay at work. [00:20:02] And, uh, I, I was just running over and over and over again where I'd like, you know, [00:20:09] I'd take the paper, I'd put it through the scanner and it would get 90% of the OCR stuff [00:20:13] done, or I'd get it perfect. [00:20:15] And it would scan everything just right, which would result in the downstream, you know, after [00:20:21] the capture, like all of my integrations, like would route it to the right thing. [00:20:24] So that like, it was basically a game of mousetrap or dominoes where like my task was both [00:20:29] the most important to being able to demonstrate, but also the most error prone, but also the [00:20:37] least, uh, financially like, um, valuable to, to our services company. 
[00:20:42] And so I had no support, uh, on top of that, they, the, our fucking it people pushed out some [00:20:49] kind of, um, you know, involuntary security update security and bunny quotes that, that [00:20:57] slowed my system down dramatically in the course of just like a day. [00:21:01] And I had, I had no way to test for this. [00:21:04] So I remember I was up at like 11 PM at that point, trying to make this work consistently [00:21:10] and realizing that the only way to get it to run it all required me to, um, install a virtual [00:21:16] machine, put windows in the virtual machine, install all this software inside that virtual [00:21:22] machine, and then run it there because only in the black box of an encrypted virtual machine [00:21:27] image or, uh, you know, a virtual machine, like disc image, could I evade all of the accountant [00:21:33] bullshit that was trying to track and encrypt and, and, and muck with files and flight and [00:21:38] so forth. [00:21:39] And so it was only around like probably one 30 or two that I got to bed and our, our demo [00:21:46] was like at seven in the morning and I had to wear a suit. [00:21:47] So if you ever wonder, Hey, why is Justin always just in a, a t-shirt and shorts? [00:21:54] Uh, I would say childhood trauma, fuck suits. [00:21:59] The only, the only time I associate like nice clothes, you know, having a lot of [00:22:03] having to dress up is church shit. [00:22:05] I didn't want to go to. [00:22:06] And usually it's like the worst church shit. [00:22:09] Like there's some cool church shit out there, you know, youth group where everyone's a horny, [00:22:14] right. [00:22:15] And singing pop songs to try to get people in. [00:22:17] That's as church shit goes, that's above average. [00:22:21] But when you're talking about like, Hey, you know, this aunt you've never heard of died and [00:22:27] we got to go all the way to goddamn Dearborn to sit in a Catholic mass, that's going to [00:22:32] be in Latin. [00:22:33] And they're going to, you know, one of those, you know, you should feel bad for him because [00:22:39] he's abused. [00:22:39] But one of the altar boys, he's going to be waving that little like incense thingy, [00:22:43] the jigger back and forth and back and forth like a metronome. [00:22:46] And, uh, you're going to get all this soot in your face, all of that, you know, frankincense [00:22:51] and myrrh and whatever the fuck they burn. [00:22:52] And, uh, yeah, then they're going to play some songs, but they're not going to be songs you [00:22:57] want to hear. [00:22:57] And you're going to be uncomfortable because I bought you this suit at JC Penny when you [00:23:01] were like nine and you're 12, you're 12 now, and you've gained a lot of weight, but [00:23:06] here we are. [00:23:07] And then you got to go and, you know, like, don't worry because after the service, there's [00:23:12] a big meal, but it's mostly just going to be, you know, styrofoam plates and plastic forks [00:23:16] and, uh, cold rubbery chicken. [00:23:19] And then a whole lot of family members who want to pinch your cheeks, uh, had an aunt that [00:23:24] always wanted to, um, put on a bunch of red lipstick and kiss me and leave kiss marks. [00:23:30] And she thought that was adorable and everyone else thought it was funny. 
[00:23:33] And for whatever reason, I wasn't a fan, uh, that's the kind of, uh, yeah, so anyway, moving [00:23:45] right along the, uh, the, the other than having to dress up, the, the Christmas party was really [00:23:50] nice because it had an all you can drink martini bar. [00:23:52] So that, that helped that took the edge off a little bit since I hadn't been drinking for [00:23:57] the previous week. [00:23:57] Uh, and it was, you know, uh, they, they had a great bartender, the, the, I assume that [00:24:07] that people drank gin martinis back in the day of Gatsby, but it seemed to be a vodka forward [00:24:12] martini bar, which I appreciated. [00:24:15] Uh, as I get older and my taste buds start dying, uh, I found myself going from dry martinis [00:24:23] to martinis with an olive to martinis with two olives to me asking for like a little bit of [00:24:30] olive juice and then drinking the martini and realizing that wasn't quite enough olive juice. [00:24:34] So that's just disgusting, but, um, it's where, uh, it's one of the signs of age, I guess. [00:24:43] Uh, so the martini bar was good. [00:24:46] Uh, they also had an aged old fashion that they'd made, you know, homemade, um, with like nutmeg [00:24:51] and cinnamon in there. [00:24:52] That was impressive. [00:24:53] Uh, so yeah, had a, had a big old Christmas party last night, had a couple of drinks, uh, [00:25:00] and, and, uh, because of the contrast, whenever I go, you know, go a week without any alcohol [00:25:06] and then I have some alcohol and then I wake up the next morning and I'm like, oh yes, I [00:25:11] know what people mean now that alcohol is poison. [00:25:13] And it's a mildly poisonous thing because I feel mildly poisoned. [00:25:19] Um, and, and I just usually feel that most days until I forget about it. [00:25:23] So it's a data point, uh, to think about, uh, uh, I, I, I had a good, good run for, [00:25:30] for a while there, just cause like when you live in a fucking theme park and there's nowadays [00:25:34] alcohol everywhere that I go and every outing, I had a good run for a few months. [00:25:40] Um, not last year, the year before where I just didn't drink at home as a rule to myself. [00:25:46] I was like, you know, I'm not going to pour any liquor for myself at home unless I'm entertaining [00:25:49] guests. [00:25:50] And, uh, even then go easy on it because I I'm, I'm, I'm going to just the background radiation [00:25:56] of existence in when you live in a bunch of resorts. [00:25:59] Uh, I'll, I'll get, I'll get, I'll get plenty of alcohol subcutaneously. [00:26:05] Um, a contact tie. [00:26:07] So maybe I'll, maybe I'll try that again. [00:26:10] I don't know. [00:26:11] It's the stuff you think about in mid December when you're just inundated with specialty food [00:26:17] and drink options, uh, do other life stuff that isn't alcohol or religion or clothing [00:26:27] related. [00:26:28] Oh, uh, uh, I've been on a quest to not necessarily save a bunch of money, not necessarily. [00:26:35] Uh, I was going to say, uh, tighten my belt, but, uh, I don't know what the suspender equivalent [00:26:43] is because I did not wear a belt last night. [00:26:45] I just wore suspenders. [00:26:46] Uh, I've been interested in, in not budgeting either. [00:26:52] Just, I think awareness. [00:26:54] Like I want, I know that a lot of money flies through my pockets every month in the form of, [00:27:01] um, SAS software subscriptions and streaming services. 
[00:27:05] I mentioned this last, uh, last go round that I was recommending, Hey, let's say, go take a [00:27:11] look at like our unused streaming subscriptions of those. [00:27:14] Uh, yesterday I did cancel max. [00:27:16] Cause I realized that, uh, if I'm not watching a lot of news, I'm not going to watch John Oliver [00:27:20] and, and they frankly, a lot of HBO's prestige shows haven't been besides they cut a Sesame [00:27:28] street and it just so happened that I canceled that day. [00:27:31] So maybe there's a, some data engineer at HBO who's like, Oh man, people are canceling because [00:27:37] we got rid of Sesame street. [00:27:38] Uh, that would be good. [00:27:40] That would be good for America to get that feedback. [00:27:43] Uh, yeah. [00:27:44] I just want awareness of like, where's the money going and in what proportion and does that sound [00:27:50] right to me? [00:27:50] Uh, and I've, there are software tools for this. [00:27:53] Uh, they are all compromised in some way. [00:27:57] For example, we just, uh, we'd used lunch money in the past, which is a cool app. [00:28:02] And it has the kind of, you know, basic integrations you would expect. [00:28:06] I don't know if it uses plaid or whatever behind the covers, but like you, you connect your, your, [00:28:11] your checking accounts, your credit card accounts. [00:28:14] It lists all your transactions is very, um, customizable in terms of rules that you can [00:28:21] set. [00:28:21] It has an API. [00:28:22] Jen is a solo co-founder and she seems really, really competent and lovely and responsive, [00:28:27] which are all great things. [00:28:29] But the UI is a little clunky for me. [00:28:32] I don't like how it handled URLs. [00:28:33] It was like, once you got all the transactions in there and, and set up, it didn't feel informative [00:28:41] because there wasn't like a good reporting or graphs that just kind of at a glance would [00:28:45] tell you, this is where your money's going. [00:28:46] At least for me. [00:28:47] Uh, additionally, like it, it can't do the Apple card. [00:28:51] That's the, that's become the crux for a lot of these services is that, um, Apple card [00:28:55] only added support for reading. [00:28:59] Uh, well now you can read, uh, uh, so I, Apple added away on iOS and specifically iPhone [00:29:07] OS to read, uh, transactions from Apple card, Apple savings and Apple cash. [00:29:14] And this was like nine months ago, if that, but copilot, uh, money is one of two apps maybe [00:29:22] that supports this. [00:29:23] And so if you, if you have, we have, we each have an Apple card and we use it for kind of [00:29:29] our silly stuff whenever we're, you know, using a tap to pay. [00:29:33] So, so if, if you want to track transactions and you don't want to manually export CSVs [00:29:40] from your wife's phone every 30 days, which is the process that I'd fallen into with, with [00:29:44] lunch money, then you, you basically have copilot money. [00:29:50] And then there's another one, maybe Monarch, uh, the copilot money. [00:29:53] People are always talking about this other app called Monarch. [00:29:55] I haven't checked it out. 
[00:29:55] I don't know if that's why they like it or if it's just the other one that's being developed [00:29:59] right now in this post mint apocalypse, as we all grapple with the fact that mint was [00:30:04] always bad, uh, but people got into it and I don't copilot money is like nice, but like [00:30:11] it, like, for example, like if I'm, uh, if I buy a, uh, if I put $10, the equivalent of [00:30:19] $10, so 1000 yen on my Starbucks card in Japan, which is totally separate because of course it [00:30:25] is there's two Starbucks cards. [00:30:27] There's the one in Japan and then the one in the rest of the world. [00:30:30] So you open the Japanese only app, you put a thousand yen on it. [00:30:33] Uh, you pay for that with Apple pay. [00:30:36] So which goes to my Apple card and copilot money will read that transaction. [00:30:40] But if you read like the text in the merchant description, it's literally like [00:30:44] staba day and it's like all no spaces. [00:30:47] It's just like 40 characters in a row to, and if you really squint, you can kind of see [00:30:52] Starbucks, Japan, um, you know, app store payment, which is, you know, like I want to [00:31:00] change that to Starbucks, Japan, and then set up a rule to just like always change that. [00:31:05] So I don't have to like memorize these random ass merchant names. [00:31:08] Uh, apparently like after, after two hours of setting up copilot money yesterday, I realized [00:31:13] that there's like both no way to set up that kind of rule. [00:31:16] The only rule that it supports is categorization of, of spending fine, but then if you set [00:31:22] up a rule and you don't like it, there's no way to edit the rules cause there's no UI for [00:31:25] rule editing. [00:31:26] And so then, you know, where do you go, but read it and you're like, okay, well there's [00:31:30] a subreddit. [00:31:30] And then like, what's half the post in the subreddit? [00:31:32] It's about, Oh, of course it's a bunch of dads who are like, I can't see my rules and I have [00:31:36] to contact support. [00:31:37] And it's been nine months. [00:31:38] And I was like, Oh God. [00:31:39] So that's, uh, if anyone's got any great budgeting software that supports Apple card, you let me [00:31:46] know. [00:31:47] Uh, and also isn't a part-time job. [00:31:50] I'm not gonna, I'm not gonna spend all day on this. [00:31:52] I'm not, I'm not gonna, I'm gonna check in on this, uh, the four times a year that I, that [00:31:58] I wake up in a cold sweat wondering, Oh my God, how many subscriptions do I have? [00:32:02] Which is, uh, I, I really missed my calling by not being a dad, I guess. [00:32:07] But it did land me on looking at rocket money. [00:32:11] Uh, so, so, so there was an app called true bill that marketed heavily with like a lot of [00:32:19] other DTC apps where the pitch was, we will negotiate your bills for you. [00:32:26] And by bills, I think that one of the reasons why this, this, this business probably struggled [00:32:31] is that there's really only two that they could reasonably negotiate on your behalf. 
You imagine they've got a call center, people who are trained, who have scripts they follow, who will doggedly keep calling back until they get the discount — all the steps you would have to go through if you wanted to call Comcast or Verizon yourself. They could basically only really negotiate your ISP and your cell phone carrier, cause those are the two that are transactional enough, that are regionalized or nationalized enough, that they could train on. And of course, those are the ones that get you in with a teaser rate and then gradually turn up the heat over the course of a couple of years.

Well, Quicken Loans rebranded as Rocket — and then Rocket-fill-in-the-blank with other products — and they bought Truebill around the same time. My understanding from a distance is that Truebill became Rocket Money in order to be an entrée into the other Rocket services. So now, when you install Rocket Money, it's still got the negotiation thing, cause that's what they market it on, but you have to slog through so much: no, I'm actually all set with credit and debt-repayment services; I'm already all set with financial advisors and retirement goals; just get me to the thing where I can pay you 35% of whatever you save me on my ISP bill.

And so of course I signed up for the first time and went through the app onboarding. I was not impressed with the bugginess of the app, but I was able to soldier on through it. And where I landed was following its little setup wizard for, first, Spectrum, which is my internet provider. I'd initially paid a hundred dollars a month when I moved here in 2021, for one gig down and, call it, 30 megabits per second up. And I can't get another ISP here — they had an exclusive agreement, that builders-and-neighborhoods bullshit — so I can't get higher upstream, and that really sticks in my craw. Nevertheless, they have increased prices about $15 each year I've been here, to the point now where I think my monthly debit is like $150, $145.

You fill it out and you give them your PIN — you've got this customer PIN that secures your account. I'm like, eh, all right, well, it's four digits, you know? And besides, I'm already on this one dead-simple plan. It's just their normal plan, and I'm paying top dollar for it. So what's the worst they could do? If somebody else were to call and change my plan up, it wouldn't cause that much lasting damage, cause it's not like I'm on some teaser rate. It's not like I've got a great deal as it is. So I let them do it. And three days later — well, I had low expectations, right?
Cause you go on Reddit — speaking of Reddit — and you search other people's experiences, and some of them are pretty hyperbolic. It's like: they changed my plan and now I'm stuck with this TV subscription for the next four years, and then they charged me a thousand dollars in imagined savings that never materialized. I'm like, shit. All right. Well, that's not good. But I gave them a shot. They came back three days later and said: congratulations, we saved you $859. I was like, what the — excuse me? Over the next 12 months, that is. It turned out they got me from $142, $145 down to 70 flat. You multiply that by 12 and it does indeed come out to eight hundred something. And I was like, damn. All right.

And so I've been looking for the other shoe to drop ever since, like something is fishy here. They didn't sign me up for other services. I did receive — I'm looking over at it now — a relatively large box with one of those wifi modem-router combo units in it. That was apparently part of the deal. I don't know if they canceled my service and then in one fell swoop also signed me up for new service. But now I've got this gigantic fucking wifi thing that wouldn't even fit in my patch box if I wanted it, which I don't. So I'm currently in this ether of: well, if the modem that I rent — for $0, one nice thing about Spectrum — is still going to work, maybe I can just keep this wifi thing in the box and not call anyone, and maybe everything will keep working and I'll pay the $70 a month. Or maybe I should send the other one back, but then that might trigger some other thing. Right?

So look, do I recommend the service? I don't really know. We'll see. Call me in a year. I should set a reminder. Oh, I'm sure if something bad happens, I'll be right on the airwaves screaming about it, like I do. But even after this experience saving me a lot of money, would I trust them with my T-Mobile account? Where I have been grandfathered in on what was called the One Choice Plus plan, from 2014 or whatever, and it's genuine, honest-to-God unlimited data without any real throttling, as far as I can tell, until you get to some absurdly high number, and you can watch your videos in HD — like, it's a good one. It's better than their Magenta crap, and a lower price than their Magenta Max thing. Well, we've got three lines, you've got the watches, and I would love to pay less for that, but — you fill out the Rocket Money form and it wants your T-Mobile login information, and that was a bridge too far for me.
I got there and I was like, you know, I could just imagine this going poorly. These plans are so complicated, and it feels like even when I call T-Mobile and ask, "Hey, how's the weather?", they click a button and it fucks up my shit for two weeks. So I'm good. I can probably afford a cell phone bill. I just would prefer not to have to pay it.

Only one other life item in the last week: I was given a special opportunity. I've talked about massages a couple of times on this program, and I mentioned the one I had most recently in a previous episode. I was wrapping up my massage with a human, like you do, and the human said: have you tried our robot massage? And I didn't know how to take that. And I said, I've heard of it; I know Becky tried it. If you check Becky's Instagram, you'll see there's a video of her getting felt up by a robot. I forget the name of the company, but it's a robot that tries to simulate the experience of a human massaging you. You're on a bed, face down; it's got arms that go back and forth on a track, and they push and whatnot. It kind of reminds me of the white birthing robot from Star Wars: Episode III, at the end, when Luke and Leia are being born — it does everything short of making the cooing sounds to get the babies to calm down. You do have a tablet, and you can pick out these pre-baked Spotify playlists while it's pushing on you.

Anyway, all that to say, I signed up, mostly cause it was free. So I had a 30-minute trial, and the fact that it's trying to imitate humans was really interesting to me, because I had just spent a month in Japan getting — what do you call it — massage chairs. The hotel chain we stay at always has massage chairs, and even bad massage chairs in Japan are pretty intense. But good ones — I'm sure you've seen a horror movie image, right, where you sit in a chair and then 25 hands grab all the parts of your body simultaneously, and that is meant to be horrific. But if there was some nice music playing, and it was illuminated, and those hands were massaging you simultaneously all over your body, maybe it would be pretty great. And that's what a Japanese massage chair is like. Cause they don't have this arbitrary conceit that a massage must happen in a format resembling how it would happen if a single human were rubbing your tiddly bits on a bed surface, which is what this robot is. Right? And so I'm trying to think of another analog — where we retain the artifice of the way it used to be before we automated it.
And sometimes we do that to keep people comfortable — like that rich Corinthian leather, or: we wanted it to look like a traditional calendar so people know what they're looking at instead of just a bunch of boxes; oh yeah, this looks like a placemat-style calendar I would have had on my desk. And then eventually that ages out, and the younger people are like, I've never seen a calendar on a desk, even though my dad grew up with one, you know? So maybe that's it, right? Maybe that's why we would have a robo-massage that presses and kneads you with just the two arms, up and down, on particular points — sometimes at the same time, sometimes just one arm. It's less efficient, is my immediate frustration, cause you could have 45 fucking arms going to town all over my body, and I'd get way more work done in 30 minutes. Right? Cause I'm just trying to min-max my existence. But instead, by imitating a human massage, nothing is really gained, because I can't see it. I'm face down. I'm looking at a silly tablet, watching imagery of forests and ocean waves and whatnot, and you can look at a weird overhead view of what your body looks like right then — it scans your body and then has a little illustration of: here's where I'm pushing you. Here I go.

It seems more to me like — you look at this unit and it's just like, this has got to cost at least 15 grand. This is an expensive, complicated piece of equipment. It feels like a lack of imagination. Somebody had the idea: let's take human masseuses out of the equation and just make a robo-masseuse thing we can put in spas — when you'd actually have a better experience, it would be cheaper, and there's more prior art at Panasonic and these other companies in Japan, if you just made a massage chair. But that would be boring, I guess.

And massage chairs — you hear the words "massage chair" right now as you're listening, and if you haven't had a real one, you know, at a Japanese denki-ya-san, on the third floor, where all the salarymen on their way home tell their wives, "Oh, I've got a big meeting with the boss," and then they go to Yamada Denki or Yodobashi Camera, take their briefcase, set it down next to one of the trial units of the massage chair, go into this little sensory-deprivation pod, and get all their bits smushed simultaneously. And they've got a remote control and can say, "Just do it hard," and they can forget their worries for 15 minutes, until one of the staff has to remind them that they don't live there and they have to go home now.
If you haven't had that experience, then when you hear "massage chair" you probably think of those $2 leather chairs that are just normal fucking chairs that maybe vibrate — the vibrating-bed equivalent you see at an airport. This is not what I'm talking about. So get your head out of there, go Google "high-end Japanese massage chair," and you might get some idea.

Also, in the course of a 30-minute massage, I encountered so many fucking Android tablet bugs. I gave them a lot of feedback — this is sort of a trial they're doing, and they wanted to know what I thought. And I gave them a lot of perspective about, well, you know, this skeuomorphic design, yada yada. But I didn't even touch the software stuff, cause there's absolutely nothing they'd be able to do with that, much less communicate it back to the company in a way that's helpful. But, you know: it would freeze, or the display would become non-responsive. One time the music just turned itself all the way up.

So many things about this design are meant to make you feel comfortable, to make you feel safe. If you move at all, or if it detects anything is off at all, it basically disengages entirely and repositions itself, and then you have to actively resume the massage, and then it's got to put the little flappy-doos back over you. It's really worried about people flipping out about a robot pressing up against them. And it extends to the firmness: you pick light, medium, or firm, and I clicked firm. And you could see a little pressure bar on the right, and even though I'd clicked the firm preset, I wasn't at a hundred percent pressure. And I was like, well, that won't do. So I jacked it up to a hundred percent right out of the gate. And the whole time, all 30 minutes — I knew that a massage was happening. I knew when contact was being made. But it was not a massage. "Back rub" would be generous. It was like somebody took an open-palm hand and just pressed it, obnoxiously, against different parts of my body, with no firmness beyond that.

So: you've got a robo-massage. It's limited in what it can do, cause it's trying to imitate a human. It's very worried about liability, which I imagine is why the max firmness is light pressure. It's fussy and it's buggy. And of course it can only do very limited regions of the body. If I were a massage therapist, I'd be like, hey, sweet — I'm going to keep having a job longer than all these programmer juckle fucks who are going to get replaced by a Claude and OpenAI.
So I'm confident that massage therapist is going to be a lucrative going concern as a career for a little while longer. Programming, I'm not so sure of. But most of us listening have already made our choice of whether we're going to be massage therapists or programmers, so we're just going to have to see how this plays out.

All right. Well, that's everything going on in my life. So let's follow up on stuff that had been going on in my life and is now continuing, or happening once again. I'm starting to realize that there's a certain theme to this show. Hmm. All right.

There are basically two major areas of follow-up today, but somehow the two of them take up 11 bullet points in my notes. So I'll try to be expeditious. The first is: I bought an M4 Pro MacBook Pro. I guess in Apple nomenclature, that's a MacBook Pro, left parenthesis, 2024, right parenthesis, with M4 Pro. Or maybe the 2024 goes at the end; maybe they don't put the date now that they have the chip name. In any case, I needed a computer that was built for Apple Intelligence — which they also crammed into the fucking name, and every subheader says Apple Intelligence on it. Which, I mean, if you're a marketing dude: every year is a struggle to goose people into buying computers, and it's been a while since they've had anything new to say your computer can do. So it makes sense, but come on. It can't even make Genmoji yet.

If you've downloaded iOS or iPadOS 18.2, go turn on the AI feature, if it's available in your region and language, and then open the Image Playground app, click through, and let it download all of the Image Playground shit — in particular Image Playground itself, where you can take a person and a place and create sort of a witch's brew of bad imagery, and then keep swiping to the right as they all just look bad. That, I have no need for. But Genmoji, or at least the promise of Genmoji, I like quite a lot.

I enjoy typing in little — so, we were at the parks with our friends last week, and it was a Jollywood Nights event, which is also Gatsby-themed. There's a reason why ordering 1920s-era costumes on Amazon in Orlando was not an overnight delivery — it was like a two-, three-day lead time — because this Jollywood Nights, a 1920s-era-themed, ticketed event at Hollywood Studios, has been going on, and it was one of those nights. And so some flapper lady in line had a purse with a phone handle on it. And her husband — who, now that I think back on it, was dressed very similarly to how I dressed myself last night, so something tells me he was sort of along for the ride in this — well, she picked up the phone handle off of her purse and handed it to Becky.
And then you could sort of see him on his phone, being a bad ventriloquist, talking to her on the phone — his cell phone was somehow communicating with the purse phone. It reminded me of Get Smart, you know, that spy TV show from the sixties that was on Nick at Nite in the eighties or nineties, when I would have watched it. Of course, it didn't work. And then we were just in line, and it was like, sorry, we're in line; it didn't work. And then, the way that lines work, right — you turn left, you turn right, and here's the same people again. So they're like, all right, try again. She picks up the purse phone and hears the guy talk, and she's like: yes, this is indeed a telephone that is a purse.

My contribution to this experience was to try to generate a Genmoji for the group I was with that was, like, "purse phone." And wouldn't you know it, it struggled. I typed "purse with a phone handle on top," and it gave me one with a locker combination lock instead of a rotary dial in the middle. It was not good. And I think a lot of these Genmoji, in addition to being bad and not good, have to be so detailed — because usually it's people mashing up different concepts — that when they're inline with text, you have to squint and can barely see what they are. And as a tapback, you have no hope of knowing what they are. If it's of a person, for example, you're going to get 80% shirt and maybe 10% head, so you're not going to be able to tell who's who. So those need work. And no one wants my Genmoji. My brother has formally requested that I stop sending them, and I will take that request under advisement.

Anyway: bought a MacBook Pro. Oh, and I've got a parenthetical of side notes. All right, well, here's eight more bullet points; I'm going to rattle through these. So, Becky — actually, it was her idea. She wanted to get me this. We were in Japan, and she's like, hey, I heard you talking about the nanotexture display, and of course the brighter screen — and, us being in Orlando, you never use a computer outside or out of the house. So she wanted to buy it. And she said: it was just really complicated; I didn't want to fuck it up; I didn't want to get you the wrong set of options. I asked Aaron, and Aaron didn't know either; he said he hadn't really been on top of it. And I was like, honey — I didn't say "bless your heart" — it was such a sweet gesture. And it is true that I've been curious about it. But I didn't feel like I had to get one right this minute. And honestly, the 14-inch MacBook Pro is still too heavy.
Tonal, my weightlifting robot, reported in my Tonal Wrapped — because everything has to do a goddamn Wrapped dingus now to try to get shared on social media, as if... one assumes all these Wrapped posts go straight to the goddamn bottom of every algorithm, because they're all the same. But in any case, it showed me a little Wrapped video, and it said I lifted one and a half million pounds over the course of 2024. And I was like: that's a lot of weight that I lifted. Yesterday I did the equivalent of like a 250-, 275-pound barbell deadlift, and that was hard, but not too hard — it's the max weight that Tonal can do. I like to think I'm pretty strong now. And that four-pound fucking MacBook Pro is backbreakingly heavy. No matter where I am, I'll pick it up and — that is denser than it looks. It's like when you pick up a baby that's a little bit too dense, and you're just like, oh wow, I was expecting this to be more fun; this is just going to give me pelvic floor problems if I do this for more than exactly 30 seconds before handing it back to its mother, who surely has pelvic floor issues.

I don't want to be carrying around this MacBook Pro. I don't want to carry it with my arms. I don't want to carry it in a bag. I don't want to carry it into the car. I don't want to carry it into a Starbucks. I want to hire a porter to bring it around for me from place to place. Maybe they could also saddle up and carry a Vision Pro. So that's what I really want — at least until, and unless, Apple releases the 12-inch MacBook Pro that we were promised in our early years.

Anyway, Becky said it was hard to configure and figure out what she'd want to order, or what I would want her to order — and as a result it would have made a pretty lousy gift, because the likelihood of her getting it right, if you look at the number of configurations for these things, was astronomically small. So I actually sat down — look, I said I didn't need the thing. And then I come home, and within a day and a half my MacBook Air is crying because it's out of storage, to the point where I composed an email, hit send, and Apple Mail reported: yo, we just barfed on all this and deleted all your shit, cause we ran out of disk space. No warning. And in modern-day macOS, you don't get to know how much disk space you have, because all of it is "optimized storage." Whether it's your iCloud Drive or your Apple Photos, once the system is under any sort of storage stress, it's supposed to detect that and start deleting shit. Your phone does this too. So sometimes — like, I was importing a bunch of raw images on the phone, and it said, oh, you're out of storage. And I knew what to do, because I know how it works under the hood — even though it exposes zero controls or visibility as to what is going the fuck on.
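To make that visibility complaint concrete, here is a minimal Python sketch of the only number an ordinary program can easily see. shutil.disk_usage reports the raw filesystem counts and knows nothing about space macOS could reclaim by purging "optimized" iCloud or Photos storage, which is why the number you check and the system's actual behavior can disagree; the 10 GiB threshold below is purely illustrative.

import shutil

# Raw filesystem numbers for the boot volume; does not account for
# purgeable/"optimized" storage the OS may silently reclaim under pressure.
total, used, free = shutil.disk_usage("/")

GIB = 1024 ** 3
print(f"total: {total / GIB:.1f} GiB, "
      f"used: {used / GIB:.1f} GiB, "
      f"free: {free / GIB:.1f} GiB")

if free < 10 * GIB:
    # Around here, macOS may begin evicting cloud-synced files on its own,
    # with no notification and no standard-library way to see what it chose.
    print("Low on disk: the OS may start purging optimized storage.")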
I knew that when it ran out of storage, the right solution was to sit and wait 30 seconds while it deleted shit in the background, and then just hit import again. Right? Well, that didn't work in this case. I actually went and deleted like a hundred gigabytes of garbage — it's a small SSD, a 512-gigabyte MacBook Air — but I did it from my iCloud Drive on another computer, because on this one Finder was completely unresponsive. And it never got better, because it had suspended all iCloud Drive syncing, probably as some sort of storage safeguard to make sure it didn't fuck up anything in the cloud. Most of that storage was in my iCloud Drive, which is how it got full while I was overseas. And when I came back, I could have run rm -rf from the terminal and deleted stuff from the iCloud Drive as an emergency brake — get the SSD empty enough that the operating system can run, then figure it out — but then of course it would have synced all of those deletions up to the cloud and deleted the same things off of my other computers.

So this is a tractable problem, and I ultimately did solve it. But I realize now why Apple markets so many of its pro devices to photo and video people: photos and videos take up a shit-ton of space, they have different performance characteristics than programming, and their needs in many ways are higher than what you need if you're just writing Ruby code. It just so happens that Swift, the programming language that they wrote, will also take advantage of all of these cores during compilation in a way that a lot of local development in other languages won't. But after my last year of doing a lot more video work and a lot more audio work, I can definitely understand now: oh yeah, the MacBook Air actually is inappropriate for a lot of the workflows of the things that I do.

So after that experience, I came to Becky and I was like: look, I know I said I didn't need this, but I think I might need this — where "need" is in very gentle text, a thin font variant of "I need this." What I mean to say is, it would save me a lot of time and stress and headache and rework to have a better, more capacious computer. And of course you can't upgrade the storage in your existing Macs. So here we are.

Anyway, I was in the configurator for the new MacBook Pro, and the first decision you've got to make is: do I want a regular M4 chip, which I did not; or one of the Pro ones, a 12- or 14-core chip, which is a huge upgrade over the M3 Pro — the M3 Pro had way more efficiency cores, and the M4 Pro has more performance cores, so it's doing much better in synthetic benchmarking. That's impressive.
It's a big year-over-year change. Or the M4 Max, which is an incremental improvement over the M3 Max, but to the extent that it's better than the Pro, it's, you know, got another, quote-unquote, media e

Arcade Cozy
156. IGN's Top 100 Playstation Games, Dragon Age: The Veilguard, and more!

Arcade Cozy

Play Episode Listen Later Dec 6, 2024 96:06


Episode Notes: Welcome back to Arcade Cozy! This week, we're talking a lot of Playstation, a lot of Dragon Age, and, well, a lot of a few other things too. There's a lot of good stuff here and we hope you enjoy it! Games discussed include Dragon Age: The Veilguard, SteamWorld: Heist 2, Ratchet and Clank, Jak and Daxter, Sly Cooper, Metal Gear Solid, InFamous, and more! Do you have thoughts on what we talked about today? Are there things that we missed? Or do you have a few games you'd like us to check out? Hit us up on one of the avenues below—we would love to connect with you. Email us at arcadecozy@gmail.com Twitter at us (@arcade_cozy) Follow us on IG (@arcadecozy) Intro & outro music by Johnnybgood89 --- Support this podcast: https://podcasters.spotify.com/pod/show/arcadecozy/support

Arcade Cozy
155. What Games Are We Most Thankful For?

Arcade Cozy

Play Episode Listen Later Nov 28, 2024 98:59


Episode Notes: Welcome back to Arcade Cozy! This week, it's Thanksgiving in the United States, so we're taking a step back to talk about some of the games that we're most thankful for. Maybe it's what got us into gaming or what got us back into it many years later. Either way, there's a lot of good stuff here and we hope you enjoy it! Games discussed include Jak and Daxter, The Legend of Zelda: Breath of the Wild, Red Dead Redemption, Assassin's Creed 3, Final Fantasy 4, and more! Do you have thoughts on what we talked about today? Are there things that we missed? Or do you have a few games you'd like us to check out? Hit us up on one of the avenues below—we would love to connect with you. Email us at arcadecozy@gmail.com Twitter at us (@arcade_cozy) Follow us on IG (@arcadecozy) Intro & outro music by Johnnybgood89 --- Support this podcast: https://podcasters.spotify.com/pod/show/arcadecozy/support

Retro Handhelds Podcast
Anbernic RG406H Game Showcase and Q+A (ft. RetroGameCorps)

Retro Handhelds Podcast

Play Episode Listen Later Nov 20, 2024 152:46


Panel: @RetroGameCorps, @AishTalksTech, @retrotechdad, @StubbsStuff. The RG406H is the latest handheld gaming console from Anbernic, featuring a 4-inch 960x720 IPS touchscreen. The device is powered by the Unisoc T820 processor, with 8 GB of LPDDR4X RAM and 128 GB of UFS flash storage. It also has Hall effect joysticks and customizable RGB lighting. It runs on Android 13, making it capable of emulating up to sixth-generation video game consoles. Available in three colors: white, transparent purple, and black.
Stubbs' First Look: https://www.youtube.com/watch?v=ecyWy1bP2HA
Russ's In-depth Review: https://youtu.be/kxxrhKaXCOc?si=VpVVbaDVfs6mQEkv

Dev Game Club
DGC Ep 404: Fatal Frame (part one)

Dev Game Club

Play Episode Listen Later Oct 2, 2024 83:57


Welcome to Dev Game Club, where this week we begin our annual spooky series, this time on 2001's Fatal Frame. We briefly talk about the year it came out, its developer/publisher, and why we picked it before turning to other introductory topics. Dev Game Club looks at classic video games and plays through them over several episodes, providing commentary.
Sections played: An hour-ish
Issues covered: Discord-only going forward, why this game?, photo mode games, changing culture of photography, re-elevating photography, Sony shenanigans, best years of all time, the ubergame, years where genres were introduced or forks were in the road, being able to fit more games in, risk aversion and uniqueness, indie games, too big to feel?, a bit about Tecmo, Japanese horror, based on a true story?, the cover and the possible origins of the game, the correlations screen, connecting up details, following the ghosts, pushing characters in the directions of the horror, using ghosts in interesting ways, pixel hunting with the characters, interaction prompts on key items, controls, meticulous camera placement and movement, entering a piece of furniture, not overthinking it, the straining of fixed cameras, distinguishing between cameras, something refreshing, feeling extremely different, the visual aesthetic, relying on tropes but using them to enrich, less is more, audio and horror, games that feel like save states due to repetition, accessibility options, giving players options and opportunities and not getting in the way of that, accessibility as a frontier, save anywhere, games that get sanded down, more game boxes.
Games, people, and influences mentioned or discussed: Heroes of Might and Magic, Resident Evil (series), Pokemon Snap, Umurangi Generation, Toem, Beyond Good & Evil, Deadly Premonition, Minecraft, Halo, Animal Crossing, GTA III, Devil May Cry, Civ III, Phoenix Wright: Ace Attorney, Ico, Silent Hill 2, Final Fantasy X, Gran Turismo 3: A-Spec, MGS 2: Sons of Liberty, Myst III: Exile, SSX Tricky, Super Smash Bros. Melee, Advance Wars, Burnout, Gothic, Black & White, Ghost Recon, Jak and Daxter, Max Payne, Onimusha: Warlords, Koei, Pikmin, Red Faction, Serious Sam, GameCube, Xbox, Game Boy Advance, Spider-Man 2, Horizon, Ubisoft, Valheim, Dead or Alive, Ninja Gaiden, Tecmo Bowl, Pac-Man, Romance of the Three Kingdoms, Tomonobu Itagaki, Makoto Shibata, Ringu, Criterion Channel, Ju-On: The Grudge, Audition, Amityville Horror, The Entity, Until Dark, Skyrim, Alone in the Dark, Sam, MegaMan, Hitman, Tunic, Castlevania IV, Alien: Isolation, P.T., Dead Rising, Celeste, Far Cry 2, Ben Abraham, Ben Zaugg, Grim Fandango, Father Beast, Jedi Knight 2: Jedi Outcast, Dave K, Half-Life 2, Starfighter/Jedi Starfighter, Raven Software, World of Warcraft, Kirk Hamilton, Aaron Evers, Mark Garcia.
Next time: More of Fatal Frame!
Twitch: timlongojr | Discord | DevGameClub@gmail.com

Lithium-ion Rocks!
Power Moves: Power Metals' Critical Cesium Commodities. w/ Haydn Daxter

Lithium-ion Rocks!

Play Episode Listen Later Sep 28, 2024 20:14


INDEX: 00:00 - Introduction 02:41 - Exploration and drilling update 04:21 - Assay results and timeline 05:46 - Case Lake cesium uniqueness 09:08 - Drill results and geology 11:13 - First Nation partnership update 13:39 - Timeline to production 14:57 - Cesium advisory committee formed 16:42 - Grant funding details 18:33 - Closing remarks   _________________________________________________   Links

Arcade Cozy
148. Sony State of Play + What We'd Love to See From Sony in Future

Arcade Cozy

Play Episode Listen Later Sep 27, 2024 83:32


Episode Notes: Welcome back to Arcade Cozy! This week, it's all Sony (again)! First, some news: there was a State of Play, Chris and Corey watched it, and they take a little time to unpack and share their thoughts. Then, in the back half, they take a minute to talk about what they would like to see from Sony at some point in the future. Safe predictions, wild ones too -- nothing's off the table! So, cozy up, grab a coffee, and plug in those headphones; this here's a good one! Games discussed include Astro Bot, The Plucky Squire, Metal Gear Solid V: The Phantom Pain, Ghost of Yotei, Sly Cooper, Jak and Daxter, The Last of Us, and more! Do you have thoughts on what we talked about today? Are there things that we missed? Or do you have a few games you'd like us to check out? Hit us up on one of the avenues below—we would love to connect with you. Email us at arcadecozy@gmail.com Twitter at us (@arcade_cozy) Follow us on IG (@arcadecozy) Intro & outro music by Johnnybgood89 --- Support this podcast: https://podcasters.spotify.com/pod/show/arcadecozy/support

Lithium-ion Rocks!
Power Metals Critically Important Cesium Discovery - Haydn Daxter

Lithium-ion Rocks!

Play Episode Listen Later Sep 9, 2024 31:55


INDEX: 00:00 - Introduction 01:02 - Introduction to Cesium 03:43 - Summary Synopsis 04:43 - About Haydn/Power Metals 05:51 - Discovering cesium at Case Lake 09:52 - Why cesium is a critical mineral 11:08 - Are there any cesium alternatives? 12:01 - Rarity of Power Metals' Deposits 13:44 - Metallurgical Test Work and Future Plans 15:12 - Next steps for drilling at Case Lake 16:15 - Preferred cesium products for buyers 17:46 - Case Lake Relative to Total Market 19:19 - Will the cesium be stored? 20:07 - Case Lake CapEx 21:50 - Permitting Process 23:55 - Funding plans and Winsome's role 25:04 - Assay Timeline 26:16 - Metallurgical Test Work Timeline 26:58 - Exploring lithium at Case Lake 30:54 - Howard's Closing Remarks   _________________________________________________   Links

Trophy Talk Podcast
Trophy Talk Podcast - Episode 116: Carnage X Cthulhu

Trophy Talk Podcast

Play Episode Listen Later Jul 28, 2024 185:40


Hello there everybody! Welcome back to another episode of Trophy Talk. This time we're joined by Maximum Carnage, from all the way across the great pond. Sitting in the UK, besieged by a police helicopter and 6 pints of beer, Max joins us to give his take on some of the more recent games he's been playing, the origins of the Rage God in our Discord, and his adventures in Tomb Raider and Final Fantasy! Darryl and Josh are also here of course to provide their insight on topics including what our current game of the year is before heading into a busy release schedule in the Fall, hot takes in gaming meant to ruffle some feathers, and weird rituals around the care and maintenance of our homes. On top of all of that, we get into what we've been playing, which is in fact quite a lot! Max continues his massive journeys through the Tomb Raider Remastered Collection and FF7 Rebirth. Josh gets his investigative hat on with Sherlock Holmes: The Devil's Daughter, Rise of the Ronin, and a return to FF16 for NG+ and the DLC. Darryl makes Colin a very proud man indeed by finally getting to Resident Evil Remake HD, as well as Death's Door. Colin brings up the rear with some quick and fun plats from the PS Classic Collection in Toy Story 2, Toy Story 3, and Daxter. We hope that you enjoy this episode, and thank you so much for listening! The next episode released on our feed will be Episode 4 of Surviving the Horror, which will be releasing in a few days' time. Take care and happy gaming!

Gamsters world
JAK 3 (REVIEW) #jak3 #jak&daxter #gaming

Gamsters world

Play Episode Listen Later Jul 23, 2024 5:34


JAK 3 (REVIEW) #jak3 #jak&daxter #gaming link- https://youtu.be/4tsju0DBXFs HEY GUYS, BACK AGAIN FOR ANOTHER VIDEO. CHECK IT OUT. ANY AND ALL COPYRIGHTS AND ARTWORK BELONG TO THEIR RESPECTIVE OWNERS. LINKS DOWN BELOW http://gamsterindustries1.wixsite.com... https://twitter.com/Gamsterwolf92 https://www.facebook.com/Gamster92 https://www.instagram.com https://anchor.fm/gamster-world

Gamsters world
Jak II (Review) #jak2 #jak&daxter #gamereview

Gamsters world

Play Episode Listen Later Jul 22, 2024 5:30


Jak II (Review) #jak2 #jak&daxter #gamereview link- https://youtu.be/E4NFhDRMfik HEY GUYS, BACK AGAIN FOR ANOTHER VIDEO. CHECK IT OUT. ANY AND ALL COPYRIGHTS AND ARTWORK BELONG TO THEIR RESPECTIVE OWNERS. LINKS DOWN BELOW http://gamsterindustries1.wixsite.com... https://twitter.com/Gamsterwolf92 https://www.facebook.com/Gamster92 https://www.instagram.com https://anchor.fm/gamster-world

Gamsters world
DAXTER THE GAME (REVIEW) FINALLY PLAYED IT #daxter #gamereview #psp

Gamsters world

Play Episode Listen Later Jul 19, 2024 4:51


DAXTER THE GAME (REVIEW) FINALLY PLAYED IT #daxter #gamereview #psp link- https://youtu.be/UItWHirDKZk HEY GUYS, BACK AGAIN FOR ANOTHER VIDEO. CHECK IT OUT. ANY AND ALL COPYRIGHTS AND ARTWORK BELONG TO THEIR RESPECTIVE OWNERS. LINKS DOWN BELOW http://gamsterindustries1.wixsite.com... https://twitter.com/Gamsterwolf92 https://www.facebook.com/Gamster92 https://www.instagram.com https://anchor.fm/gamster-world

Retro Game Club
Daxter, Shadowgate - Most Hated Video Game Bats

Retro Game Club

Play Episode Listen Later Apr 29, 2024 57:58


Season 6 Episode 8, Episode 171
News
Hardware: PICONTROL BRINGS MODERN CONTROLLERS TO ATARI 2600; NES-slotmaster
Emulation / hacks / translations / homebrew games: Pretendo network re-creates 3DS and Wii U servers; Game Boy Emulator for iPhone Now Available in App Store Following Rule Change; NES emulator called Bimmy briefly available too; The Storied Sword NES Homebrew Kickstarter
Doom Running on Things: Enthusiast adds microtransactions to DOOM — QR codes direct to payment UI every time you pick up an item
Other odd or interesting things: Lakka 5.0 retro gaming Linux-based operating system now available with updated LibreELEC and RetroArch; 'The Epyx Collection: Handheld' Brings 6 Atari Lynx Games To Switch
Topic: Most hated bats in video games
Reference: Alf hitbox
Game Club Discussion: Daxter, Shadowgate
New Game Club Games: Mad Max, WWF in Your House
Links: Game Club Link Tree, Retro Game Club Discord server
Bumpers: Raftronaut, Inverse Phase
Threads, Facebook, Twitter, Bluesky, and Instagram managed by: Zach
=====================================
#retro #retrogames #retrogaming #videogames #classiccomputing #Atari2600 #NES #WiiU #GameBoy #Homebrew #Doom #Daxter #PSP #Shadowgate

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Supervise the Process of AI Research — with Jungwon Byun and Andreas Stuhlmüller of Elicit

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 11, 2024 56:20


Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!

It's become fashionable for many AI startups to project themselves as "the next Google" — while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a "research engine" or "answer engine" in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it.

We've commented in our Jan 2024 Recap that Flow Engineering (simply: multi-turn processes over many-shot single prompts) seems to offer far more performance, control, and reliability for a given cost budget. Our experiments with Devin and our understanding of what the new Elicit Notebooks offer give a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale.

It starts with prompts

When ChatGPT exploded in popularity in November 2022, everyone was turned into a prompt engineer. While generative models were good at "vibe-based" outcomes (tell me a joke, write a poem, etc.) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math and logic. Two of the most important "tricks" that people picked up on were:

* The Chain-of-Thought prompting strategy proposed by Wei et al. in "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.

* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, which was popularized by Kojima et al. in the "Large Language Models are Zero-Shot Reasoners" paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to plain zero-shot prompting.

Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.

From prompts to agents

As prompt engineering got more and more popular, agents (see "The Anatomy of Autonomy") took over Twitter with cool demos, and AutoGPT became the fastest-growing repo in Github history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again, and 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product.

From agents to products

Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as "the best place to understand what is known".
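To make the two prompting tricks above concrete, here is a minimal sketch in Python. The complete() function is a hypothetical stand-in for whatever LLM client you use (nothing here assumes a particular provider); the few-shot exemplar is the canonical tennis-ball example from the Wei et al. paper.

def complete(prompt: str) -> str:
    # Hypothetical stand-in: wire this up to your LLM provider of choice.
    raise NotImplementedError

# 1) Few-shot chain-of-thought (Wei et al.): the exemplar includes the
#    reasoning that leads to the answer, not just the answer itself.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

# 2) Zero-shot chain-of-thought (Kojima et al.): just append the magic phrase.
ZERO_SHOT_COT = "Q: {question}\nA: Let's think step by step."

def ask(question: str, few_shot: bool = True) -> str:
    template = FEW_SHOT_COT if few_shot else ZERO_SHOT_COT
    return complete(template.format(question=question))

Both variants steer the model toward producing intermediate reasoning before the final answer, which is where the reported accuracy gains come from.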
Ought was a non-profit, but last September Elicit spun off into a PBC with a $9m seed round. It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants: just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is for AI products that just work.

One of the main takeaways we had from the episode is how teams should focus on supervising the process, not the output. The philosophy at Elicit isn't to train general models, but to train models that are extremely good at focused processes. This allows them to offer pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And for Hamel Husain's happiness, they always show you the underlying prompt.

Elicit recently announced notebooks as a new interface to interact with their products. (Fun fact: they tried to implement this 4 times before they landed on the right UX! We discuss this around 33:00 in the podcast.) The reasons why they picked notebooks as a UX all tie back to process:

* They are systematic: once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.

* They are transparent: many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are "dead," and it is difficult to follow the thought process and exact research flow of the authors. Sharing "living" Elicit Notebooks opens up this process.

* They are unbounded: research is an endless stream of rabbit holes. So it must be easy to dive deeper and follow up with extra steps, without losing the ability to surface for air.

We had a lot of fun recording this, and hope you have as much fun listening!

AI UX in SF

Long-time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year. Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiux And submit demos here: https://forms.gle/iSwiesgBkn8oo4SS8 We expect the 200 seats to "sell out" fast. Attendees with demos will be prioritized.

Show Notes

* Elicit
* Ought (their previous non-profit)
* "Pivoting" with GPT-4
* Elicit notebooks launch
* Charlie
* Andreas' Blog

Timestamps

* [00:00:00] Introductions
* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit
* [00:10:26] Why Products > Research
* [00:15:49] The Evolution of Elicit's Product
* [00:19:44] Automating Literature Review Workflow
* [00:22:48] How GPT-3 to GPT-4 Changed Things
* [00:25:37] Managing LLM Pricing and Performance
* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection
* [00:31:56] Moving to Notebooks
* [00:39:11] Elicit's Budget for Model Queries and Evaluations
* [00:41:44] Impact of Long Context Windows
* [00:47:19] Underrated Features and Surprising Applications
* [00:51:35] Driving Systematic and Efficient Research
* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation
* [00:55:22] Building AI for Good

Full Interview on YouTube

As always, a plug for our YouTube version for the 80% of communication that is nonverbal.

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast.
This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.

Jungwon [00:00:20]: Thanks guys.

Andreas [00:00:21]: It's great to be here.

Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.

Andreas [00:00:32]: That's right. For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's fair to say that you co-founded it.

Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.

Jungwon [00:00:46]: Yeah, that's right.

Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit-type situation. And recently you turned into a B Corp, a Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how did you get together and decide to leave your startup career to join him?

Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI. I kind of went to the library; there were books about how to write programs in QBasic, and some of them talked about how to implement chatbots.

Jungwon [00:01:27]: To be clear, he grew up in a tiny village on the outskirts of Munich called Dinkelscherben, where it's a very, very idyllic German village.

Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And I was thinking about it from when I was a teenager. After high school I did a year where I started a startup with the intention to become rich, and then once I'm rich, I can affect the trajectory of AI. Did not become rich. Decided to go back to college and study cognitive science there, which was the closest thing I could find at the time to AI. In the last year of college, I moved to the US to do a PhD at MIT, working broadly on new programming languages for AI, because it seemed like the existing languages were not great at expressing world models and learning world models doing Bayesian inference. I was always thinking about, well, ultimately the goal is to actually build tools that help people reason more clearly, ask and answer better questions, and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. At the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, and pursued that for a little bit.
But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was Ought. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there are a bunch of people around me who are really struggling. One really close friend in particular is really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There were two kind of interesting technologies at the time: there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps take really complex thinking, overwhelming thoughts, and break them down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous. So the goal of Ought was to make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off.
A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the text and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So that's a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes. So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern thought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the Ought website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complementary skill set. I want someone who was very values aligned. And yeah, that was all a good fit.Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other. And I think it ended up being around 50 pages or so of like various like questions and back and forth.Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.Andreas [00:08:55]: No, we just made our own questions.
But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.Swyx [00:09:08]: And he never had any. No.Andreas [00:09:10]: Yeah.Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.Jungwon [00:09:20]: Yeah.Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to like move into Elicit and then we can cover that story too.Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer. And how can we make it so that they can actually be deployed in kind of transparent, controllable ways?Jungwon [00:10:26]: I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they like BERT type stuff or T5 or I don't know what timeframe we're talking about here.Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of made sense were GPT-2 and TNLG and like, yeah, early generative models. We do also use like T5-based models even now. Started with GPT-2.Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy.
But yeah, he was right.Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you got to a million in revenue. Obviously, a lot of people use it, get a lot of value, but it was initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building in research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, Deepnote or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant. Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users are just using it personally or for a mix of personal and professional things.
People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature. And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50, extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy, because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing that where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research. I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all type thing.Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may be progressed. How has this workflow evolved over time?Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like Manifold or Manifold Markets. That kind of stuff. Before Manifold. Yeah. Yeah.
We're not predicting relationships. We're predicting like, is China going to invade Taiwan?Swyx [00:17:38]: Markets for everything.Andreas [00:17:39]: Yeah. That's a relationship.Swyx [00:17:41]: Yeah.Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realize, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side. And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and we had a sign-up form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up into the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something. I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better.
And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one-sentence summary, which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.Swyx [00:20:58]: And then we're going to go into like more recent products stuff, but like, you know, I think you seem like the more sort of startup-oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD. What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.Swyx [00:21:42]: On principle.Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build a legal assistant? I think in some short sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight. Yeah.Jungwon [00:22:48]: Yeah. I mean, what do you think?Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before.
But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like a qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken place by that time.Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques. And so that's one of the things we're working on is now that you've extracted this information in a more structured way, can you pivot it or group by whatever the information that you extracted to have more insight-first information still supported by the academic literature?Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm just very impressed by how first-principles your ideas around the workflow are. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny bit of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some Discord. And then he applied and we were like, wow, who is this freshman? And then we just saw that he had done so many incredible side projects. And we were actually on a team retreat in Barcelona visiting our head of engineering at that time.
And everyone was talking about this wonder kid or like this kid. And then on our take-home project, he had done like the best of anyone to that point. And so people were just like so excited to hire him. So we hired him as an intern and then we were like, Charlie, what if you just dropped out of school? And so then we convinced him to take a year off. And he was just incredibly productive. And I think the thing you're referring to is at the start of 2023, Anthropic kind of launched their Constitutional AI paper. And within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since kind of contributed to major improvements, like cutting costs down to a tenth of what they were at really large scale. But yeah, you can talk about the technical stuff. Yeah.Andreas [00:26:39]: On the Constitutional AI project, this was for abstract summarization, where in Elicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of Elicit because Elicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this both being fast, cheap, and also very low on hallucination. I think if Elicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what are the attributes of a good summary? Everything in the summary is reflected in the actual abstract, and it's like very concise, et cetera, et cetera. And then used RLHF with a model that was trained on the constitution to basically fine-tune a better summarizer on an open source model. Yeah. I think that might still be in use.Jungwon [00:27:34]: Yeah. Yeah, definitely. Yeah. I think at the time, the models hadn't been trained at all to be faithful to a text. So they were just generating. So then when you ask them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text or answer what the text said about the question. So we had to basically teach the models to do that specific task.Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively, you have to monitor these things and nobody has a good answer that I talk to.Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the just kind of basic robustness side of where you can import ideas from normal software engineering and normal kind of DevOps. You're like, well, you need to monitor kind of latencies and response times and uptime and whatnot.Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?Andreas [00:28:30]: And then things like hallucination rate where I think there, the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries so that we can know ahead of time how well is the model going to perform on different types of tasks. So the tasks being summarization, question answering, given a paper, ranking.
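For readers who want the constitution-guided summarization idea in concrete terms, a rough sketch follows: draft a summary, check it against each principle, and revise on violations. This is illustrative only, not Elicit's implementation; as described above, Elicit distills the constitution into a fine-tuned open-source summarizer rather than critiquing on every call, and `call_llm`, the prompts, and the principles here are all hypothetical placeholders.

```python
# Illustrative sketch of constitution-guided summarization (not Elicit's code).
# A draft summary is checked against each principle and revised on violations.
# In production, critiques like these would instead be distilled into a
# fine-tuned summarizer so the per-query cost stays low.

CONSTITUTION = [
    "Every claim in the summary must be stated in the abstract.",  # anti-hallucination
    "The summary must address the user's query, not just restate the abstract.",
    "The summary must be a single concise sentence.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a completion call to whatever model is in use."""
    raise NotImplementedError

def summarize(abstract: str, query: str) -> str:
    draft = call_llm(
        f"Query: {query}\nAbstract: {abstract}\n"
        "Summarize the abstract with respect to the query in one sentence."
    )
    for principle in CONSTITUTION:
        verdict = call_llm(
            f"Principle: {principle}\nAbstract: {abstract}\nSummary: {draft}\n"
            "Does the summary violate the principle? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            # Revise the draft so it satisfies the violated principle.
            draft = call_llm(
                f"Rewrite the summary to satisfy: {principle}\n"
                f"Abstract: {abstract}\nSummary: {draft}"
            )
    return draft
```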
And for each of those, we want to know what's the distribution of things the model is going to see so that we can have well-calibrated predictions on how well the model is going to do in production. And I think, yeah, there's some chance that there's distribution shift and actually the things users enter are going to be different. But I think that's much less important than getting the kind of training right and having very high quality, well-vetted data sets at training time.Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. And every time a new model comes out, we have to see how is this performing relative to production and what we currently have.Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?Jungwon [00:29:37]: Like Claude came out with a bunch. Yeah. I think Claude is pretty, I think the team's pretty excited about Claude. Yeah.Andreas [00:29:41]: Specifically, Claude Haiku is like a good point on the kind of Pareto frontier. It's neither the cheapest model, nor is it the most accurate, most high quality model, but it's just like a really good trade-off between cost and accuracy.Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.Andreas [00:30:13]: Yeah.Swyx [00:30:14]: Yeah. Did you try like Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.Jungwon [00:30:19]: Yeah.Swyx [00:30:20]: We haven't tried that one. Yeah. Yeah. Yeah. But Claude is multimodal as well. Yeah. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is we recognize images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs. So we need a new term for that kind of multimodality.Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.Swyx [00:30:50]: They're over-indexed because the history of computer vision is COCO, right? So now we're like, oh, actually, you know, screens are more important, OCR, handwriting. You mentioned a lot of like closed model lab stuff, and then you also have like this open source model fine tuning stuff. Like what is your workload now between closed and open? It's a good question.Andreas [00:31:07]: I think- Is it half and half? It's a-Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?Andreas [00:31:13]: It depends a little bit on like how you index, whether you index by like compute cost or number of queries. I'd say like in terms of number of queries, it's maybe similar. In terms of like cost and compute, I think the closed models make up more of the budget since the main cases where you want to use closed models are cases where they're just smarter, where no existing open source models are quite smart enough.Jungwon [00:31:35]: Yeah. Yeah.Alessio [00:31:37]: We have a lot of interesting technical questions to go in, but just to wrap the kind of like UX evolution, now you have the notebooks.
We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative kind of like interactive interface and yeah, maybe learnings from that.Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. Okay. I think the first time was probably in early 2021. I think because we've always been obsessed with this idea of task decomposition and like branching, we always wanted a tool that could be kind of unbounded where you could keep going, could do a lot of branching where you could kind of apply language model operations or computations on other tasks. So in 2021, we had this thing called composite tasks where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub questions. This kind of, again, that like task decomposition tree type thing was always very exciting to us, but that was like, it didn't work and it was kind of overwhelming. Then at the end of 22, I think we tried again and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with kind of adjacent domains and different workflows. Like we want to help more with machine learning. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is like a generic composable system with nice abstractions that can like scale to all these workflows? So we like iterated on that a bunch and then didn't quite narrow the problem space enough or like quite get to what we wanted. And then I think it was at the beginning of 2023 where we're like, wow, computational notebooks kind of enable this, where they have a lot of flexibility, but kind of robust primitives such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. And also there was just like really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface, they seem pretty similar. It's kind of this iterative interaction where you add stuff. In both cases, you have a back and forth between you enter stuff and then you get some output and then you enter stuff. But the important difference in our minds is with notebooks, you can define a process. So in data science, you can be like, here's like my data analysis process that takes in a CSV and then does some extraction and then generates a figure at the end. And you can prototype it using a small CSV and then you can run it over a much larger CSV later. And similarly, the vision for notebooks in our case is to not make it this like one-off chat interaction, but to allow you to then say, if you start and first you're like, okay, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and say, now let me run this over 10,000 papers now that I've debugged the process using a few papers. 
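That prototype-then-scale pattern can be sketched in a few lines: the step is defined once, debugged on a handful of papers, and then the identical process runs over the full set, which is conceptually what an Elicit column does. The `call_llm` helper and the prompt below are assumptions for illustration, not Elicit's API.

```python
# Sketch of a notebook "column": one fixed instruction applied uniformly to
# every paper. Debug on a few papers, then run the same process at scale.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call."""
    raise NotImplementedError

def extract_column(paper_text: str, instruction: str) -> str:
    # The instruction is the reusable unit of work; the paper is just data.
    return call_llm(f"{instruction}\n\nPaper:\n{paper_text}")

def run_column(papers: list[str], instruction: str) -> list[str]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda p: extract_column(p, instruction), papers))

instruction = "What was the sample size of this study? Answer with a number."
# preview = run_column(papers[:5], instruction)   # prototype and debug
# results = run_column(papers, instruction)       # then scale to thousands
```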
And that's an interaction that doesn't fit quite as well into the chat framework because that's more for kind of quick back and forth interaction.Alessio [00:34:49]: Do you think in notebooks, it's kind of like structured, editable chain of thought, basically step by step? Like, is that kind of where you see this going? And then are people going to reuse notebooks as like templates? And maybe in traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elicit?Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope that people will build templates, share them with other people. I think chain of thought is maybe still like kind of one level lower on the abstraction hierarchy than we would think of notebooks. I think we'll probably want to think about more semantic pieces like a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down. You always want to be able to see it, but you don't always want it to be front and center.Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? Like how do you think about where the line is?Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook and the human kind of adds little action steps. And then the next point on this kind of progress gradient is, okay, now you can use language models to predict which action would you take as a human. And at some point, you're probably going to be very good at this, you'll be like, okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, like why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions as opposed to you doing the thing. I think templates are a specific case of this where you're like, okay, well, there's just particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And those, you can view them as action sequences of agents, or you can view them as a more normal programming language abstraction thing. And I think those are two valid views. Yeah.Alessio [00:36:40]: How do you see this change as, like you said, the models get better and you need less and less human actual interfacing with the model, you just get the results? Like how does the UX and the way people perceive it change?Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigm for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people. So increasingly, I really want kind of evaluation, both from an interface perspective and from like a technical perspective and operation perspective to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation.
So I think, yeah, in terms of the interface, some of the things we have today, you know, for every kind of language model generation, there's some citation back, and we kind of try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it and quickly see in context and validate whether the text actually supports the answer that Elicit gave. So I think we'd probably want to scale things up like that, like the ability to kind of spot check the model's work super quickly, scale up interfaces like that. And-Swyx [00:37:44]: Who would spot check? The user?Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also kind of flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? The model's not sure, we throw a flag. And so the user knows to prioritize checking that. So again, we can kind of scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing. I have an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable because they just hallucinated their own uncertainty. I would love to base it on log probs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.Jungwon [00:38:30]: We found it to be pretty calibrated. It varies on the model.Andreas [00:38:32]: I think in some cases, we also use two different models for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model. Let's say the first model is Llama, and let's say the second model is GPT-3.5. And then the second model just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.Swyx [00:38:58]: On the topic of models, evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot. And then you have models evaluating models. One person typing in a question can lead to a thousand calls.Andreas [00:39:11]: It depends on the project. So if the project is basically a systematic review that otherwise human research assistants would do, then the project is basically a human equivalent spend. And the spend can get quite large for those projects. I don't know, let's say $100,000. In those cases, you're happier to spend compute then in the kind of shallow search case where someone just enters a question because, I don't know, maybe I heard about creatine. What's it about? Probably don't want to spend a lot of compute on that. This sort of being able to invest more or less compute into getting more or less accurate answers is I think one of the core things we care about. And that I think is currently undervalued in the AI space. I think currently you can choose which model you want and you can sometimes, I don't know, you'll tip it and it'll try harder or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers. 
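A sketch of the two-model split Andreas describes a moment earlier: one model produces the answer, a different model scores confidence, and low scores surface as flags the user should spot-check first. The model roles, prompts, and threshold below are assumptions, not Elicit's internals.

```python
# Sketch: separate answering from confidence estimation. As discussed above,
# asking a *different* model to rate the answer can be better calibrated than
# self-reported uncertainty from the answering model.

def answer_model(prompt: str) -> str:
    """Hypothetical extraction/QA model."""
    raise NotImplementedError

def judge_model(prompt: str) -> str:
    """Hypothetical second model used only for confidence estimates."""
    raise NotImplementedError

def answer_with_flag(question: str, paper_text: str, threshold: float = 0.7) -> dict:
    answer = answer_model(f"{question}\n\nPaper:\n{paper_text}")
    raw = judge_model(
        f"Question: {question}\nProposed answer: {answer}\n"
        "How confident are you that the answer is correct, from 0 to 1? "
        "Reply with just the number."
    )
    confidence = float(raw.strip())
    # Low-confidence answers get flagged so the user prioritizes checking them.
    return {"answer": answer, "confidence": confidence, "flagged": confidence < threshold}
```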
And we really want to build a product that has this sort of unbounded flavor where if you care about it a lot, you should be able to get really high quality answers, really double checked in every way.Alessio [00:40:14]: And you have a credits-based pricing. So unlike most products, it's not a fixed monthly fee.Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. So for most casual users, they'll just get the abstract summary, which is kind of an open source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high accuracy mode, which also parses the table. So we kind of stack the complexity on the calls.Swyx [00:40:39]: You know, the fun thing you can do with a credit system, which is data for data, basically you can give people more credits if they give data back to you. I don't know if you've already done that. We've thought about something like this.Jungwon [00:40:49]: It's like if you don't have money, but you have time, how do you exchange that?Swyx [00:40:54]: It's a fair trade.Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of like adverse selection. Like, you know, for example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's kind of this, will people take it seriously? And you want the good people. Exactly.Swyx [00:41:11]: Can you tell who are the good people? Not right now.Jungwon [00:41:13]: But yeah, maybe at the point where we can, we can offer it. We can offer it up to them.Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarter people.Jungwon [00:41:20]: Yeah, maybe.Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days, all like a million token plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're just paying for all those tokens or you're just doing RAG?Andreas [00:41:44]: It's definitely relevant. And when we think about search, as many people do, we think about kind of a staged pipeline of retrieval where first you use a semantic search database with embeddings to get, in our case, maybe the 400 or so most relevant papers. And then you still need to rank those. And I think at that point it becomes pretty interesting to use larger models. So specifically in the past, I think a lot of ranking was kind of per-item ranking where you would score each individual item, maybe using increasingly expensive scoring methods and then rank based on the scores. But I think list-wise re-ranking where you have a model that can see all the elements is a lot more powerful because often you can only really tell how good a thing is in comparison to other things and what things should come first. It really depends on like, well, what other things are available, maybe you even care about diversity in your results. You don't want to show 10 very similar papers as the first 10 results. So I think long context models are quite interesting there.
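In outline, that staged pipeline looks something like the sketch below: a cheap embedding search narrows the corpus to a few hundred candidates, then a single long-context call re-ranks the whole candidate list at once, so relevance and diversity are judged comparatively rather than per item. All helpers here are hypothetical, and in practice the embeddings would be precomputed in a vector database rather than computed per query.

```python
# Sketch of staged retrieval: per-item embedding scores first, then one
# list-wise re-ranking call over all surviving candidates.

def embed(text: str) -> list[float]:
    """Hypothetical embedding model."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Hypothetical long-context model call."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def stage1_search(query: str, corpus: dict[str, str], k: int = 400) -> list[str]:
    # Cheap per-item scoring; real systems precompute embeddings in a vector DB.
    q = embed(query)
    ranked = sorted(corpus, key=lambda pid: cosine(q, embed(corpus[pid])), reverse=True)
    return ranked[:k]

def stage2_rerank(query: str, candidates: list[str], corpus: dict[str, str]) -> list[str]:
    # The model sees *all* candidates together, so it can trade off relevance
    # against diversity instead of scoring each item in isolation.
    numbered = "\n".join(f"[{i}] {corpus[pid][:300]}" for i, pid in enumerate(candidates))
    reply = call_llm(
        f"Query: {query}\n\nAbstracts:\n{numbered}\n\n"
        "Return the indices of the 20 best results, most relevant first, "
        "avoiding near-duplicates, as a comma-separated list."
    )
    # Naive parsing for the sketch; production code would validate the output.
    return [candidates[int(i.strip())] for i in reply.split(",")]
```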
And especially for our case where we care more about power users who are perhaps a little bit more willing to wait a little bit longer to get higher quality results relative to people who just quickly check out things because why not? And I think being able to spend more on longer contexts is quite valuable.Jungwon [00:42:55]: Yeah. I think one thing the longer context models changed for us is maybe a focus from breaking down tasks to breaking down the evaluation. So before, you know, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it and like find the relevant chunk and then answer based on that chunk. And the nice thing was you then knew, you know, kind of which chunk the model used to answer the question. So if you want to help the user track it, yeah, you can be like, well, this was the chunk that the model got. And now if you put in the whole text of the paper, you have to like kind of find the chunk more retroactively basically. And so you need kind of like a different set of abilities and obviously like a different technology to figure out. You still want to point the user to the supporting quotes in the text, but then the interaction is a little different.Swyx [00:43:38]: You like scan through and find some ROUGE score floor.Andreas [00:43:41]: I think there's an interesting space of almost research problems here because you would ideally make causal claims like if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that where like, I don't know, you just throw out a chunk of the paper and re-answer and see what happens. But hopefully there are better ways of doing that where you just get that kind of counterfactual information for free from the model.Alessio [00:44:06]: Do you think at all about the cost of maintaining RAG versus just putting more tokens in the window? I think in software development, a lot of times people buy developer productivity things so that we don't have to worry about it. Context window is kind of the same, right? You have to maintain chunking and like RAG retrieval and like re-ranking and all of this versus I just shove everything into the context and like it costs a little more, but at least I don't have to do all of that. Is that something you thought about?Jungwon [00:44:31]: I think we still like hit up against context limits enough that it's not really, do we still want to keep this RAG around? It's like we do still need it for the scale of the work that we're doing, yeah.Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that the throw-everything-into-the-context-window thing is easier to maintain because you just can swap out a model. In another sense, if things go wrong, it's harder to debug where like, if you know, here's the process that we go through to go from 200 million papers to an answer. And there are like little steps and you understand, okay, this is the step that finds the relevant paragraph or whatever it may be. You'll know which step breaks if the answers are bad, whereas if it's just like a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then you're like, okay, what can you do? You're kind of at a loss.Alessio [00:45:21]: Let's talk a bit about, yeah, needle in a haystack and like maybe the opposite of it, which is like hard grounding.
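The expensive approximation Andreas floats can be written down directly: re-answer with each chunk held out and see which removals change the answer. It is brute force (one extra model call per chunk), but it makes the causal attribution concrete. The helpers below are hypothetical, and exact-match comparison is a crude stand-in for a semantic check.

```python
# Brute-force counterfactual attribution: a chunk is "load-bearing" if
# removing it changes the model's answer.

def call_llm(prompt: str) -> str:
    """Hypothetical model call."""
    raise NotImplementedError

def answer(question: str, chunks: list[str]) -> str:
    return call_llm("Context:\n" + "\n\n".join(chunks) + f"\n\nQuestion: {question}")

def supporting_chunks(question: str, chunks: list[str]) -> list[int]:
    baseline = answer(question, chunks)
    causal = []
    for i in range(len(chunks)):
        held_out = chunks[:i] + chunks[i + 1:]
        # Crude exact-match test; a semantic similarity check would be gentler.
        if answer(question, held_out) != baseline:
            causal.append(i)
    return causal
```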
I don't know if that's like the best name to think about it, but I was using one of these chat-with-your-documents features and I put in the AMD MI300 specs and the new Blackwell chips from NVIDIA and I was asking questions like, does the AMD chip support NVLink? And the response was like, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink is an NVIDIA technology.Swyx [00:45:49]: It just says in the thing.Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?Andreas [00:45:57]: It really depends on the task because I think sometimes that is exactly what you want. So imagine you're a researcher, you're writing the background section of your paper and you're trying to describe what these other papers say. You really don't want extra information to be introduced there. In other cases where you're just trying to figure out the truth and you're giving the documents because you think they will help the model figure out what the truth is. I think you do want, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. I think ideally you still don't want the model to just tell you, probably the ideal thing looks a bit more like agent control where the model can issue a query that then is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between model just telling you and model being fully limited to the papers you give it.Jungwon [00:46:44]: Yeah, I would say it's, they're just kind of different tasks right now. And the task that Elicit is mostly focused on is what do these papers say? But there's another task which is like, just give me the best possible answer and that give me the best possible answer sometimes depends on what do these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both and then kind of do this overall task for you more going forward.Alessio [00:47:08]: We see a lot of details, but just to zoom back out a little bit, what are maybe the most underrated features of Elicit and what is one thing that maybe the users surprise you the most by using it?Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are kind of many different extensions of that that I think users are still discovering. So one is we let you give a description of the column. We let you give instructions for a column. We let you create custom columns. So we have like 30 plus predefined fields that users can extract, like what were the methods? What were the main findings? How many people were studied? And we actually show you basically the prompts that we're using to

MKAU Gaming Podcast
INTERVIEW: Phil LaMarr

MKAU Gaming Podcast

Play Episode Listen Later Mar 24, 2024 22:39


A Los Angeles native, Phil is an alumnus of Yale University and The Groundlings Theater and is perhaps best known as one of the original cast members of MAD TV, the voice of SAMURAI JACK, “Hermes” on FUTURAMA, “Static” on STATIC SHOCK, “Green Lantern” on JUSTICE LEAGUE and as "Marvin" in PULP FICTION. For over 30 years, Phil has thrilled audiences with his work on camera and behind the mic on TV shows such as FAMILY GUY, YOUNG JUSTICE, STAR WARS: THE CLONE WARS, STAR TREK: LOWER DECKS, THE FLASH, SUPERGIRL, GET SHORTY, LUCIFER, CURB YOUR ENTHUSIASM and VEEP; feature films like MADAGASCAR 2, INCREDIBLES 2, and THE LION KING (2019) and video games including JAK & DAXTER, FORTNITE, SHADOW OF MORDOR, and the METAL GEAR SOLID, INJUSTICE and MORTAL KOMBAT series. His stage work includes productions with The Actor's Gang, South Coast Repertory, and Sacred Fools Theatre, and Phil also portrayed “Cowboy Curtis” in "The Pee-wee Herman Show" both at The Stephen Sondheim Theatre on Broadway and in the Emmy-nominated HBO special. Currently, in addition to writing and producing the animated series "GOBLINS" (goblinsanimated.com), Phil is performing monthly onstage with "THE BLACK VERSION" (theblackversion.com), performing onscreen in HAMSTER & GRETEL, the CRAIG OF THE CREEK movie, INVINCIBLE, MULLIGAN, AMC's COOPER'S BAR, a new season of FUTURAMA on Hulu and performing the role of “Sherlock Holmes” in Audible's series MORIARTY. You can catch Phil LaMarr at Supanova Comic-Con & Gaming 2024

Konsole Kombat: Video Game Battles
Episode 17: Jak & Daxter vs Ratchet & Clank

Konsole Kombat: Video Game Battles

Play Episode Listen Later Mar 11, 2024 58:51


What's up Gamers?! This week sees the show's first set of Duos, as Jak and Daxter meet Ratchet and Clank on the battlefield! Who will win this battle of 2 sets of Fan Favorite Characters? There's only 1 way to find out! This Podcast is a member of the DynaMic Podcast Network! Please check out the other shows on the Network: * Dynamic Duel: Marvel Vs. DC * Max Destruction: Movie Fights * Senjoh World: Anime Action And check out the DynaMic Network's Website! Also, please consider leaving a 5 Star Rating and Review wherever you may be listening to this show, as it helps continue growing our listening audience! Also, check out our Website! Lastly, don't forget to leave a voice memo using the link below to tell us who you think wins next week's fight between Max Payne and Michael DeSanta, and also please feel free to join us for Hacking the Game with the 2 characters as well! --- Send in a voice message: https://podcasters.spotify.com/pod/show/konsolekombat/message

Konsole Kombat: Video Game Battles
Episode 16: Joel Miller Vs Deacon St John

Konsole Kombat: Video Game Battles

Play Episode Listen Later Mar 5, 2024 70:29


What's going on Gamers? This week presents a battle of 2 Post-Apocalyptic Protagonists, as The Last of Us' Joel Miller takes on Days Gone's Deacon St. John! Which of these survivors has the tenacity to fight the other to the death and win? There's only one way to find out! This Podcast is a member of the DynaMic Podcast Network! Please check out the other shows on the Network: * Dynamic Duel: Marvel Vs. DC * Max Destruction: Movie Fights * Senjoh World: Anime Action And check out the DynaMic Network's Website! Also, please consider leaving a 5 Star Rating and Review wherever you may be listening to this show, as it helps continue growing our listening audience! Also, check out our Website! Lastly, don't forget to leave a voice memo using the link below to tell us who you think wins next week's fight between Ratchet and Clank vs Jak and Daxter, and also please feel free to join us for Hacking the Game with the 2 characters as well! --- Send in a voice message: https://podcasters.spotify.com/pod/show/konsolekombat/message

Remember The Game? Retro Gaming Podcast
Remember The Game? #281 - Jak & Daxter: The Precursor Legacy

Remember The Game? Retro Gaming Podcast

Play Episode Listen Later Jan 10, 2024 96:00


Our Patreon podcasts are FINALLY available on Spotify! You can browse the entire catalog by searching for 'Remember The Game? Industries' on Spotify now! Are you on social media? Of course you are. So follow us!  Twitter: @MemberTheGame Instagram: @MemberTheGame Twitch.tv/MemberTheGame Youtube.com/RememberTheGame And if you want access to hundreds of bonus (ad-free) podcasts, along with multiple new shows EVERY WEEK, consider showing us some love over at Patreon. Subscriptions start at just $3/month, and 5% of our Patreon income every month will be donated to our 24 hour Extra-Life charity stream at the end of the year! Patreon.com/RememberTheGame We've been talking about covering Jak & Daxter for YEARS! I just needed to find time to replay it first. I finally squeezed it in, and I'm happy to say it's as fun as I remember. I love this damned game. If you've never played, it's your classic 3D platforming collect-a-thon. But the charming characters, crisp graphics, and interactions between Jak and Daxter put it alongside Banjo Kazooie, Ratchet & Clank, and Sly Cooper as one of my favourite non-Mario platformers of all time. I haven't been able to put it down for a week, just hit the 100% mark yesterday, and I'm ready to (finally) tell you all about why I love this game so much, and why you should, too. Years ago, I promised my buddy Andre that I'd have him on the show if we ever talked Jak & Daxter, and the time has come. I hope this episode lives up to the hype for those of you who have been waiting forever for it, and once we're done talking orbs, we spend a few minutes speculating about the future of the series, too. And before we swim in eco, I put together another edition of the Infamous Intro! This week, someone asks how I feel about emulating modern games if you already own them. Where did sports game developers go wrong? And are there any myths about Canada I want to confirm or deny? Plus we play another round of 'Play One, Remake One, Erase One', too! This one features 3 PS2 platforming classics: Ratchet & Clank: Up Your Arsenal, Sly Cooper 2, and Ape Escape 2! Learn more about your ad choices. Visit megaphone.fm/adchoices

My Perfect Console with Simon Parkin
Josh Scherr, narrative director (co-writer Uncharted 4, The Last of Us Part II).

My Perfect Console with Simon Parkin

Play Episode Listen Later Dec 12, 2023 81:46


My guest today is an American writer and narrative designer for video games. After graduating from the USC School of Cinematic Arts with an MFA in Animation, he worked as an animator in Hollywood, contributing to an early version of Shrek at DreamWorks, Dinosaur at Disney, and the music video for Californication by the Red Hot Chili Peppers. In 2001 my guest joined the video game studio Naughty Dog and worked as the cinematics animation lead on seven titles, including Jak and Daxter and the first three Uncharted games, a series for which he also helped to develop the storylines. He then became a staff writer and narrative designer on Uncharted 4: A Thief's End, Uncharted: The Lost Legacy, and the recent blockbuster The Last of Us: Part II. In 2020 he left Naughty Dog after nearly two decades and joined Crop Circle Games as narrative director on the studio's first, as yet unannounced title. Play the console: A Mind Forever Voyaging. Rez. Ico. Bloodborne. Outer Wilds. Thank you for listening to My Perfect Console. Please consider becoming a Patreon supporter; your small monthly subscription will help to make the podcast sustainable for the long term, and you'll receive bonus content and access to the My Perfect Console community: https://www.patreon.com/myperfectconsole Be attitude for gains. https://plus.acast.com/s/my-perfect-console. Hosted on Acast. See acast.com/privacy for more information.

Post Modern Art Podcast
REIGNITED | Charles Zembillas (Episode #149) (Part 1)

Post Modern Art Podcast

Play Episode Listen Later Nov 24, 2023 98:16


Enjoy a smashing conversation with Charles Zembillas, a legendary character designer with years of experience across TV and video games who is also bringing up the next generation of animators at the Animation Academy in Burbank, California. We discuss the wild early years of the industry, his hand in designing iconic characters like Jak & Daxter, Crash Bandicoot and Spyro the Dragon, the esteemed alumni of the Animation Academy, and so much more! Charles's Links: The Animation Academy website: https://www.theanimationacademy.com/ Twitter: https://twitter.com/zembillas Facebook: https://www.facebook.com/TheAnimationAcademy Blog: https://zembillas.blogspot.com/ YouTube: https://www.youtube.com/user/AnimationAcademy Thumbnail by: Jasper - https://twitter.com/PropertyOfHog Indiegogo for THE EVIL LITTLE THING: https://www.indiegogo.com/projects/the-evil-little-thing-show#/ Check out the NEW MERCH SHOP: https://post-modern-art-podcast-shop.fourthwall.com/ Join the PostModArtPod Discord server: https://discord.gg/bdg4UFbmm9 Join the PMAP Patreon: https://www.patreon.com/pmap Intro Song - "Seductive Treasure" - Color of Illusion Outro Song - "Parts In Motion" - Vera Much Stream her EP "Thank U!": https://open.spotify.com/album/3AO61mm8a81osp9FsPpFgv?si=sZ2Pq_aSTbWLzHLwff2Rig Linktree (To find other platforms, socials, etc.): https://linktr.ee/PostModernArtPodcast For business inquiries, contact postmodernartpodcast@gmail.com Showrunners of the podcast are Nathan Ragland and Maria Moreno Maria's Links: Twitter: https://twitter.com/TipsyJHearts Instagram: https://www.instagram.com/tipsyjhearts/ Patreon: https://www.patreon.com/tipsyjhearts Ko-fi: https://ko-fi.com/tipsyjhearts Portfolio: https://tipsyjhearts.wixsite.com/portfolio Produced with A1denArtz Aiden's Links: Carrd: https://a1denartz.carrd.co/ Tumblr: https://a1denartz.tumblr.com/ Bluesky: https://bsky.app/profile/a1denartz.bsky.social Inkblot: https://inkblot.art/profile/a1denartz Instagram: https://www.instagram.com/a1denartz/ Go out there and create something special!

The Back Page: A Video Games Podcast
From Jak & Daxter to Eurogamer (with Ellie Gibson)

The Back Page: A Video Games Podcast

Play Episode Listen Later Nov 10, 2023 74:17


This week's guest is comedian, writer and streaming superstar Ellie Gibson, who talks us through her career at Eurogamer during the heyday of the PS3 and 360 era of games media. This week's music is from the Sky Odyssey soundtrack by Kow Otani. Hosted on Acast. See acast.com/privacy for more information.

The Fourth Curtain
All Right the First Time with Jason Rubin

The Fourth Curtain

Play Episode Listen Later Oct 26, 2023 79:06


Naughty Dog co-founder Jason Rubin made massive hits like Crash Bandicoot and Jak and Daxter before moving on to run THQ. Now he's building the metaverse at Meta. Lessons in business success and the future of technology in this week's episode! Thank you for listening to our podcast all about videogames and the amazing people who bring them to life! Hosted by Alexander Seropian and Aaron Marroquin. Find us at www.thefourthcurtain.com. Come join the conversation at https://discord.gg/KWeGE4xHfe. Videos available at https://www.youtube.com/@thefourthcurtain. Follow us on Twitter: @fourthcurtain. Featuring the music track Liberation by 505. Please consider supporting the show by pre-registering for our Season Two Kickstarter at www.thefourthcurtain.com/kickstarter

Monday Morning Critic Podcast
(Episode 420) "Pulp Fiction" Actor: Phil LaMarr (Marvin).

Monday Morning Critic Podcast

Play Episode Listen Later Oct 19, 2023 34:34


Episode 420. "Pulp Fiction". Actor: Phil LaMarr. Actor Phil LaMarr joins me to talk Pulp Fiction, Star Wars, his career in acting, and so much more. A Los Angeles native, Phil is an alumnus of Yale University and The Groundlings Theater and is perhaps best known as one of the original cast members of MAD TV, "Hermes" on FUTURAMA, "Marvin" in PULP FICTION, "Green Lantern" on JUSTICE LEAGUE and as the voice of SAMURAI JACK. For over 30 years, Phil has thrilled audiences with his work on camera and behind the mic on TV shows such as STATIC SHOCK, FAMILY GUY, YOUNG JUSTICE, STAR WARS: THE CLONE WARS, THE FLASH, SUPERGIRL, GET SHORTY, LUCIFER, CURB YOUR ENTHUSIASM and VEEP; feature films like MADAGASCAR 2, INCREDIBLES 2, and THE LION KING (2019); and video games including JAK & DAXTER, FORTNITE, SHADOW OF MORDOR, and the METAL GEAR SOLID, INJUSTICE and MORTAL KOMBAT series. Welcome, Phil LaMarr. https://www.instagram.com/mondaymorni... https://twitter.com/mdmcritic?lang=en https://www.tiktok.com/@mondaymorning... https://www.facebook.com/mondaymornin... www.mmcpodcast.com mondaymorningcritic@gmail.com #starwars #pulpfiction #clonewars #greenlantern #dc #dccomics #interview #podcast #samuraijack #madtv #justiceleague #futurama #familyguy

Remember When?
Jak & Daxter: An Ottsel's Haven

Remember When?

Play Episode Listen Later Sep 15, 2023 38:50


What happens when you fall into a pit of dark eco? You get an Ottsel and the plot for Jak and Daxter: The Precursor Legacy, the first game of a series that is forever connected to the PlayStation 2. The Remember When? crew revisit this Naughty Dog gem and talk about what makes the Jak & Daxter series so memorable. ---- Buy yourself some merch! Thanks for listening! If you enjoyed, leave us a rating + review and we'll give you a shoutout on an episode! Follow us and interact on IG: @rememberwhenpodcast Drop us a line with any future episode topics!

Diet Coke & Lilith's House of Snax
#106 – Monterey Jackin' Daxter

Diet Coke & Lilith's House of Snax

Play Episode Listen Later Sep 5, 2023 27:40


Diet Coke and Lilith sample some Nestle Toll House Cookies, with a gooey surprise ;) Intro voiceover by Jarett Raymond Music & Sounds used during the intro & outro: Hall of the Mountain King by Kevin MacLeod (incompetech.com) Thunder by lennyboy (freesound.org) Door, Front, Opening, A - InspectorJ (freesound.org) Noise - Juandamb (freesound.org) Walking through Mud - Breviceps Strong wind inside house _ Viento fuerte interior casa - SonoRec (freesound.org) Tape Start - unfa (freesound.org) video_recorder_load_cassette_02 - Magedu (freesound.org) creaky door - m_marek (freesound.org) Door, Front, Closing, A - InspectorJ (freesound.org) Door closing, door closed - steinhyrningur (freesound.org) Door_Heavy_Reverb_Open_Close - LamaMakesMusic (freesound.org) video_recorder_eject_cassette - magedu (freesound.org) Music used for snack descriptions: Soft Synth Pad Chord Progression 95 bpm - tyballer92 (freesound.org)

Save Before Quitting
Level 140 - It's Mealtime Fun!

Save Before Quitting

Play Episode Listen Later Aug 26, 2023 93:17


Level 140 - It's Mealtime Fun! Level Up Gamers! Thank you for joining the guys for another week of the Save Before Quitting Podcast! As we begin this episode, Chris gushes to us about his time eating fondue and his weekend spent playing Dungeons and Dragons. Once we get into what we've been playing, the D&D vibes continue as we get into more Baldur's Gate and Tears of the Kingdom. Chris also keeps us updated on his Call of Duty league. Additionally, we get into our news segment. We cover Charles Martinet stepping down as the voice of Mario, Xbox's console wraps, Chris' excitement about the return of Zoo Pals plates, Good Burger 2, a live-action Jak and Daxter film, Ant's excitement about the launch of Sea of Stars, creating your own Funko Pop, the launch of the Ahsoka series on Disney Plus, everything announced at Gamescom, and more! Finally, we find out the Game Awards will be on December 7th, and you can stream the show right along with us as always! #LEVELUP JOIN OUR PATREON! patreon.com/saveb4quitting JOIN OUR DISCORD! https://discord.gg/sUhJuSE3 You can now find everything SB4Q related at SAVEBFOREQUITTING.COM! If YOU have a question or comment PLEASE don't hesitate to hit us up at savebeforequitting@gmail.com or any of our contacts down below ⬇️⬇️ Follow us on our social media! Twitter: @saveb4quitting @CJKingAnimation @ANTMAN2K Instagram: @saveb4quitting @cjkingjr @antman2k We're now on TikTok!: @Saveb4quitting Subscribe to our Youtube channel! https://www.youtube.com/channel/UC1BOvAO0528ETmpRIpbdrHQ/ Follow us on Twitch! Twitch.tv/saveb4quitting Theme song by @AoGotTheSauce3

The Gorge: With Ben and Sara
Episode 235: "You are Building a Bridge Between Kermit the Frog and Hideo Kojima"

The Gorge: With Ben and Sara

Play Episode Listen Later Aug 23, 2023 104:01


Ben played 30XX! Sara read Umineko! Report: Embracer's $2 Billion Deal That Blew Up Was With Saudi Arabia. Microsoft confirms Final Fantasy 7 Remake post was a mistake, not a tease. One Piece: Pirate Warriors 4 Gameplay Shows Luffy Gear 5 Form. Netflix's Scott Pilgrim Anime Has an Official Teaser and a Full Series Title. Sex% is now an official Baldur's Gate 3 speedrun category and someone already did it in under 8 minutes. JoJo's Bizarre Adventure: All Star Battle R DLC character Leone Abbacchio announced. End of the road: The Xbox 360 game marketplace will shut down. Twitch streamers can soon stop banned people from watching altogether. Nintendo Confirms Original Mario Voice Actor Charles Martinet Is 'Stepping Back' From Recording. Uncharted's Tom Holland reportedly set to star in Jak and Daxter adaptation with Chris Pratt. Opening Night Live – All The New Trailers and Announcements. Support the show PATREON: http://www.patreon.com/thegorge Discord: discord.gg/K8A6SG2 Big Gay Nerds: https://soundcloud.com/biggaynerds Background music: DJ CUTMAN: https://music.djcutman.com/ Broke for Free: https://brokeforfree.com Visager: https://visager.bandcamp.com Adventuria: https://adventuria.bandcamp.com/ INTRO: https://soundcloud.com/zak235 Ben's Twitter: @TheGorgePodcast Sara's Twitter: @Radioinactivity E-mail: thegorgepodcast@gmail.com

All You Can Geek
The Future of Movies is Video Games - AYCG Moviecast #661

All You Can Geek

Play Episode Listen Later Aug 23, 2023 33:09


DC gets weird on this week's Moviecast. Snyder's unrealized plans included a reboot. Blue Beetle flops despite good reviews. Wonder Woman 3 is an illusion. James Gunn gets trolled. Meanwhile, Jak and Daxter is getting a Tom Holland-led movie adaptation and Scott Pilgrim returns on Netflix. #bluebeetle #dcu #jakanddaxter #sony #scottpilgrim #netflix #mcu #streaming #Movies #tv

Film Bros! Podcast
Ep 169 Blue Beetle a potential failure, Logan Paul walks out of Oppenheimer, Sony making Jak and Daxter live action and more

Film Bros! Podcast

Play Episode Listen Later Aug 23, 2023 73:22


In this episode the FilmBros discuss Blue Beetle not putting up numbers on opening weekend, James Gunn saying there's no Batman cameo in Blue Beetle, Logan Paul walking out of Oppenheimer, Sony making a Jak and Daxter movie with some odd casting choices, and so much more. Support the show

El Gamer Cave
The return of God of War is underway.

El Gamer Cave

Play Episode Listen Later Aug 21, 2023 41:36


Welcome to our weekly podcast! In each episode we explore the latest news from the world of video games and the entertainment industry. In this edition, we discuss the exciting developments happening across the industry. We start with Santa Monica Studio, which is looking for a combat designer with God of War experience; we consider what to expect from this next installment and how it could impact the franchise. We also explore the intriguing clues pointing to a possible announcement of a new Tomb Raider this year, despite layoffs at the company. What can we expect from this iconic adventure saga? We then dive into the world of Cyberpunk 2077, with an exciting event coming on August 24 featuring news and gameplay; we break down expectations for this anticipated title. Elsewhere, we look at how an incredible PC inspired by Starfield, Bethesda's next big release, is being built, and how it could shape the play experience. We also discuss the rumors that Sony may be working on a Jak & Daxter movie, weighing the possibilities and what we might expect from a film adaptation. And that's not all! We also have plenty of other news and surprises in this episode packed with exciting developments from the entertainment world. Don't miss it! Tune in to our weekly podcast to stay up to date with all the latest news and discussions from the world of video games and beyond. See you in our next episode! --- Support this podcast: https://podcasters.spotify.com/pod/show/elgamercave/support

Boz To The Future
The Future According to Jason Rubin

Boz To The Future

Play Episode Listen Later Jun 7, 2023 42:04


In today's episode, our host, Head of Reality Labs and Meta CTO Andrew "Boz" Bosworth, is joined by Jason Rubin, VP of Metaverse Experiences. Rubin is an industry veteran who started out as co-founder of the gaming studio Naughty Dog, where as a programmer and director he led work on titles like Crash Bandicoot and Jak & Daxter — two pioneering games from the early days of the PlayStation platform. Following that, he worked across a range of gaming, social media, and technology businesses before landing at Oculus in 2014. And he's played an integral role in the development of the VR gaming and content ecosystem ever since. With Meta Quest 3 coming in the fall, Boz and Rubin get into what to expect from our next generation of VR headsets, as well as some of the biggest VR games coming this year, including Asgard's Wrath 2, Ghostbusters: Rise of the Ghost Lord, Arizona Sunshine 2, and more. They also discuss why they both love photography so much, what's the most slept-on VR game out there right now, and the graphic novel you need to be reading in 2023. For feedback and suggestions, drop Boz a message @boztank on Instagram or Twitter.

The Potential Podcast!
Wise Cracking With Max Casella

The Potential Podcast!

Play Episode Listen Later May 23, 2023 49:54


Extra! Extra! Read all about it! Over 30 years since the release of Disney's "Newsies", we had one of the original cast, Racetrack Higgins himself, Max Casella. After the highly lauded release of the TV series Tulsa King, Chris and Taylor were very fortunate to have a fun and surprisingly nerdy chat with actor Max Casella. Join us as we discuss his start on Newsies, his various roles in film and TV, working with acting greats like Stallone and Gandolfini, and voicing Daxter in the Jak & Daxter video game series. We even tapped into his nerdy gaming side! Follow us on: Instagram: https://www.instagram.com/thepotentialpodcast/ Facebook: https://www.facebook.com/thepotentialpodcast Twitter: https://twitter.com/thepotentialpod Thanks to our sponsors: AURA & NEURO. Aura: Get a 14-day free trial of Aura for individuals, couples, and families by going to aura.com/potential Neuro: Our listeners will get a 20% discount on any gum or mints by going to tryneurogum.com/potential  ★ Support this podcast on Patreon ★

They Create Worlds
A Very Naughty Dog Pt 2

They Create Worlds

Play Episode Listen Later May 1, 2023 121:01


TCW Podcast Episode 185 - A Very Naughty Dog Pt 2   In our continuing look at Naughty Dog, we examine the further development of Crash Bandicoot and the Jak and Daxter series. After Sony's acquisition of Naughty Dog, and during the development of Jak 2, Andy and Jason decided to part ways with the company to focus more on their personal lives, though they did help train the people who took their place. These new leaders at the helm led to the creation of Naughty Dog's most famous franchises, Uncharted and The Last of Us: games that really pushed forward what is possible for storytelling in video games.   Polygon Man: https://en.wikipedia.org/wiki/Polygon_Man Sony E3 1996: https://www.youtube.com/watch?v=GHTNTuiOt7s 1984 Macintosh Commercial: https://www.youtube.com/watch?v=2zfqw8nhUwA Crash Bandicoot Nintendo Commercial: https://www.youtube.com/watch?v=mTi5EaocGaY All Crash Bandicoot Commercial: https://www.youtube.com/watch?v=bLdQczq3NMM Crash Bandicoot PSX: https://www.youtube.com/watch?v=xK-h4M4Aetg Crash Bandicoot Japanese Commercials: https://www.youtube.com/watch?v=RS3rpyTqoUs Crash Bandicoot Japanese VS America: https://www.youtube.com/watch?v=qqL2XffOBA0 Crash Bandicoot 2: https://www.youtube.com/watch?v=bmnq7yedAzI Crash Bandicoot Warped: https://www.youtube.com/watch?v=HG-NRnGp3RA Crash Team Racing: https://www.youtube.com/watch?v=QQDWEIiKN6E Jak and Daxter: https://www.youtube.com/watch?v=9lpq3zVXs9A Michael Jordan Chaos in the Windy City: https://www.youtube.com/watch?v=gJcIjxfFKdA Legacy of Kain Overview: https://www.youtube.com/watch?v=Ysxs_oErmWo Jak 3: https://www.youtube.com/watch?v=PLIWS4QDc6Q Uncharted Drake's Fortune: https://www.youtube.com/watch?v=h0XhMMjFjBU Gears of War PC: https://www.youtube.com/watch?v=yecvln9dTOI&t=72s Kill Switch: https://www.youtube.com/watch?v=i1zW5X0CwTs ICO: https://www.youtube.com/watch?v=tZC_FzeRz4Y Uncharted 2 Among Thieves: https://www.youtube.com/watch?v=0EhpxpPeTKE Uncharted 2 Train Fight: https://www.youtube.com/watch?v=LXaOguzFHTE DAVID BENIOFF on City of Thieves: https://www.youtube.com/watch?v=9P_QI0FoTnE John Hartigan - Sin City: https://sincity.fandom.com/wiki/John_Hartigan Intro to The Last of Us: https://www.youtube.com/watch?v=BC_HB7wDYg8 Quark about Humans: https://www.youtube.com/watch?v=-D2SHNqkjbY   New episodes are on the 1st and 15th of every month!   TCW Email: feedback@theycreateworlds.com  Twitter: @tcwpodcast Patreon: https://www.patreon.com/theycreateworlds Alex's Video Game History Blog: http://videogamehistorian.wordpress.com Alex's book, published Dec 2019, is available at CRC Press and at major on-line retailers: http://bit.ly/TCWBOOK1   Intro Music: Josh Woodward - Airplane Mode - Music - "Airplane Mode" by Josh Woodward. Free download: http://joshwoodward.com/song/AirplaneMode  Outro Music: RolemMusic - Bacterial Love: http://freemusicarchive.org/music/Rolemusic/Pop_Singles_Compilation_2014/01_rolemusic_-_bacterial_love    Copyright: Attribution: http://creativecommons.org/licenses/by/4.0/

Thumb Cramps
PlaystApril Bonanza and Other Games (Ft. Ethan Taylor)

Thumb Cramps

Play Episode Listen Later Apr 27, 2023 89:24


This week on Thumb Cramps, PlaystApril presented by Ben comes to an end as we're joined by Ethan Taylor to farewell PlaystApril the only way we know how: with a PlaystApril Bonanza that looks at Crash Twinsanity for the PS2, Rugrats: Search for Reptar for the PS1, Beast Wars: Transformers for the PS1, Primal for the PS2, Jak and Daxter for the PS2, Silent Hill 2 for the PS2 and finally Tron: Identity for the Nintendo Switch. Big month. Bigger episode. Biggest amount of PlaystApril cheer. Check out STRAY GODS here Email us at ThumbCrampsPod@gmail.com Find us on Twitter: Jackson | Duscher | Thumb Cramps | Ethan Watch us on Twitch: Jackson | Duscher You can now physically send us stuff to PO BOX 7127, Reservoir East, Victoria, 3073. Join our facebook group here or join our Discord here. Theme music by Benny Davis! You can find all his stuff at his website or check out his YouTube channel. Hosted on Acast. See acast.com/privacy for more information.

Retronauts
488: Retronauts Episode 488: Jak & Daxter

Retronauts

Play Episode Listen Later Oct 17, 2022 126:20


Join Stuart Gipp with guests John Linneman and Thomas Nickel for a hop, skip, and headshot through this fantasy future world. Jak's gonna kill Praxis, and we here at Retronauts are gonna kill praxis. Retronauts is made possible by listener support through Patreon! Support the show to enjoy ad-free early access, better audio quality, and great exclusive content. Learn more at http://www.patreon.com/retronauts

Podcast Beyond - IGN's PlayStation Show
New PS Plus' First Big Update - Beyond 758

Podcast Beyond - IGN's PlayStation Show

Play Episode Listen Later Jul 13, 2022 70:01


On this week's episode of IGN's weekly PlayStation Show, Podcast Beyond!, host Jonathon Dornbush is joined by Jada Griffin and Mark Medina to talk about the latest and greatest in the world of PlayStation, and also why Mark has a grudge against his PS5.   But before that, the panel jumps into the new PS Plus update for the month of July, talking about the first big refresh for PS Plus Extra and Premium subscribers. We break down the new releases like Stray coming to the service, alongside games like Final Fantasy VII Remake and Marvel's Avengers, and how we feel about the value of the Extra and Premium tiers one month into the service. Plus, some hopes for where PS Plus goes from here. Next we talk about some of the news of the week, including The Last of Us Part 1 going gold, Haven fully being acquired, release dates for games like Skull and Bones and Valkyrie Elysium, and a whole bunch of indies coming to PlayStation over the coming months. We also dig into some wonderful Memory Card stories about memory cards and a surprising Jak and Daxter moment, Mark's reasons for wanting to hit his PS5 with a baseball bat and whether the two need to go to couples' therapy, and more.  If you'd like to write into the show with questions, thoughts on topics discussed, or Memory Card stories, reach out to beyond@ign.com!