Peter Spezia is here! Nintendo Switch 2 has finally been revealed, and it felt very much like the E3 conferences of old. The Direct was packed with highs and lows and left a wake of confused fervor behind. Join us as we dissect and react to the next generation of Nintendo home consoles! You can download a copy of this episode's transcript here.

Show Notes

Last Minute Predictions
Peter's Prediction Tweet
Max's Big Three Predictions for 2025
Kirby Air Ride (Game Concepts) - Masahiro Sakurai

The Switch 2 Experience
Nintendo Switch 2 Direct
Nintendo Switch 2 – Overview Trailer
Austin Evans Reactions to Smooth Control Sticks
Nintendo Treehouse Live
Four Swords GameChat Example
Ask the Developer Vol. 16: Nintendo Switch 2 — Part 3: Inherently added value
Virtual Game Cards

Third Party Games
Phil Spencer's Comments on Series S and the handheld PC Market

Mario Kart World
Mario Kart World Trailer

Nintendo Switch 2 Welcome Tour
Nintendo Switch 2 Welcome Tour Treehouse Live Gameplay
Nintendo says Switch 2 could've been called 'Super Nintendo Switch'

Nintendo Switch 2 Enhanced Editions
The Legend of Zelda games - Nintendo Switch 2 Enhanced Editions
Kirby and the Forgotten Land – Nintendo Switch 2 Edition + Star-Crossed World
Super Mario Party Jamboree – Nintendo Switch 2 Edition + Jamboree TV
Metroid Prime 4: Beyond – Nintendo Switch 2 Edition Treehouse Live Gameplay
Switch 1 Games with Free Updates

Donkey Kong Bananza
Donkey Kong Bananza Trailer

Wrap-Up
"A Passion for Smash" – Celebrating 15 Years of Super Smash Bros. Brawl with Peter Spezia
Peter Spezia Original Soundchat
Peter's Twitter @PeteSpeakEasy
Peter's Bluesky @petespeakeasy.bsky.social
Max Frequency - Max's home online
Chapter Select - A seasonal, retrospective podcast where we bounce back and forth between a series exploring its evolution, design, and legacy.
A few musician friends were asking me how I grew my project. In this pod, I go through the mindset and perspective that worked well for me, as well as a bit of psychology I learned from Daniel Levitin's "This Is Your Brain on Music."

For 30% off your first year with DistroKid to share your music with the world, click DistroKid.com/vip/lovemusicmore

Want to hear my music? For all things links, visit ScoobertDoobert.pizza

Subscribe to this pod's blog on Substack to receive deeper dives on the regular.
In February 2025, occupational therapist Kelsie Olds, who hosts a Facebook page called The Occuplaytional Therapist, published a post that struck a nerve, drawing almost 4,000 reactions and almost 2,000 shares to date. The post is a powerful statement on the political nature of play, on respecting children, supporting families, and taking care of each other. Listen in as she talks us through it.

For more of Kelsie's work, find her on Facebook as The Occuplaytional Therapist, or follow her Substack here: https://occuplaytionaltherapist.substack.com/

Want to support the show? You can make a one-time gift of $5 or more, or become a member for bonus content! Find more information here: buymeacoffee.com/heatherf

Thanks for listening! Save 10% on professional development from Explorations Early Learning and support the show with the coupon code NERD. Like the show? Consider supporting our work by becoming a Patron, shopping our Amazon link, or sharing it with someone who might enjoy it. You can leave a comment or ask a question here. Click here for more Heather. For a small fee we can issue self-study certificates for listening to podcasts.
The creator of the Terrifier franchise was bothered by some of the most annoying people on the internet, and we got new trailers for Fantastic 4 and Jurassic Park. Also, Buffy the Vampire Slayer is probably getting a remake, and almost no one is happy about it.

Leave a comment below! We try to respond to as many as we can! Don't forget to Like and Subscribe!

#movies #games #tv

Donate Here - https://www.paypal.com/donate?hosted_button_id=Y6TSU94STL9PU
All our Links - https://direct.me/theunderground

What is our Value for Value system? Value for Value is a listener-based business model where you determine the value our content is worth. If you feel you are getting value from our content, please consider becoming a supporter by donating your time, talent, & treasure. Time: any effort you put into improving, developing, or sharing our content. Talent: any skills you possess that you want to contribute to help us develop our platform (i.e., artwork for podcast episodes, branding design, editing, etc.). Treasure: pay a one-off amount or a recurring contribution for the value you think our service is worth. With any payment you send via PayPal, please be sure to include a note so that we can read it on the livestream, if you'd like. Your donations keep our content advertisement free. Thank you.

Where do you support us? Click the direct.me link to find our PayPal link for contributions as well as our YouTube, Odysee, TikTok, Instagram, and Twitter links! We appreciate the engagement from all of you!

Contribution amounts: Donors of less than $100 will automatically become Producers of the corresponding episode! Donors of $100 and above will automatically become Associate Executive Producers of the corresponding episode! Donors of $200 and above will receive the Executive Producer credit for that episode! We will list the credits in our show notes as Executive Producer, Associate Executive Producer, & Producer; this is a genuine credit we will vouch for.
Generally, executive producers are primarily responsible for financing the project, so this is a legitimate credit for your resume. Please note: any donor can remain anonymous upon request. All donors will receive a special mention on the show unless otherwise noted!

Special note: the Value for Value business model originated with Adam Curry & John C. Dvorak of the No Agenda podcast.

https://www.youtube.com/watch?v=PgihPtnBSek
Cindy Guthrie is an artist who draws inspiration from her life experiences. Living as an Okie for the past 20+ years, she spent her childhood with one foot in the middle of Dallas and the other on her family's Polled Hereford ranch in Emory, Texas. Texas wildflowers, blackberries, sandy garden rows, and the white-faced cows on her grandparents' farm are enduring images of beauty from her childhood. She also draws from travel experiences, especially beachy locales and the southwest/desert landscape, as favorite subjects for her paintings and photography. She hopes a painting will bring to mind, and into your home, a special memory or feeling of freedom, adventure, joy, hope, or childhood.

Today, we're talking about:
From "never tried to paint" to professional artist later in life
Prioritizing passion over skill and allowing this to lead you
Learning to look with artist eyes, beginning to really see

C A N D A C E C O F E R
author + speaker
website | instagram | youtube | facebook
On this episode of the Awayken Space podcast I dive into the misconception many people have about life being inherently painful and difficult.
Also: why do we habituate to life's greatest pleasures? This episode originally aired on July 26, 2020.
Jase gets riled up at the New York Yankees' beard policy snub, insisting there's only one reason a man should ever trim his chin hairs. Zach does his best to fulfill his oath to track down a treasure hunt for Jase in England and offers a unique perspective on Jesus' water-into-wine miracle. The guys explore why organized religion is often unappealing to modern society.

In this episode: John 2

"Unashamed" Episode 1048 is sponsored by:
https://tnusa.com/unashamed — Call 1-800-958-1000 or visit the website for more details.
https://preborn.com/unashamed — Click the link or dial #250 and use keyword BABY to donate today.

Get your tickets now for LAST BREATH, rated PG-13. Opens Friday, February 28th in theaters everywhere!

Listen to Not Yet Now with Zach Dasher on Apple, Spotify, iHeart, or anywhere you get podcasts.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Lissa Anglin is a seasoned freelancer and creative sidekick to vibrant and optimistic brands. She operates as the Creative Director for her business, Lissa Anglin Creative, and she and her team craft compelling imagery, strategic branding, captivating design, and seamless social media management for their clients. With a degree in art, Lissa's stationery and fine art photography are also featured on platforms like Minted, Greenvelope, and Target. A long career in photography and content creation also led her to food styling for commercial clients, and she can frequently be found on set sporting an apron, with a few industry tricks up her sleeve. Beyond her creative pursuits, Lissa enjoys serving her church and her bustling family life with her husband and three kids.

Today, we're talking about:
Your first step to building a brand
How to build a brand that is both unforgettable and genuine to your mission
Owning and designing your own Airbnb

C A N D A C E C O F E R
author + speaker
website | instagram | youtube | facebook
Late night comedians decide to make jokes about the president again. Real or not?
Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded more than 300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: "Pydantic is all you need" and "Pydantic is STILL all you need". Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open-source project into a full-stack AI engineering platform with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provide standard definitions to track performance metrics like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just as web observability platforms were replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from ClickHouse to DataFusion for their backend.
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in at ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.

They define an "Agent" as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

"We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood."

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or "waiting days" between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use ClickHouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and Logfire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it right from you. What is Pydantic and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around it, exactly when you convert and when you don't, are complicated, but because of that, it's more than just validation.
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization, on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output council in open source that people were talking about, or was it just random?

Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along, and like the first I ever heard of him was over a weekend: I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it can kind of be one source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land.
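Sam's coercion example can be made concrete; this is a minimal sketch assuming Pydantic v2, with invented model and field names:

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class User(BaseModel):
    id: int
    name: str


# Lax mode (the default): the string "123" is coerced to the int 123.
user = User(id="123", name="Ada")
assert user.id == 123


class StrictUser(BaseModel):
    model_config = ConfigDict(strict=True)
    id: int
    name: str


# Strict mode disables that coercion, so the same input now fails.
try:
    StrictUser(id="123", name="Ada")
except ValidationError:
    print("strict mode rejected the string")

# The same model also generates a JSON schema, which is why one model
# can be the single source of truth for structured outputs and tools.
schema = User.model_json_schema()
assert schema["properties"]["id"]["type"] == "integer"
```

The one-model, two-artifacts property (a validator plus a schema) is what made Pydantic such a natural fit for LLM structured outputs.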
Every now and then there is a new sort of in-vogue validation library that takes over for quite a few years, and then maybe something else comes along. Is Pydantic done, like the core Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2, as in v2 was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type; we reckon that can give us another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example NumPy arrays, are validated and serialized. But there's also stuff going on. And for example, Jiter, the JSON library in Rust that does the JSON parsing, has a SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you just want to put the data into a database and probably load it again from Pydantic.
So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is: Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then at the beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-time for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but, I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of model performance, which is time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency, inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies.
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building good general-purpose observability inside Logfire and making Pydantic good for general-purpose use cases. And the AI has kind of come to us. Like, not that we want to get away from it, but the appetite, both in Pydantic and in Logfire, to go and build with AI is enormous, because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say, globally; obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out, so much space to do things better in the ecosystem, in a way that, like, to go and implement a database that's better than Postgres is a Sisyphean task. Whereas building tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, you released Pydantic AI recently, which is, you know, an agent framework. And early on, I would say everybody, like Langchain and a lot of these frameworks, gave Pydantic kind of like first-class support; they were trying to use you to be better. What was the decision behind "we should do our own framework"? Were there any design decisions that you disagreed with, any workloads that you think people didn't support well?

Samuel [00:08:05]: It wasn't so much like design and workflow, although I think there were some things we've done differently. Yeah. I think, looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem.
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would basically skip some stuff.

Samuel [00:08:42]: I'm shocked by the quality of some of the agent frameworks that have come out recently from well-respected names, which just seems to be opportunism, and I have little time for that. But like the early ones, I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say, has a lot of generics, and it's probably easier to use if you've written a bit of Rust and you really understand generics. We're not claiming that makes it the easiest thing to use in all cases; we think it makes it good for production applications in big systems, where type checking is a no-brainer in Python. But there are also a bunch of things we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run, in Python, as part of the tests, and every single print output within an example is checked during tests. So it will always be up to date.
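The Pydantic team has its own tooling for this, but the underlying idea of executing documentation examples and checking their printed output can be sketched with the standard library's doctest module (the add function here is invented for the demo):

```python
import doctest


def add(a: int, b: int) -> int:
    """Add two integers.

    The example below is executed by doctest and its printed output is
    compared against the recorded output, so the docs can never
    silently drift out of date.

    >>> add(2, 3)
    5
    """
    return a + b


# Find and run every example embedded in add's docstring.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(add, globs={"add": add}):
    runner.run(test)
assert runner.failures == 0
```

Change the docstring to claim `>>> add(2, 3)` gives `6` and the run fails, which is exactly the "examples are tests" guarantee Sam describes.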
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem but are, surprisingly, not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera. I think these are no-brainers, but weirdly they're not followed by some of the other libraries.

Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there are kind of like the LLM-calling frameworks, there are the multi-agent frameworks, there are the workflow frameworks. Like, what does Pydantic AI do?

Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I will tell you, when I built Pydantic, when I built Logfire, and when I built Pydantic AI, my methodology is not to go and research and review all of the other things. I kind of work out what I want and I go and build it, and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite, not that we would want to go and rename them to agent-lite, but the point is you probably build them together to build something that most people would call an agent. So an agent in our case has, you know, things like a prompt, like a system prompt, and some tools, and a structured return type if you want it. That covers the vast majority of cases. There are situations where you want to go further, and the most complex workflows are where you want graphs, and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful.
But then we have the problem that by default, they're not type safe, because if you have a like add-edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some... not all the graph libraries are AI specific. So there's a graph library that does a basic runtime type checking, ironically using Pydantic, to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution: having to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so, from a lot of iteration, we eventually came up with a system of using normal data classes to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there are masses of use cases for them, both in GenAI and other development, but also software is all going to have to interact with GenAI, right? It's going to be like the web. There'll no longer be like a web department in a company; all the developers are building for web, building with databases. The same is going to be true for GenAI.

Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, functions, tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are like the structures in your mind?

Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning.
That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower-level tool that allows you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you, because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows, where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for, and my "yeah, but you could do that in standard flow control in Python" became a less and less compelling argument to me, because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this structured way of defining the workflow of my code. And it's really neat that just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it. I think the question always is... I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like, it looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean.
And that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an End, you're done. We will soon add support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at lots of the graph examples that, like, Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of, like, the workflow for managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calling another function. And some of those lines are "wait six days for the customer to print their piece of paper and put it in the post." And if you're writing your demo project or your proof of concept, that's fine, because you can just say, "and now we call this function." But when you're building in real life, that doesn't work. And now how do we manage the concept of basically being able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph, and it continues to run.
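The "carry on from a node" idea can also be sketched in plain Python: snapshot the node where the run paused, persist it, and later reconstruct it to continue (invented node classes again, not the real library):

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Done:
    summary: str


@dataclass
class AskUser:
    question: str

    def run(self, answer: str) -> Done:
        return Done(f"{self.question} -> {answer}")


# First process: the graph pauses at AskUser, so we persist that node.
paused = AskUser(question="Approve the refund?")
snapshot = json.dumps({"node": "AskUser", "state": asdict(paused)})

# Later (possibly days later, in another process): restore the node from
# the snapshot and continue the graph from exactly that point.
data = json.loads(snapshot)
resumed = AskUser(**data["state"])
result = resumed.run("yes")
```

Because a node is just data plus a run method, "resume six days later" reduces to deserializing one dataclass and calling it, which is the property Sam is describing.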
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but like right now, can this Pydantic AI actually resume, you know, six days later, like you said? Or is this just like a theoretical thing we can get someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now, and obviously Instructor, but now you're going into sort of the more orchestrated things like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomil would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that, right, yet, at least. We're, you know, we're just building a Python library. And what's crazy about our graph implementation is, sure, there's a bit of magic in introspecting the return type, you know, extracting things from unions, stuff like that. But the actual calls, as I say, are literally call a function, get back a thing, and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days saying, tell me how good Pydantic AI is. How good is Pydantic AI versus LangChain? And I refuse to answer. That's your job to go and find out, not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomil was my board member when I was at Temporal. And I think, just generally, also having been a workflow engine investor and participant in this space, it's a big space. Everyone needs different functions. The one thing that I would say is that as a library, you don't have that much control over the infrastructure. I do like the idea that each new agent, or unit of work, or whatever you call it, should spin up inside sort of isolated boundaries. Whereas in yours, I think, everything runs in the same process. But you ideally want it to sort of spin out into its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in, in theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring it right now, because I'm super excited about that as a compute layer for some of this stuff, where, exactly as you're saying, you can run everything as an individual worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up, like, a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once.
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python fully supported? I actually wasn't fully aware of what the status of that is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in WebAssembly, is supported now by Cloudflare. They're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want duplication. You basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing: working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic, and that ships with whatever your distro for Cloudflare Workers is.Samuel [00:20:36]: Yes, that's exactly it. So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. And it's doing exactly that, right? Using Rust to compile to WebAssembly and then calling that shared library from Python.
And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I want to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent hand-off, which seems to be what OpenAI's Swarm would be like, and then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI's Swarm.Samuel [00:21:25]: Well, indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production-ready. Which, yeah, makes sense. Awesome. In fact, it was specifically asking how we can give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my answer of, like, just call the next agent with Python code was not satisfactory to people. So it was like, okay, we've got to have a better answer for that, and that's what led us to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is, I think Anthropic did a very good public service, and also a kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not.
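Two of those four levels, agent delegation and programmatic hand-off, differ mainly in who keeps control. A toy sketch, with plain functions standing in for agents; everything here is hypothetical:

```python
# Toy illustration of agent delegation versus programmatic hand-off,
# with plain functions standing in for agents (all names made up).
def research_agent(query):
    return f"notes on {query}"


def writer_agent(material):
    return f"article from {material}"


def delegating_agent(query):
    # Agent delegation: call the sub-agent like a tool, keep control,
    # and use its output in your own next step.
    notes = research_agent(query)
    return writer_agent(notes)


def handoff_agent(query):
    # Programmatic hand-off: decide who should own the task and
    # return their output as-is, giving up control entirely.
    if query.startswith("write"):
        return writer_agent(query)
    return research_agent(query)


print(delegating_agent("graphs"))  # → article from notes on graphs
```

Graph-based control flow generalizes both: each agent becomes a node, and the edges decide whether control comes back or moves on.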
Yeah, I mean, that was the first, I think, authoritative view of what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But it's only been a couple of weeks. And part of the point is that, because they're relatively unopinionated about what you can go and do with them, you can go and do lots of things with them, but they don't have the structure to have specific names, as much as perhaps some other systems do. I think what our agents are, and that pattern has a name, I can't remember what it is, but this system of decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit, is one form of graph. Which, as I say, is why our agents are effectively one implementation of a graph, and why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that, I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of it or care. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, not right now.Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control, and maybe more observability too, if you break everything out into little composed models and chain them together. And obviously then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they don't, even if you have the observability through LogFire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. What you need is to be able to effectively iterate through these runs so that you can have your own control flow, where you're like, okay, we've gone too far. And that's where one of the neat things about our graph implementation comes in: you can basically call next in a loop rather than just running the full graph, and therefore you have this opportunity to break out of it. But yeah, basically it's the same point, which is, if you have too big a unit of work, to some extent whether or not it involves GenAI, but obviously it's particularly problematic in GenAI, you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.Swyx [00:25:39]: Oh, one thing to drop on this.
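The "call next in a loop" point can be illustrated abstractly: driving the graph one step at a time lets your own control flow impose a budget, instead of relying on a blunt error deep inside the framework. Hypothetical names again:

```python
# Sketch of driving a graph step by step instead of running it to completion:
# the caller's own loop can bail out when a budget is exceeded.
class End:
    def __init__(self, value):
        self.value = value


class Step:
    def __init__(self, n):
        self.n = n

    def run(self):
        return End("done") if self.n >= 100 else Step(self.n + 1)


def run_with_budget(node, max_steps):
    for _ in range(max_steps):
        result = node.run()
        if isinstance(result, End):
            return result.value
        node = result
    # Our own control flow decides we've gone too far.
    return "aborted: step budget exceeded"


print(run_with_budget(Step(0), 10))  # → aborted: step budget exceeded
```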
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and wipe us out in the next training run. So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs, and you could manually orchestrate a nice little graph that does reflection, thinks about whether it needs more inference-time compute, you know, that's the hot term now, and then thinks again and, you know, scales that up. Or you could train Strawberry or DeepSeek R1. Right.Samuel [00:26:32]: I saw someone saying recently that they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it wasn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents, and we don't really need any of these abstraction layers. We can just give our model access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less you trust them, the more you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes they take.Swyx [00:27:42]: Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is: oh, you can easily swap from OpenAI to Claude to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw it in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. GLA seems to be the API that some of the things use, although it returns 503 about 20% of the time. So... Vertex? No, Vertex is fine. But the... GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: on every commit, at least every commit to main, we run tests against the live models. Not lots of tests, but a handful of them. Oh, okay. And we had a point last week where GLA was failing every single run; one of its tests would fail. And I think we might even have commented that one out for the moment. So all of the models fail more often than you might expect, but that one seems particularly likely to fail.
But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in LangChain, and every single framework has to have its own little version of it. I would put to you, and this can be agree-to-disagree, that this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or what's the other one in JavaScript, Portkey. That's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, a defined piece of infrastructure that people have?Samuel [00:29:49]: I think if a company who are well known, who are respected by everyone, had come along at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. The truth is, I've heard varying reports of LiteLLM. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from; I kind of see your point. I think the truth is that everyone is centralizing around OpenAI's API; it's the one to support. So DeepSeek support it. Grok support it. Ollama also does it.
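The centralization Samuel describes means "adding a provider" is often just a base URL plus a key. A minimal sketch; no request is actually made, and the URLs are the providers' documented OpenAI-compatible endpoints:

```python
# No request is made here; this just shows that "swapping providers" is often
# a base URL plus a key, because so many providers speak OpenAI's API shape.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "deepseek": "https://api.deepseek.com",
    "ollama": "http://localhost:11434/v1",  # local, no real key needed
}


def client_config(provider, model, api_key="sk-..."):
    # The resulting dict is what you'd hand to an OpenAI-style client.
    return {"base_url": PROVIDERS[provider], "model": model, "api_key": api_key}


print(client_config("deepseek", "deepseek-chat")["base_url"])  # → https://api.deepseek.com
```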
I mean, if there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type-checked. It uses Pydantic, so I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. They do unify the auth, though, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing and picking models is evals. How do you actually figure out which one you should be using? First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. You know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you have this TestModel, where, just through Python, you try and figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get.
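The mocking idea can be shown in miniature: the production path calls a live model, while the test path injects a plain function that fabricates a structured response, so unit tests never touch the network. This mimics the concept behind TestModel/FunctionModel; the API shown is made up, not Pydantic AI's.

```python
# Production path calls a live model; tests inject a plain Python function
# that fabricates a structured response. Hypothetical API, for illustration.
def live_model(prompt):
    raise RuntimeError("no network access in unit tests!")


def fake_model(prompt):
    # A "function model": plain Python deciding what the model "responds".
    return {"city": "London", "country": "GB"}


def extract_city(prompt, model=live_model):
    return model(prompt)["city"]


# In a unit test we inject the fake and assert on the parsed result:
assert extract_city("Where is Big Ben?", model=fake_model) == "London"
```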
On the evals, I think: watch this space. It's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree on what they are and what they're not. But look, I think it's a really important space. It's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. So let's talk about evals. There's kind of the vibe check, the evals you do when you're building, because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of your observability product, which I tried before; it's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And as people think about evals, what are the right things to measure? What is the right number of samples you need to actually start making decisions?Samuel [00:33:33]: The truth is, I'm not the best person to answer that. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics to work out that having 30 examples probably gets you most of the statistical value of having 200, for, you know, by definition, 15% of the work. But exactly how many examples do you need?
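The back-of-the-envelope claim can be made concrete: the standard error of a mean shrinks like 1/√n, so going from 30 to 200 samples costs 6.7× the labelling work but only shrinks the error bars by a factor of about 2.6.

```python
import math

# Standard error of a mean shrinks like 1/sqrt(n): 30 samples are 15% of
# the labelling work of 200, while the error bars are only ~2.6x wider.
def stderr(n, sigma=1.0):
    return sigma / math.sqrt(n)


print(round(stderr(30) / stderr(200), 2))  # → 2.58
print(30 / 200)                            # → 0.15
```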
That's a much harder question to answer, because it's deep within how the models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, where we allow you to write SQL directly against your data, and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it, we think that's valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate. Being able to write their own SQL, connect to the API, and effectively query the data like it's a database allows people to innovate on that stuff. And it's what allows us to do it as well; I mean, we do a bunch of testing of what's possible by basically writing SQL directly against LogFire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. It's a relatively new project; a lot of it's still being added at the moment. But the basic idea is that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And having that unification allows us to go and basically compare different libraries, compare different models much better. That stuff's at a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
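The "query it like a database" idea in miniature, with SQLite standing in for LogFire's SQL interface; the spans table and its columns are invented for the example.

```python
import sqlite3

# SQLite standing in for LogFire's SQL interface; schema invented for the example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spans (name TEXT, model TEXT, duration_ms REAL)")
con.executemany("INSERT INTO spans VALUES (?, ?, ?)", [
    ("chat", "gpt-4o", 820.0),
    ("chat", "gpt-4o", 1340.0),
    ("chat", "gemini", 2600.0),
])

# Any ad-hoc analysis is "just SQL", e.g. average latency per model:
rows = con.execute(
    "SELECT model, AVG(duration_ms) FROM spans GROUP BY model ORDER BY 2"
).fetchall()
print(rows)  # → [('gpt-4o', 1080.0), ('gemini', 2600.0)]
```

Because it's plain SQL, an evals framework, a dashboard, or a one-off investigation can all sit on the same interface.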
With the exception of LangChain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and, you know, they're even further away from standardization.Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into the AI workflows? There's kind of the question of what a trace and a span are. Is it an LLM call? Is it the agent? Is it the broader thing you're tracking? How should people think about it?Samuel [00:36:06]: Yeah, so they have a PR, which I think may now have been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's by any means the common use case. But I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM: the actual LLM call, what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet implemented, not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can, in the end, send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think one of the most interesting things, though, is that if you think about observability traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of that data is going to be sent. And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud-hosted, but want self-hosting for this observability stuff with GenAI.Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of a self-hosted version of the platform, basically? Yeah.Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see, you know, password as the key, we won't send the value. But, like I said, that doesn't really work in GenAI. So we're accepting that we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance depends on a third party. You know, if you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that.
Here you're going to have spans that maybe take a long time to perform because the GLA API is not working, or because OpenAI is kind of overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take, like, 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build? Why does everybody want to build?
They want to build their own open source observability thing to then sell?Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen, and it's going to live inside OTel, and we might help with it. But we're a tiny team; we don't have time to go and do all of that work. So OpenLLMetry: interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens with the agent frameworks, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But I mean, I was on the OpenTelemetry call last week, I guess this is kind of semi-public, talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of LangChain, where it's not natively implemented. And obviously they're having quite a tough time. And I was realizing, I hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events, in turn, are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation for. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens, and obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel: you can always go and send another attribute, and that's fine. It's just that there are a bunch that are agreed on. But I would say, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has kind of two products: an open source thing and an observability thing, correct? Yeah. I was just kind of curious, any learnings building LogFire?
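The attribute names under discussion look roughly like this in practice; note how the token-usage keys bake in the prompt/completion split Swyx mentions. Exact key names have churned across versions of the draft conventions, so treat these as illustrative rather than authoritative:

```python
# Illustrative only: building the attribute dict you'd attach to a span under
# the draft GenAI semantic conventions. Key names have churned between versions.
def genai_span_attributes(model, prompt_tokens, completion_tokens, top_p):
    return {
        "gen_ai.request.model": model,
        "gen_ai.usage.prompt_tokens": prompt_tokens,          # bakes in today's token split...
        "gen_ai.usage.completion_tokens": completion_tokens,  # ...no slot for reasoning tokens
        "gen_ai.request.top_p": top_p,                        # the one sampling knob specified
    }


attrs = genai_span_attributes("gpt-4o", 120, 48, 0.9)
print(attrs["gen_ai.usage.completion_tokens"])  # → 48
```

As Samuel notes, nothing stops you sending extra attributes alongside the agreed ones; the conventions only fix the shared vocabulary.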
So the classic question is: do you use ClickHouse? Is that the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension for analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized sooner that Timescale wasn't right, and ClickHouse too. But they both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to double-click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is a counterintuitive move that I didn't expect, because, you know, Timescale is an extension on top of Postgres, not super meant for high-volume logging. But yeah, tell us about those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything had to be a map, and maps are a pain for looking up JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. By default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because it compared intervals by the number alone, not the unit. I complained about that a lot, and then they changed it to raise an error saying you have to use the same unit. Then I complained a bit more, and as I understand it, they now convert between units. But stuff like that, when a lot of what you're doing is comparing the durations of spans, was really painful. Also things like: you can't subtract two datetimes to get an interval; you have to use the date_sub function. The fundamental thing is that because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than it would if you were building a platform on top where your own developers write the SQL, and once it's written and working, you don't mind too much. So I think that's one of the fundamental differences.

The other problem I have with both ClickHouse and, in fact, Timescale is the ultimate architecture, the Snowflake-style architecture of binary data in object storage queried with some kind of cache nearby. They both have it, but it's closed source, and you only get it if you use their hosted versions. So even if we had got through all the problems with Timescale or ClickHouse, they would want to be taking their 80% margin, and then we would want to take ours on top, which would basically leave us less space for margin. Whereas DataFusion is properly open source: all of that same tooling is open source. And for us, as a team with a lot of Rust expertise, DataFusion, which is implemented in Rust, is something we can literally dive into and change.
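A Python sketch of that unit pitfall, not ClickHouse code: durations should be compared only after normalizing to a common unit, and subtracting two datetimes should yield a duration directly, with no date_sub function needed. The unit table here is hypothetical.

```python
# Normalize every (value, unit) duration to seconds before comparing,
# so "2 ns" never compares as longer than "1 s".
from datetime import datetime

UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "min": 60.0}

def duration_seconds(value: float, unit: str) -> float:
    """Convert a (value, unit) pair to seconds for unit-safe comparison."""
    return value * UNITS[unit]

# Two nanoseconds is shorter than one second once units are normalized.
assert duration_seconds(2, "ns") < duration_seconds(1, "s")

# Subtracting two datetimes yields a duration directly.
span_seconds = (
    datetime(2025, 1, 1, 12, 0, 1) - datetime(2025, 1, 1, 12, 0, 0)
).total_seconds()
```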
So, for example, I found some slowdowns in DataFusion's string comparison kernel for doing string contains. And it's just Rust code, so I could go and rewrite the string comparison kernel to be faster. Or, for example, when we started using DataFusion, it didn't have JSON support, which, as I've said, is something we needed. I was able to go and implement that in a weekend using the JSON parser we built for Pydantic Core. So for us, DataFusion is the perfect mixture: a toolbox to build a database with, not a database. We can go and implement stuff on top of it in a way you couldn't if you were trying to do that in Postgres or ClickHouse. I mean, ClickHouse would be easier because it's relatively modern C++, but as a team of people who are not C++ experts, that's much scarier for us than DataFusion.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency over these projects. They're like, oh, I should use this or I should use that. They don't ask, what should I pick so that I can contribute the most back to it? But you obviously have an open-source-first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better, faster-moving startup, headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff.
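Not DataFusion's actual Rust kernel, but the shape of a "string contains" kernel is easy to sketch: one predicate evaluated down a whole column, with nulls propagated rather than raising.

```python
# Illustrative sketch only; DataFusion's real kernel is vectorized Rust
# over Arrow arrays. Here None stands in for a null value.
def contains_kernel(column, needle):
    """Evaluate `needle in value` for every row, propagating nulls."""
    return [None if value is None else needle in value for value in column]

mask = contains_kernel(["gen_ai.system", None, "http.method"], "gen_ai")
```

Because the whole query engine is open source, a slow or missing kernel like this is something you can patch directly, which is the agency Samuel is describing.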
But, you know, my last question on LogFire: at some point you run out of community goodwill, the "oh, I use Pydantic, I love Pydantic, I'm going to use LogFire" effect. Then you start entering the territory of the Datadogs, the Sentrys, and the Honeycombs. So where are you really going to spike here? What's the differentiator?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming there were people talking about web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud-first. The same is going to happen to GenAI. And so whether you're trying to compete with Datadog or with Arize and LangSmith, you've got to do general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction, and to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability, where you don't see everything else going on in your app, is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't had. For all that I'm a fan of Datadog and what they've done, if you search "Datadog logging Python" and you try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously you're MIT licensed; you don't have a rolling license like Sentry has, where only the, like, one-year-old version of it is open source. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire, for now, is completely closed source. And in fact, the struggles Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source-available, meant we just avoided that whole subject. The other way to look at it is that, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel morally obligated to make LogFire open source. We have Pydantic; Pydantic is a foundational library in Python. That, and now Pydantic AI, are our contribution to open source. And then LogFire is openly for-profit, right? As in, we're not claiming otherwise. We're not sort of trying to walk a line where it's open source but really we want to make it hard to deploy so you probably want to pay us. We're trying to be straight that it's paid for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first thing I saw is this new, I don't know if it's a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. It doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day on what you can spend on it, or on what the spend is. The other thing we wanted to b
Send us a textSince it was announced early last week, Warner Bros. Discovery's controversial decision to ditch Eurosport in the UK and Ireland, after over three decades of quirky, wonderful broadcasting, and move all of its cycling coverage to the all-encompassing, £31-a-month TNT Sports (hiking the price up by 443 per cent in the process), has been the subject of intense debate among cycling fans, riders, and stakeholders.In part one of this week's road.cc Podcast, Ryan, Dan, and Emily dissect the earth-shattering news, the backlash from across the cycling world, and what it all means for the future of cycling coverage (and the sport itself) in the UK and Ireland.And in part two, road.cc's tech editor Mat Brett sits down for a chat with one of those high-profile cycling figures set to be directly affected by this new, monopolised cycling media landscape, especially after July's last (for the foreseeable future, anyway) free-to-air Tour de France on ITV4 – four-time Tour stage winner-turned-ITV commentator David Millar.The former Garmin rider chats about his new role as brand director at premium bike manufacturer Factor, his “geeky” love of bikes, and the “death by a thousand cuts” demise of his clothing brand CHPT3 last year. Millar also assesses the recent safety debates in pro cycling, from yellow cards and gear restrictions to airbags, and concludes that the key to making the “inherently dangerous” world of bike racing safer could be “empowering” the peloton to self-police and respect each other.
One-off housing in rural Ireland is “inherently unsustainable”, according to experts whose warnings were ignored by Government months before Storm Éowyn left thousands in isolated areas without power. We discuss further with Brendan O'Sullivan is Head of University College Cork's Planning School and one of the report authors.
One-off housing in rural Ireland is “inherently unsustainable”, according to experts whose warnings were ignored by Government months before Storm Éowyn left thousands in isolated areas without power. We discuss further with Brendan O'Sullivan is Head of University College Cork's Planning School and one of the report authors.
What's success to you? How does your own psyche hold you back? Let's dive deep into musical meaning, and tackle the hardest question of them all: Does the world really need more music? For 30% off your first year with DistroKid to share your music with the world click DistroKid.com/vip/lovemusicmore Want to hear my music? For all things links visit ScoobertDoobert.pizza Subscribe to this pod's blog on Substack to receive deeper dives on the regular
Take a Network Break! Guest co-host John Burke joins Drew Conry-Murray for this week’s analysis of tech news. They discuss a string of serious vulnerabilities in Wavlink Wi-Fi routers, Fortinet taking a one-two security punch, and CISA director Jen Easterly calling out US hardware and software companies for being “inherently insecure.” Microsoft and Google put... Read more »
Take a Network Break! Guest co-host John Burke joins Drew Conry-Murray for this week’s analysis of tech news. They discuss a string of serious vulnerabilities in Wavlink Wi-Fi routers, Fortinet taking a one-two security punch, and CISA director Jen Easterly calling out US hardware and software companies for being “inherently insecure.” Microsoft and Google put... Read more »
Take a Network Break! Guest co-host John Burke joins Drew Conry-Murray for this week’s analysis of tech news. They discuss a string of serious vulnerabilities in Wavlink Wi-Fi routers, Fortinet taking a one-two security punch, and CISA director Jen Easterly calling out US hardware and software companies for being “inherently insecure.” Microsoft and Google put... Read more »
Underground Feed Back Stereo x Brothers Perspective Magazine Broadcast
Underground Feed Back Stereo - Brothers Perspective Magazine - Personal Opinion Database - inherently racist colonial oppressors Black August Resistance Uprising against white aggression in Montgomery Alabama in 2023. Black People suffer in a place many are void of Self Awareness and Dignified Liberation. These europeons stole the land by killing the natives of lands but not to share with the original inhabitant or those they enslaved. These tyrants are negative to the core and cant do good. The fight is to know what an oppressor is and how a system operates from this oppression. The euro colonizers designs all the laws to neglect BLACK People from benefiting from the Land. The Black people are enslaved property on stolen land not able to benefit from the life they live! The payback for such atrocities can never be forgiven. Its the mind you must maintain against colonial genocide. This also happens with the endless rejection letters from art galleries etc. No respect to you! Sound Art? Black People Dont Benefit from Slavery! Tune in to these educated brothers as they deliver Personal Opinions for Brothers Perspective Audio Feedback #Reparations #diabetes #75dab #WilliamFroggieJames #lyching #basketball #nyc #fakereligion #war #neverapologize #brooklyn #guncontrol #birthcontrol #gentrification #trump #affirmitiveaction #criticalracetheory #tennessee #stopviolence #blackmusic #marshallact #music #europeanrecoveryprogram #chicago #sense #zantac #rayygunn #blackjobs #southsidechicago #blackart #redlining #maumau #biko70 #chicago #soldout #dei #equality #podcast #PersonalOpinionDataBase #protest #blackart #africanart #gasprices #colonialoppressors #undergroundfeedbackstereo #blackpeople #race #womansbasketball #blackjesus #colonialoppression #blackpeopledontbenefitfromslavery #Montgomery #alabama #foldingchairs #blackrussianjesus #gaza #brothersperspectivemagazine #art #slavery brothersperspective.com undergroundfeedbackstereo.com feat. art 75dab
Pat, Zach, Rick, and Chance announce the nominees for the Tuggys 2024 and commence the draft for their Fantasy Critic League for 2025!
Plus, Trump's anti-immigration plans face resource challenge at the state level
In this special health-focused round-up, Lesley and Brad revisit conversations with four inspiring guests: Uma Naralkar, Jenn Pike, Celeste Holbrook, and Jenny Swisher. From understanding your menstrual cycle and hormones to embracing pleasure and advocating for yourself, this episode delivers practical insights to help you live your healthiest life.If you have any questions about this episode or want to get some of the resources we mentioned, head over to LesleyLogan.co/podcast. If you have any comments or questions about the Be It pod shoot us a message at beit@lesleylogan.co. And as always, if you're enjoying the show please share it with someone who you think would enjoy it as well. It is your continued support that will help us continue to help others. Thank you so much! Never miss another show by subscribing at LesleyLogan.co/subscribe.In this episode you will learn about:The connection between nutrition, movement, lifestyle, and mindset for optimal health.Understanding the four phases of the menstrual cycle and how they affect daily life.Shifting perspectives on intimacy to find pleasure and reduce stigma.How to advocate for your health by asking the right questions and knowing your body.Episode References/Links:Ep. 25 ft. Uma Naralkar - https://beitpod.com/ep25Uma's Website https://omwithatwist.com/Ep. 55 ft. Jenn Pike - https://beitpod.com/ep55The Hormone Project: https://jennpike.com/thehormoneprojectEp. 85 ft. Celeste Holbrook - https://beitpod.com/ep85Website: https://www.drcelesteholbrook.com/Ep. 139 ft. Jenny Swisher - https://beitpod.com/139SYNC Your Life Podcast: https://jennyswisher.com/podcast/ If you enjoyed this episode, make sure and give us a five star rating and leave us a review on iTunes, Podcast Addict, Podchaser or Castbox. DEALS! DEALS! DEALS! 
DEALS!Check out all our Preferred Vendors & Special Deals from Clair Sparrow, Sensate, Lyfefuel BeeKeeper's Naturals, Sauna Space, HigherDose, AG1 and ToeSox Be in the know with all the workshops at OPCBe It Till You See It Podcast SurveyBe a part of Lesley's Pilates MentorshipFREE Ditching Busy Webinar Resources:Watch the Be It Till You See It podcast on YouTube!Lesley Logan websiteBe It Till You See It PodcastOnline Pilates Classes by Lesley LoganOnline Pilates Classes by Lesley Logan on YouTubeProfitable Pilates Follow Us on Social Media:InstagramThe Be It Till You See It Podcast YouTube channelFacebookLinkedInThe OPC YouTube Channel Episode Transcript:Lesley Logan 0:00 Welcome to the Be It Till You See It podcast where we talk about taking messy action, knowing that perfect is boring. I'm Lesley Logan, Pilates instructor and fitness business coach. I've trained thousands of people around the world and the number one thing I see stopping people from achieving anything is self-doubt. My friends, action brings clarity and it's the antidote to fear. Each week, my guest will bring bold, executable, intrinsic and targeted steps that you can use to put yourself first and Be It Till You See It. It's a practice, not a perfect. Let's get started. Lesley Logan 0:42 Welcome back to Be It Till You See It. You guys, we are continuing our, what do you call it? A round up, babe? You call it collection?Brad Crowell 0:49 Yeah, we call it the December round-up.Lesley Logan 0:51 Yeah. It's basically like a reflection review. And this particular episode has four of our favorite guests that have to do with health. We have these, have had multiple episodes that have to do with health.Brad Crowell 1:03 Many, many, many. Lesley Logan 1:04 Many. And so we are going to span the wide ranging topic of health, which can be a lot of things. We've got the tripod of health. We've got hormones in this one. We're gonna have sex in this one. Brad Crowell 1:13 Yeah, food is as part of the tripod. 
Lesley Logan 1:15 Yes, yes. We got lots of stuff so. Brad Crowell 1:18 Fitness, of course. Lesley Logan 1:19 So if you have been wondering, what health episode should I listen to during this chaotic month of December when most of my podcasts aren't listing anything new? The Be It Pod has given you four awesome ones, and we'll link even the numbers. You can go back and listen to the full interview in our catalog when you're ready.Brad Crowell 1:38 Let's dig in the first episode that we're gonna talk about today, that we're bringing back is episode number 25.Lesley Logan 1:45 Twenty-five.Brad Crowell 1:45 Twenty-five all the way back towards the very beginning.Lesley Logan 1:49 It's like 2022.Brad Crowell 1:51 We had a chance to interview Uma Naralkar, who talks a lot about food and nutrition, and we have two sections of this that we thought were really spectacular. So.Lesley Logan 2:07 Yeah, so first up, I really, I thought it was really cool and vulnerable that she talked about when she moved to the US and what the food was like, and how that challenged her and got her interested in what she has become known for, and being a nutritionist and things like that. So I'm really excited for us to hear her story of moving to the US.Brad Crowell 2:27 Yeah, so, and also she talked about this, her process of how she works with her clients, and she created something called the Tripod of Optimal Health. And I'm not going to tell you what it is, because you're going to hear it just after this. So tune in.Uma Naralkar 2:41 The biggest difference for me was the food, right? So in India, we have a lot of health. Inherently, there's health and cooks and food is never something that I had to even think about. So that's the reason why it was always so well -balanced and healthy, because it was like home-cooked Indian food and all the beautiful dals and vegetables, and it was primarily vegetarian. We ate meat on the weekends as like a treat. 
Dessert would always be homemade, something made in ghee, like, very, very like, decently portioned. And I came to America where everything was supersized, right? And I was a student. And, I mean, I was, first, it was shocking, then it was exciting, and then it was kind of like, I didn't have a choice. I was hungry, and I had to eat, and I was a student, so it was like, McDonald's and all the other and it was truly exciting, I have to say, in the beginning, because I was like, what is going on? Why are these people eating so much? But it was a huge adjustment. And you know, when you're asking me about how I, you know, the thing that I had to kind of like, get over and just be like, I'm going to embody this. I am. You know, the book Atomic Habits. Have you read that?Lesley Logan 4:01 Yes. Uma Naralkar 4:01 James Clear. He talks about shifting your identity to who you want to be. Do you remember that part of the book? What he's saying is that if you, you know you, if you want something, if you truly believe that you want something, you need to believe that you have it, and you need to shift your identity in the sense that you know I am a confident 20 year old girl in the United States, where I don't know shit about this country and I truly don't understand, have the words that they use. And at 20, was I clear about what I'm saying now? No, not at all, because it was nerve-racking. And the reason why I'm bringing it up is because the biggest obstacle, apart from the food, my biggest challenge, was speaking, or just speaking out in class, or just raising my hand, or just standing in front of an audience and saying, like anything, it was something that I didn't grow up with. In India, you never get an opportunity to speak anything. Everything is crowded and they don't have time for anybody speaking. So I think it was a true challenge, and it sounds so, it doesn't sound like a big deal because my children, both of them, grew up here. 
They're Californians, and, you know, I can see how speaking is so inherent, right? Like you're in a group setting, or if you're in a big crowd, just saying what you feel is pretty standard. First off, yes, to therapy. I think all kinds of therapy is, I appreciate all of it. And I think people, it's still, it's very interesting. Still, people have a lot of resistance to see a therapist or to, you know, just to open up and talk to someone else about what's going on. So yes, to therapy, but more than that, yeah, nutrition, what you're eating, is going to be foundational movement and how active you are and what you're doing there, as well as your stress levels, your sleep, all that, I think ties in. It is pretty holistic. I don't think it's one or the other. And I have a lot of really fit clients who are like, I mean, as fit as they can be, who are miserable, who are so unhappy, who are, who are they like, constantly looking for ways to, you know, get to the next level. And, quite frankly, they don't even know what the next level is. So I think it's, everyone's very different. And for one person, maybe it's like, you know, your nutrition is seriously lacking, and we need to make some switches so that you start, like, having a better relationship with food. But for someone else, it might just be something as simple as, you know, like doing yoga or getting out in nature, someone who's like, stuck in front of their computer all day and doesn't even like, realize it like, for example, like the best, I think the best example I can give is like being in a casino, right? Like, in inside a casino, like, how clever is that? It's like the lights are always the same, it's always bright, it's always entertaining. There's enough blue light to kick the melatonin out, so you're always in that cortisol rush. They want that because they want you to play. But that's how we are pretty, pretty much living our life like, like we're in a casino, right? 
Because we're indoors, we are in front of the computer, then we are watching something, and then we expect to have a good night's sleep. So I feel like it's, it's just, it all ties in, and it's not one thing I call it, I call it the Tripod, actually, of Optimal Health, which is what you're eating, what your movement, your life activity, your lifestyle, and then your mental health, your mindset, right? They all tie in. And then your health is sort of like sitting on that tripod. So if one of those legs is like wobbly, then the whole thing is going to collapse.Lesley Logan 7:59 So that was Episode 25 and we would love to know, we would love for you to share with us what part of the Tripod of Health that you're going to work on as we come into 2025 and no, it won't be a New Year's resolution. It will just be a thing that you're doing. Now we have Episode 55, so we're going way back in the catalog today's episode, and it's how are hormones dictating your life? And one of the things. Brad Crowell 8:21 With? Lesley Logan 8:21 With Jenn Pike. Brad Crowell 8:22 With Jenn Pike. Lesley Logan 8:23 Yeah, one of the things that we talk about that I'm really excited for you to talk, like, here is that the four different phases in your cycle, and this is really, really important, because I have a lot of people ask me a lot of questions about perimenopause. I want more episodes on this. But if you are not perimenopausal yet, or maybe you still have your cycle, but you're kind of, you know, that's what perimenopause is. You got to know what parts of the cycle you're in, because it affects how you work out. It affects what you should be eating. I had, there's some dream guests on my list that I want to have in future episodes, but we need to know these parts for those guests to make any sense. 
So like, dive into that first part with the different phases of your cycle, even if you think you know them.Brad Crowell 9:00 Yeah, the second part of this episode, though, I thought was really beneficial, was talking about educating both men and women on this. So I remember listening to this the first time, you know, a couple years ago, and I was taking notes because I knew none of this. I don't know how (inaudible)Lesley Logan 9:17 And you have a mom and a sister.Brad Crowell 9:18 And I went through high school and college, and never learned any of this stuff. Lesley Logan 9:22 And you had a wife before me. Brad Crowell 9:23 And I did have a wife before you, still didn't know any of this stuff. So, so the, she, Jenn talks about stigmatism, shame and embarrassment and the value of educating her son. I think she has sons. I can't remember. Son. She's one son. She's talking about how he knows just as much about the female body as her daughter and the value like, they, as a couple, decided to educate their son on purpose to avoid stigmatism and shame and embarrassment. So I thought that was really great.Lesley Logan 9:57 I love it. I love her. I love her for that already.Brad Crowell 10:00 Yeah. It's a win. There you go.Jenn Pike 10:03 So we go through four different phases in our cycle. So our cycle and our period are not the same thing. Your cycle is from day one of your bleed all the way through until you have your next bleed. That's a full cycle. Most women, it's going to range anywhere from 23 to 35 days. And in that cycle you have four different phases. So you have the phase that you bleed in, which is your actual period, when you come out of your period, you actually have what's referred to as the first phase, which is the follicular phase. And this is where your body, your hormones and estrogen and testosterone are starting to climb. Your uterine lining is starting to thicken again. This is typically where we actually feel more connected to our body. 
We do well with the estrogen surge. We feel clear, more focused, energized, happier. We're like gung-ho. We want to create new projects. We're super, you know, on point. Leading into ovulation, ovulation comes, it tends to be much more of a you know, I want to put myself out there. Confidence can peak a little higher, sex drive, typically. And the way I'm painting this picture, this isn't going to be for every woman. I'm just going to kind of give one example, and then I'll apex it on the other side. Once ovulation happens, you've now had this dip in estrogen and testosterone, and your luteinizing hormones increase as long as you've ovulated, your progesterone also increases. And that actually is a much more calming hormone. It helps us to integrate. It brings us into a place that is much more reflective, in that luteal phase, which are the couple weeks coming into your period. It's a time to really look at like what is working and what is not. It's time to finish projects. It's a time when you can feel really connected to your body, and then this is one of the times where you'll also know if things are out of balance, if that like seven to 10 day period of time before you bleed again, your mood's all over the place, you're emotional, your sleep is off, your gut is off, you're spotting. Your breasts are tender, like you're just like, oh my God, here we go again. My skin's breaking out. All the things are happening. That's a really strong indication that something is out of balance in your system. And it could be that you didn't ovulate, that you have lower progesterone, you have too much estrogen, it could be that all the hormones are sitting flat. It could be that testosterone and DHA is too high. So this is why testing and testing at the appropriate time of the month is such a valuable tool for women, because when you see it and someone's explaining it to you you're like, oh my gosh, I feel like you just described me to a tee. 
Yeah.Lesley Logan 12:34 No, I'm like, I'm like, sitting here, and I'm like, taking it all in, and I, like that whole part where it's like, that 7, 10, days before you just said, like, this is what you're gonna feel like, but this is also you could feel a look where things are out of whack. And I think we're taught, or at least I felt like, I felt like that's just the normal thing, like things are out of whack. And, yeah, what it sounds like, is it, and I did experience this, I did seed cycling for a long time because I felt like my swings were too big. And I was like, y'all, my boobs are a little bigger because of COVID and age, but they were very small back then. And I was like, they are too small to be this tender. Like this is not fun for me. And so I heard about seed cycling, and I did it consistently for three years. Not only did I literally make myself like clockwork with my cycle, I stopped breaking out. I don't have tenderness, and I've weaned off of it, and it hasn't been an issue, but I did notice that difference in that time before, it was almost like my period was a surprise each time, because I was like, oh, I didn't even know it's coming. (inaudible) Was feeling so good. That's so fascinating. Okay, so thank you for walking us through that. I think that it's helpful to know, like, just when you have the information, like you said, you just can expect things a little different, and you can know more about how you should be feeling, as opposed to like. Why do I feel like this versus yesterday? I felt better.Jenn Pike 13:55 I just want to say something quick on that before you go and you're talking about, you know, doing the recap with your husband. So I have two kids, a girl and a boy. My son knows just as much about the female body and cycles as my daughter, and that's on purpose, because part of the stigmatism and the shame and embarrassment ends when we stop excluding men and boys from the conversation as well. 
It, you know, it's like there's going to come a time in a boy's life where he's gonna, you know, you're either gonna be around a woman or your girlfriend or whatever it is, and you need to be able to understand what she's going through. And as I always say to my son, like bud, you wouldn't even be here if it weren't for our bodies doing this. So you should be darn grateful. Brad Crowell 14:33 All right, so that was Episode 55 with Jenn Pike. Hope you found it super helpful and educational. Lesley Logan 14:40 Her entire episode is so, has so, it's chock-full of information. You can, you could do, if you just used her episode to figure out what your health changes are for 2025 you would have enough to work on.Brad Crowell 14:54 Yeah, she's got a lot going on, and it's amazing. All right, next up we got Episode 85 let's talk about sex baby with Celeste Holbrook.Lesley Logan 15:02 I'm obsessed with her. Just so you know, I'm actually having a call with her tomorrow morning (inaudible) on the day that I, because I just love her. Brad Crowell 15:09 Well, she basically talked about, it's kind of a tack on to what we were just talking about with with Jenn Pike, about removing shame and embarrassment. This is about destigmatizing sex and the language around sex. And one thing she said that I thought was amazing was she pets her dog because she wants to feel calm. She rides her bike because she wants to, well, feel free. She has sex because she wants to feel pleasure, right? And it's like, we make it this taboo, weird, awkward thing, and she's like, but it shouldn't be that, you know? And she talks, she goes really in-depth about how, you know, how you might find pleasure in sex.Lesley Logan 15:48 Just so you know, I loved her so much we had her on the podcast twice. And we actually talked about bodies and all that stuff. So she's just fabulous. 
And especially for any of you who were raised in purity culture, this episode is extremely freeing and informative.

Brad Crowell 16:04
Yeah, yeah. So enjoy.

Celeste Holbrook 16:06
I always think about what we want to feel in sex. Because everything that we do behaviorally, we do it because we want to feel something. So, like, I pet my dog because I want to feel calm. I ride my bike because I want to feel free. I do certain sexual activities because I want to feel pleasure, connection, erotic, intimate, loving, whatever it is that I want to feel in sex. And so start with the feeling. So, write down what my dream sexual experience would feel like, and then write those words down, and then you can work your way backwards, like, okay, if I want to feel confident, what do I need to do behaviorally in order to feel confident? Maybe I need to learn more about my body. Maybe I need to establish a better relationship with my vulva and, like, clitoris. Maybe I need to have a masturbation practice. Maybe I need to read some more books, right? So start with what you want to feel and then work your way backwards. I want to feel connected. Okay, maybe I need to work on communication styles with my partner. Maybe I need to learn how to ask more for what I want, and maybe I don't know what I want. So maybe I need to take one more step back and figure out what I like and what I don't like, and do some more creative exploration in sex, you know. So I like to start out with that list of what we want to feel, because then you can build behaviors behind that.

Lesley Logan 17:23
All right. So that was Celeste Holbrook's Episode 85 at the Be It Pod, if you want to go listen to the whole thing.

Lesley Logan 17:31
Up next, we actually have Episode 139, Cycle Thinking Fitness & Balancing Your Hormones with Jenny Swisher. This is really, so again, we're having hormones, this is a totally different thing.
So, we're actually going to be talking more about advocating for yourself, ladies, but also gents listening, we always have a few good men. We often have been raised that, like, the doctors know best, but really you know your body best, and I think that this episode is one of those reminders that you can be your own best doctor. When you know your body best, you can actually advocate for yourself and get the best health for yourself, especially for your hormones. And Jenny Swisher is really, I mean, like, what she's been doing since being on the podcast, really helping people understand their hormones, has been pretty epic.

Brad Crowell 18:19
I just want to say that while we don't know medicine, because we're not doctors and didn't dedicate ourselves to study that, there generally is logic behind the medicine. So if you're being given advice that is completely illogical or confusing to you, before you just say yeah, let's do it, ask them to explain that further and understand it more. And it's okay to say that doesn't make sense to me.

Lesley Logan 18:45
We didn't put the clip here. But if you want more, if you're inspired to be an advocate for yourself, definitely listen to Lindsay Moore's episode on being an advocate. And I do think, Brad, you bring up a good point, like there is logic to it, but also they have to listen to you. At least in the States, they're not allowed to leave the room until you're done and you say, I have no more questions. And it is a practice. It's called a medical practice, and so they're practicing just like you'd have a Pilates practice, and so you should not feel ashamed or embarrassed to be like, hmm, I think I'm going to get another opinion on that.

Brad Crowell 19:25
Yeah, yeah. That's okay.

Lesley Logan 19:27
Yeah. So here is Jenny Swisher to inspire you to be your own best doctor.

Jenny Swisher 19:31
I think you have to be your own best doctor.
And I think, but you have to go into the appointment knowing that, I mean, I don't know about anybody listening, but I know for me, I feel like I'm an expert in sitting in doctor's offices after years of doing it. I felt like I got to the point where they were just going to diagnose or give me whatever I was leading them to. You know what I mean, like you're leading the doctor to the eventual answer. And so the more hormone literate you can become about your own body and your own cycle, for example, and in the case of hormone health, the easier it's going to be for the doctor to make those connections or to really, truly help you. I find that most people don't have the awareness that they need, the self-awareness and the body awareness of their own body, to be able to go and get a proper answer from a doctor. And so it starts with that. But then when you are in that situation, when you go into it knowing, like, this is how my body is supposed to operate, this is how it's supposed to feel, these are the things that I've learned about hormone health, and I'm low in energy, or I'm this, or I'm that, then you can go into the appointment and say hey, I think this is how I'm supposed to be feeling. But instead, I feel this way. What are some things that we can look into?

Lesley Logan 20:35
All right. That was Episode 139 with Jenny Swisher, so you can go and listen to her full episode, if you'd like, here in the Be It catalog. Again, this is a round-up of just a few of our favorite health episodes, and we hope that you're enjoying getting just some reminders of some of the epic guests we've had, or maybe we're piquing your interest in a topic that you're wanting to go back and learn more about. All of our guests are pretty amazing. And I can't believe that was like almost 300 episodes ago. Some of these are like 400 episodes ago. But also, like, I still take these tips. I still remember these people's tips in my daily life.
I reflect back upon them, and so they really meant a lot to me.

Lesley Logan 21:18
I'm Lesley Logan.

Brad Crowell 21:19
And I'm Brad Crowell.

Lesley Logan 21:20
Thank you so much for being a listener of the Be It Till You See It Podcast. We hope that you loved this. Send one of these episodes to a friend who needs it. Especially right now, you know, sometimes we think we have to do holiday gifts. And really, you can actually be like, here's someone to listen to on your long drive to go see your family in a chaotic time, you know. Like, these can be the thing that keeps people warm at night. Really, they can curl up and listen to a good podcast. And so, until next time.

Brad Crowell 21:46
Bye for now.

Lesley Logan 21:47
No. Until next time, Be It Till You See It.

Brad Crowell 21:52
Oh.

Lesley Logan 21:53
And then.

Brad Crowell 21:53
So, until next time.

Lesley Logan 21:55
Be It Till You See It.

Brad Crowell 21:58
Bye for now.

Lesley Logan 22:01
That's all I got for this episode of the Be It Till You See It Podcast. One thing that would help both myself and future listeners is for you to rate the show and leave a review, and follow or subscribe for free wherever you listen to your podcast. Also, make sure to introduce yourself over at the Be It Pod on Instagram. I would love to know more about you. Share this episode with whoever you think needs to hear it. Help us and others Be It Till You See It. Have an awesome day. Be It Till You See It is a production of The Bloom Podcast Network.
If you want to leave us a message or a question that we might read on another episode, you can text us at +1-310-905-5534 or send a DM on Instagram @BeItPod.

Brad Crowell 22:43
It's written, filmed, and recorded by your host, Lesley Logan, and me, Brad Crowell.

Lesley Logan 22:48
It is transcribed, produced and edited by the epic team at Disenyo.co.

Brad Crowell 22:53
Our theme music is by Ali at Apex Production Music and our branding by designer and artist, Gianfranco Cioffi.

Lesley Logan 23:00
Special thanks to Melissa Solomon for creating our visuals.

Brad Crowell 23:03
Also to Angelina Herico for adding all of our content to our website. And finally to Meridith Root for keeping us all on point and on time.

Support this podcast at — https://redcircle.com/be-it-till-you-see-it/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
In this episode, Steph & Steph take it back to basics and tackle a question humanity has been wrestling with forever: what does it really mean to be a good person? It's a simple yet profound question—one we don't discuss nearly enough. So, we decided to kickstart the conversation and explore what everyone thinks. We dig beyond the surface (as always) to reflect on how and why good people act the way they do. We ask if it's possible to be both good and bad at the same time and wonder if most of us are a mix of both, depending on the moment or circumstance. We'd love to hear your take on this and the episode as a whole! Drop a comment on social or your favorite listening platform and join the conversation.
For review:

1. Israel Alert for Iranian Weapon Transfers to Hezbollah. The IDF on Monday said it would ensure Iran does not smuggle weapons from Syria to Hezbollah in Lebanon as the Islamic Republic sends reinforcements to its ally Syrian President Bashar Assad to counter an ongoing rebel assault.

2. NATO Sec General Talks Ukraine Negotiations. NATO Secretary-General Mark Rutte: "The front is not moving eastwards. It is slowly moving westwards," Rutte said. "So we have to make sure that Ukraine gets into a position of strength, and then it should be for the Ukrainian government to decide on the next steps, in terms of opening peace talks and how to conduct them."

3. French-German defense firm KNDS to get new CEO. Formed in 2015, KNDS is a joint venture between France's Nexter and Germany's Krauss-Maffei Wegmann (KMW), two of Europe's largest land system manufacturers. The company makes the Leopard 2 main battle tank, Puma infantry fighting vehicle, and PzH 2000 self-propelled howitzer (155mm).

4. US Army Autonomous Precision Strike Missile Variant. The US Army is developing a fifth Precision Strike Missile variant that it could potentially launch from an autonomous launcher to hit targets beyond 1,000 km.

5. USMC 3d Marine Littoral Regiment (Hawaii) receives an unspecified number of Navy-Marine Expeditionary Ship Interdiction Systems (NMESIS). In 2021, the Marine Corps identified the procurement of 14 NMESIS batteries, composed of 18 launchers each. These unmanned launchers are equipped with two low-observable Naval Strike Missiles capable of reaching targets 185 kilometers away.

6. GAO Reports Poor Condition of US Navy Amphibious Fleet. Half of the Navy ships the Marine Corps would use to make amphibious assaults are in "poor condition," and some of the vessels have been unavailable for operational or training use for years at a time, according to a pointed new watchdog report.

7.
Indo-PACOM Combatant Commander (US Navy Admiral Samuel Paparo) concerned about transfers of strategic, long-range weapons out of the US arsenal. "Inherently, it imposes costs on the readiness of America to respond in the Indo-Pacific region, which is the most stressing theater … because [China] is the most capable potential adversary in the world," he stated.
It's understandable why people would think, “This place is full of Christians. It must be one of the safest places in my community.” But churches are not inherently safe. Quite the opposite. Churches have a target on them! You are a spiritual target for the powers of darkness. Sam Rainer and Matt McCraw discuss some key issues involving church safety. The post Why Churches Are Not Inherently Safe Places appeared first on Church Answers.
Want to reach out to us? Want to leave a comment or review? Want to give us a suggestion or berate Anthony? Send us a text by clicking this link!

Embark on a journey with us as we unravel the rich tapestry of Catholicism, interwoven with personal anecdotes and profound reflections. Picture yourself on a high-speed bullet train from Florence to Rome, surrounded by laughter and camaraderie, as we share the excitement of our upcoming Italian escapade. Our discussion promises to enlighten, as we explore the vibrant diversity within the Catholic Church and compare it to the seemingly homogeneous landscape of American mega churches.

Moving from light-hearted travel tales to thought-provoking issues, we tackle the serious topic of the Catholic Church's response—or lack thereof—to cultural challenges. Reflect on the story of a desecrated Virgin Mary statue in Switzerland and the remarkable resilience of the Catholic community. We navigate the complexities of church leadership and historical precedents, pondering the future of Catholicism. Enrich your understanding with insights into art, architecture, and the enduring influence of literary figures like G.K. Chesterton. As we prepare for our pilgrimage, we invite you to reflect on the deeper meanings of faith, unity, and tradition in a rapidly changing world.

Support the show

https://www.avoidingbabylon.com
Merchandise: https://shop.avoidingbabylon.com
Locals Community: https://avoidingbabylon.locals.com
RSS Feed for Podcast Apps: https://feeds.buzzsprout.com/1987412.rss
SpiritusTV: https://spiritustv.com/@avoidingbabylon
Odysee: https://odysee.com/@AvoidingBabylon
ARE WE BORN RACIST? ARE WE INHERENTLY RACIST? Is bigotry in our DNA, a remnant of our fear of "the other" way back when that was necessary? If so, why do some battle with their instincts while others embrace them?

Humans are the most cooperative species on the planet – all part of a huge interconnected ecosystem. We have built vast cities, connected by a global nervous system of roads, shipping lanes and optical fibers. We have sent thousands of satellites spinning around the planet. Even seemingly simple objects like a graphite pencil are the work of thousands of hands from around the world, as the wonderful essay I-Pencil, quoted above, by Leonard Read describes.

Yet we can also be surprisingly intolerant of each other. If we are completely honest, there is perhaps a little bit of xenophobia, racism, sexism and bigotry deep within all of us, if we would only allow it. Luckily, we can choose to control and suppress such tendencies for our own wellbeing and the good of society.

When the media, and especially people we trust, talk in such a way, it has a profound effect on our receiving minds. It can even shape our beliefs in what we might think are purely rational issues. For example, the belief in whether humans are causing climate change is strongly associated with US political party membership. This is because we tend to adopt a common position on a topic to signal we are part of a group, just like football fans wear certain colors or have tattoos to show their tribal loyalty. Even strong individuals who stand up to oppressive regimes typically have shared ideals and norms with other members of a resistance movement.

This tribalism can all feel very visceral and natural because, well, in a way, it is. It fires up the primal parts of our brain designed for such responses. Yet, there are other natural attitudes, such as compassion and consideration for others, that can be suppressed in such circumstances. Imbalanced cultures produce imbalanced brains.
This combination of nature and nurture shaping our attitudes and behavior is apparent in many human characteristics, and unpicking some of these examples can help us see opportunities to steer the process.

Consider the tendency to become overweight in modern society. In premodern times, sugary and fatty foods were rare and valuable for humans. Now, they are everywhere. A biological trait – the craving for sugary or fatty foods – which was adaptive in premodern times, has become detrimental and maladaptive.

Surely our modern cultures can protect us from these innate drives when they are unhealthy for ourselves and society? After all, we effectively suppress violent behavior in society through the way we bring up children, policing and the prison system. Instead of acknowledging and protecting us from the innate drive to binge on unhealthy food, however, our modern cultures (in many countries at least) actually exacerbate that particular problem. The result is 2 billion people – over a quarter of the world's population – overweight or obese, while another 2 billion suffer some kind of micronutrient deficiency.

When we understand how our hardwired urges interact with an unhelpful cultural context, we can begin to design positive interventions. In the case of obesity, this might mean less junk food marketing and altering the composition of manufactured food. We can also change our own behavior, for example laying down new routines and healthier eating habits.

Climate change could boost bigotry

But what about bigotry and xenophobia? Can't we simply design the right fixes for them? That may depend on how big the problems we face in future are. For example, growing ecological crises – climate change, pollution and biodiversity loss – may actually lead to more bigoted and xenophobic attitudes.

Rewiring the brain

Thankfully, we can use rational thinking to develop strategies to overcome these attitudes.
We can reinforce positive values, building trust and compassion, reducing the distinction between our in-group and the "other". An important first step is appreciating our connectedness to other people. We all evolved from the same bacteria-like ancestor, and right now we share over 99% of our DNA with everyone else on the planet. Our minds are closely linked through social networks, and the things we create are often the inevitable next step in a series of interdependent innovations. Innovation is part of a great, linked creative human endeavor with no respect for race or national boundaries.

In the face of overwhelming evidence from multiple scientific disciplines (biology, psychology, neuroscience) you can even question whether we exist as discrete individuals, or whether this sense of individuality is an illusion (as I argue in my book The Self Delusion). We evolved to believe we are discrete individuals because it brought survival benefits (such as memory formation and an ability to track complex social interactions). But taken too far, self-centered individualism can prevent us from solving collective problems.

Beyond theory, practice is also necessary to literally rewire our brains – reinforcing the neural networks through which compassionate behavior arises. Outdoor community activities have been shown to increase our psychological connectedness to others. Similarly, meditation approaches alter neural networks in the brain and reduce our sense of isolated self-identity, instead promoting compassion towards others. Even computer games and books can be designed to increase empathy.

Finally, at the societal level, we need frank and open debate about environmental change and its current and future human impacts – crucially, how our attitudes and values can affect other lives and livelihoods. We need public dialogue around climate-driven human migration and how we respond to that as a society, allowing us to mitigate the knee-jerk reaction of devaluing others.
Let's defuse this ticking ethical timebomb and shame those who stoke flames of bigotry beneath it. Instead, we can open ourselves up to a more expansive attitude of connectedness, empowering us to work together in cooperation with our fellow human kin. It is possible to steer our cultures and rewire our brains so that xenophobia and bigotry all but disappear. Indeed, working collaboratively across borders to overcome the global challenges of the 21st century relies upon us doing just that.

------------------------------------------------------------

It is not that there are so many people that are racist; it is the perception of what Racism has been Conditioned into our society to actually be. Asking someone "Where do you work?" when I was young was not an unnatural question, it was a matter of Conversation… Now someone that is hyper PC sensitive could "SOMEHOW" interpret that as "Racist". Or demonstrating Patriotism and not being very tolerant of those that disrespect those that serve and preserve "AS RACIST". I think these hypersensitive PC guys with the orange feet and horn honk noses perhaps should be asking the people that are supposed to be offended "IS this Racist?" To YOU? Would probably find out those people have zero tolerance for disrespecting their flag and their brother, father, sister, cousin that IS Protecting and Preserving… The Left had better get their message straightened out or they're not going to have any voice. The only reason their voice is heard now is because the MSM is owned by a handful of Corporations that want a ONE World Deal. They don't want The USA to be Sovereign… they don't want us to have borders… What the Left has done to the Minorities (they're supposed to care so much about, ha ha)… the MSM has done to the left liberal agenda… they're being used just like the minorities have been used.
Well the Minorities are waking up… My Black friends, My Hispanic friends have BEEN Woke up… they're successful in their business and want the Economy they're now enjoying. Their perception is exactly my perception. I'm white, they're black or brown but, we all have the same thoughts… Give us a Chance and we will succeed… They hated Obama with a Passion not because he was black or half white but, BECAUSE he killed their business… graveyard dead. Those ARE the FACTS, accept them or get used… Go get a job, start a business or just find your happy place BUT if you cannot do any of those things NOW?? Find the nearest Volcano and sacrifice yourself to the Village idiots god. Because… you will never have this opportunity again. Reagan was the last and that was when I was in College… Quit whining and use this CHANCE. The End.
Yes, that's right! Contrary to popular belief, witchcraft doesn't actually have anything (inherently) to do with feminism. Can you be a witch and a feminist? Of course. But being a witch doesn't mean you have to have certain politics, or even have political inclinations and opinions at all - let alone work with them in your spirituality and magic. This episode was inspired by an experience I was very blessed to have recently: being asked to comment upon a book up for potential publication. I'm not naming names (and never will, that's not the point; also, these ideas are widespread), but it has become taken for granted in many spaces that witchcraft and far left politics are one and the same. In this episode I explain why that's not true. Also, a little update on my new poetry book, Rapeseed, and the successful Bodymagic Kickstarter campaign! https://sabrinamscott.com https://instagram.com/sabrinamscott say hi: ceo@sabrinamscott.com
Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.

In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.

The Great Reset: From Text Fields to Rails

Remember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:

* Big prompt field ✅
* Bunch of tools ✅
* Prayer to the LLM gods ✅

Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."

Instead of a giant, intimidating text field, users now build workflows visually:

* Trigger (e.g., "Zendesk ticket received")
* Required actions (e.g., "Check knowledge base")
* Response generation

This isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."

The Surprising Truth About Model Limitations

Here's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck.
Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."

Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:

* Raw capabilities aren't the constraint anymore
* Integration quality matters more than model performance
* User experience and workflow design are the new bottlenecks

The Search Engine Parallel: Why Horizontal Platforms Might Win

One of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00):

"I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."

His argument: agent platforms might follow the same pattern because:

* Agents across verticals share more commonalities than differences
* There's value in having agents that can work together under one roof
* The R&D cost of getting agents right is better amortized across use cases

This might explain why we're seeing early vertical AI companies starting to expand horizontally.
The core agent capabilities - reliability, context management, tool integration - are universal needs.

What This Means for Builders

If you're building in the AI agents space, here are the key takeaways:

* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds
* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools
* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful
* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoption

The Meta Layer

There's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.

Full Video Podcast

Flo's talk at AI Engineer Summit

Chapters

* 00:00:00 Introductions
* 00:04:05 AI engineering and deterministic software
* 00:08:36 Lindys demo
* 00:13:21 Memory management in AI agents
* 00:18:48 Hierarchy and collaboration between Lindys
* 00:21:19 Vertical vs. horizontal AI tools
* 00:24:03 Community and user engagement strategies
* 00:26:16 Rickrolling incident with Lindy
* 00:28:12 Evals and quality control in AI systems
* 00:31:52 Model capabilities and their impact on Lindy
* 00:39:27 Competition and market positioning
* 00:42:40 Relationship between Factorio and business strategy
* 00:44:05 Remote work vs. in-person collaboration
* 00:49:03 Europe vs US Tech
* 00:58:59 Testing the Overton window and free speech
* 01:04:20 Balancing AI safety concerns with business innovation

Show Notes

* Lindy.ai
* Rick Rolling
* Flo on X
* TeamFlow
* Andrew Wilkinson
* Dust
* Poolside.ai
* SB1047
* Gathertown
* Sid Sijbrandij
* Matt Mullenweg
* Factorio
* Seeing Like a State

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.

Flo [00:00:15]: Hey, yeah, thanks for having me.

Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?

Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.

Swyx [00:00:26]: What was your class?

Flo [00:00:27]: I was an elf. I was a magician elf.

Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?

Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just spin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.

Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents. So basically, why are you blowing up?

Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up.
But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think, Alessio, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM-pilled and ChatGPT-pilled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point.
The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.

Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?

Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. So you can really set up your agent any way you want like that.

Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code in English, and then the smaller brains, the GPU poors, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.

Flo [00:04:29]: I love that characterization, put the Shoggoth in the box.
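The trigger-then-mandatory-step pattern Flo describes can be sketched in a few lines of Python. This is an illustrative sketch of the general "agent on rails" idea, not Lindy's actual implementation; `retrieve_from_kb` and `call_llm` are hypothetical stand-ins for a vector search and a model API call.

```python
# Sketch of "agent on rails": the pipeline is ordinary deterministic code,
# and the LLM is invoked at exactly one tightly scoped step
# ("put Shoggoth in a box"). Helper names here are hypothetical.

def retrieve_from_kb(query: str) -> list[str]:
    # Stand-in for a RAG lookup (e.g., vector search over help articles).
    knowledge_base = {
        "refund": "Refunds are processed within 5 business days.",
        "login": "Reset your password via the account settings page.",
    }
    return [text for key, text in knowledge_base.items() if key in query.lower()]

def call_llm(prompt: str) -> str:
    # Stand-in for a model API call; echoes the prompt for demonstration.
    return f"[drafted reply based on: {prompt[:60]}...]"

def handle_zendesk_ticket(ticket_body: str) -> str:
    # Step 1 (trigger) and step 2 (knowledge base consult) always run -
    # we never pray that the model decides to invoke retrieval on its own.
    context = retrieve_from_kb(ticket_body)
    if not context:
        # Deterministic fallback: no LLM call at all.
        return "ESCALATE_TO_HUMAN"
    # Step 3: the only nondeterministic step, with retrieval already done.
    prompt = f"Context: {context}\nTicket: {ticket_body}\nDraft a reply."
    return call_llm(prompt)

print(handle_zendesk_ticket("How do I get a refund?"))
```

The point of the structure is that reliability comes from the surrounding if-this-then-that code, not from hoping the model follows instructions.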
Yeah, we talk about using as much AI as necessary and as little as possible.

Alessio [00:04:37]: And how did you choose between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the LangChains, AutoGPTs of the world, and maybe the flip side of it, which you don't really do, it's like just text-to-agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to tweak it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, convention over configuration.

Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit of black magic for humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me, as a human, as an AGI, I don't understand what they mean. I don't know what they mean. I think whenever you can have a GUI, it is better than having just a pure text interface.

Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use?
I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that needle?

Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could request. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for, I need to be able to write a draft for you. We just try to do it very incrementally like that.

Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up an OAuth flow that humans only want to kind of do once. So we try to jam-pack things all at once, versus what if you could on demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.

Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone.
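The incremental-consent approach described above (ask for the least scopes needed, and re-prompt only when a new step needs more) can be sketched like this. The two scope URLs are real Gmail OAuth scopes; the consent flow itself is stubbed out, and the function names are invented.

```python
# request only the scopes the next step needs; re-run consent when new ones appear
GRANTED: set[str] = set()

def ensure_scopes(needed: set[str]) -> None:
    missing = needed - GRANTED
    if missing:
        # a real app would redirect through Google's consent screen here
        print(f"Requesting consent for: {sorted(missing)}")
        GRANTED.update(missing)

# The email-drafter example: one ask to read mail, a later ask to write drafts.
ensure_scopes({"https://www.googleapis.com/auth/gmail.readonly"})  # read, to draft a reply
ensure_scopes({"https://www.googleapis.com/auth/gmail.compose"})   # write the draft
ensure_scopes({"https://www.googleapis.com/auth/gmail.readonly"})  # already granted: no new prompt
```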
And just like a human assistant, she's going to go back and forth and offer availabilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Workspace, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth, because the new-account thing is kind of a crutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put the Shoggoth in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.

Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.

Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on Holidu, which is like the French Airbnb, if you will. And so I receive these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt, because you'd have to write an HTML parser for this email. And so it's just hard. You may not be able to do it, and it's going to break the moment the email changes.
By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part, where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append-row node, is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.

Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.

Flo [00:10:41]: And you can see here, for every node, you can configure which model you want to power the node. Here I use Claude. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right?
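The append-row node from the reservation logger above, where each purple field is a prompt, boils down to a single structured-output call, as the exchange confirms. A minimal sketch, with an invented schema and the model response faked:

```python
import json

# one prompt per field, mirroring the purple fields in the action config
RESERVATION_SCHEMA = {
    "type": "object",
    "properties": {
        "check_in_date": {"type": "string", "description": "the day the reservation begins on"},
        "amount":        {"type": "number", "description": "the amount of the reservation"},
        "num_travelers": {"type": "integer", "description": "the number of travelers"},
    },
    "required": ["check_in_date", "amount", "num_travelers"],
}

def extract_reservation(email_body: str) -> dict:
    # stand-in for a single structured-output model call; a real implementation
    # would pass RESERVATION_SCHEMA as the response format to the model
    fake_model_json = '{"check_in_date": "2024-07-01", "amount": 200, "num_travelers": 2}'
    row = json.loads(fake_model_json)
    assert all(k in row for k in RESERVATION_SCHEMA["required"])
    return row

row = extract_reservation("Hey, you made 200 bucks. Someone booked your place.")
print(row)  # becomes one appended spreadsheet row
```

No HTML parser, and nothing breaks when the email's layout changes, because the model reads the email rather than its markup.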
So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is actual coaching feedback that I received from Lindy. This was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, what did you find too technical about my explanation? And Lindy restored the context. And so she basically picked the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting I was in. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not present, right? This was a teammate. His Lindy meeting recorder posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy, because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy.
And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replies. She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. There's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistant automations, like meeting scheduling and so forth.

Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?

Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this, so you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.

Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands and thousands of things. That is a good question.

Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that.
So I just came out of a call with a member of the Llama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we send to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either in pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT-3.5 powering Lindy's agents. I suspect it's probably not as necessary in the Claude 3.5 Sonnet days, but I prune the memories. Yeah, okay.

Swyx [00:15:05]: The reason I ask is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word, to only memorize this when I say memorize this? And otherwise, don't even bother.

Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,

Swyx [00:15:38]: there is a rule where I'm like,

Flo [00:15:39]: aha, I can email my inbox processor Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward. Or I want you to alert me on Slack when I have this kind of email. It's really important.
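The memory module and the pruning habit described above can be sketched as a toy store: the agent saves short rules as memories, only the most recent ones are injected into context, and the owner periodically prunes trivial facts. The structure and names are hypothetical, not Lindy's implementation; the saved values are placeholders on purpose.

```python
from dataclasses import dataclass
import time

@dataclass
class Memory:
    text: str
    created_at: float

class MemoryStore:
    def __init__(self, max_memories: int = 50):
        # cap kept deliberately low: agents get confused with too many memories
        self.max_memories = max_memories
        self.items: list[Memory] = []

    def save(self, text: str) -> None:
        self.items.append(Memory(text, time.time()))

    def prune(self, keep) -> None:
        # the owner periodically deletes trivial facts, keeping high-quality ones
        self.items = [m for m in self.items if keep(m)]

    def for_context(self) -> list[str]:
        # only the most recent memories are injected into the system prompt
        return [m.text for m in self.items[-self.max_memories:]]

store = MemoryStore(max_memories=3)
store.save("Meeting link: <elided>")
store.save("Home address: <elided>")
store.save("No dogs.")
store.save("Archive newsletters going forward.")
print(store.for_context())  # only the 3 most recent make it into context

store.prune(lambda m: "Archive" in m.text or "No dogs" in m.text)
print(len(store.items))  # 2
```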
And so you can see here, the prompt is, if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it the memory-saving skill. And yeah.

Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receiving thing for Lindy. And so then Lindy can process your physical mail.

Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use like Earth Class Mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.

Swyx [00:16:35]: And then the other product idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65-point Lindy, right?

Flo [00:16:43]: What's a 65-point?

Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.

Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,

Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.

Flo [00:16:59]: Give them a score. Give them a score.

Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?

Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-Rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.

Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?

Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to...
That's just plain text.

Swyx [00:17:35]: That's right.

Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.

Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?

Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure readings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.

Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between paths. How do you think about what's a Lindy versus what's a sub-function of a Lindy? Like, what's the hierarchy?

Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done, and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to.
Sometimes when a task is really more specialized, so for example, I have this summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.

Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.

Flo [00:19:36]: Lindys can work together, absolutely.

Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?

Flo [00:19:46]: So this one wakes herself up every week. So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just read these summaries. Yeah.

Alessio [00:20:09]: We should make a Latent Space Lindy. Marketplace.

Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.

Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very opinionated tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the weekdays in which I meet? How many time slots do you return?
What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy, because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guidelines and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.

Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy stop? Where does a company start? Or maybe how do you think about the complexity of these tasks, when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?

Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. That's Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Google is such a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of them.
I have a thesis, and it's a really cool thesis for Lindy: the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform, because that way your agents can work together. They're all like under one roof. That way you only learn one platform, and so you can create agents for everything that you want. And you don't have to like pay for a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search, in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.

Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI-enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.

Flo [00:23:34]: Oh, no, I'm not saying it's either-or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.

Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.

Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first, because it's just easier to build.
It's just much, much, much harder to build something horizontal. Cool.

Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it, and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.

Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.

Swyx [00:24:24]: I think that's the flywheel, that you built the platform where creators see value in allying themselves to you. And so then, you know, your incentive is to make them successful so that they can make other people successful, and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.

Flo [00:24:39]: Yeah, yeah. I mean, community is everything.

Swyx [00:24:41]: Are you doing anything special there? Any big wins?

Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.

Swyx [00:24:49]: I would say, from having some involvement in the no-code community, that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this, and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Latent Space is growing the community of AI engineers.
And I think you have a slightly different audience of, I don't know what.

Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion have, to some extent.

Swyx [00:25:43]: Yeah. The framing can be different, though. I think tinkerers has this connotation of not serious, or like small. And if you framed it as like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.

Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.

Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?

Flo [00:26:09]: Solopreneurs, you know, small business owners. I think Andrew is an exception. Yeah. Yeah, yeah, he is.

Swyx [00:26:14]: He's an exception in many ways. Yep.

Alessio [00:26:16]: Just before we wrap on the use cases, is Rickrolling your customers an officially supported use case, or maybe tell that story?

Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support, and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link, and it was a Rickroll. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs.
We did like a database query, and we found, I think, like three or four other instances of it having happened before.

Swyx [00:27:12]: That's surprisingly low.

Flo [00:27:13]: It is low. And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't Rickroll people, please don't Rickroll.

Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.

Alessio [00:27:31]: And it pretended to be so many things. That's the thing.

Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.

Flo [00:27:41]: That's the idea of a YouTube video.

Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long slug.

Flo [00:27:52]: It would have been so good.

Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up with an eval that says, make sure my AI does not Rickroll my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such an open-ended problem space?

Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation, and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify the conversation a little bit after generating the eval. So it's very easy for us to spin up this kind of eval.

Alessio [00:28:36]: Do you use an off-the-shelf tool, like Braintrust, who was on the podcast? Or did you just build your own?

Flo [00:28:41]: We unfortunately built our own.
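The one-click "conversation into unit test" infrastructure described above can be sketched roughly like this: freeze a conversation as a golden example, replay its input through the agent, and compare. The names and the toy agent are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    history: list[str]     # conversation up to the agent's turn
    expected: str          # the reply we marked as good (or edited to be good)
    positive: bool = True  # negative examples use a modified transcript

def make_eval(conversation: list[str], good: bool = True) -> EvalCase:
    # "one click": the last agent reply becomes the expected output
    *history, last_reply = conversation
    return EvalCase(history=history, expected=last_reply, positive=good)

def run_eval(agent, case: EvalCase) -> bool:
    reply = agent(case.history)
    passed = reply == case.expected
    return passed if case.positive else not passed

# a trivial stand-in agent that always sends the tutorials link
agent = lambda history: "Here are our video tutorials: <link>"
case = make_eval(["Do you have video tutorials?",
                  "Here are our video tutorials: <link>"])
print(run_eval(agent, case))  # True
```

A Rickroll incident would become a negative example: keep the customer's question, mark the prank reply as what the agent must not produce.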
We're most likely going to switch to Braintrust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was very, very, very early. I wouldn't recommend building your own eval tool. There are better solutions out there, and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.

Swyx [00:29:04]: I was going to ask that, basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You were kind of showing off to me like how many evals you were running, right?

Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.

Swyx [00:29:21]: What is one thing that Braintrust has nailed that you always struggled to do?

Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, they're doing what we do with our eval tool, but better.

Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.

Flo [00:29:41]: Same here.

Swyx [00:29:42]: Yeah, like evals for Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on OpenAI to guarantee that the structured outputs are going to be good, right? We had Michelle sit where you sit, and she explained exactly how they do constrained grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform, because you just built the platform. You don't actually need to eval that much.

Flo [00:30:14]: Yeah.
In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is, it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on Rails.

Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?

Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause, because I'm like, I want the model to be a limit. And so no, the integrations are one, the core capabilities are one. So for example, we are investing in a system that's basically, I call it, forgive me the name, it's such a hack, the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent, and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making, embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today.
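The "poor man's RLHF" loop just described (save the user's edits, retrieve similar past edits into future contexts) might look something like this sketch, with the embedding similarity search stubbed out by crude word overlap. Everything here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Correction:
    task: str
    draft: str
    edited: str

class CorrectionStore:
    def __init__(self):
        self.items: list[Correction] = []

    def record(self, task: str, draft: str, edited: str) -> None:
        if draft != edited:  # only store actual corrections, not plain approvals
            self.items.append(Correction(task, draft, edited))

    def retrieve(self, task: str, k: int = 3) -> list[Correction]:
        # stand-in for a vector similarity search over embedded corrections
        scored = [(len(set(task.split()) & set(c.task.split())), c)
                  for c in self.items]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [c for _, c in scored[:k]]

store = CorrectionStore()
store.record("reply to pricing email",
             "We cost $99.",
             "We cost $99/month, billed annually.")
examples = store.retrieve("draft a pricing email reply")
# these before/after pairs get injected into the next task's context window
prompt_suffix = "\n".join(f"Before: {c.draft}\nAfter: {c.edited}"
                          for c in examples)
print(prompt_suffix)
```

The design choice is the same in-context trick as few-shot prompting: no weights change, the model just sees how you corrected it last time.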
It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4o? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4o is overhyped, frankly. We don't use 4o. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then prompt caching with 3.5 Sonnet cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like for caching to work, you need the prompt prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every following node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAI and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. 
Very good.Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.Flo [00:33:31]: No comment.Alessio [00:33:32]: I think the market is,Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. 
I used to be very cognitive architecture-pilled, and it was like, ah, and I had like a critic, and I had like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 Sonnet is as good as O1, but with none of all the crazy...Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.Flo [00:35:23]: You can just wait for O1 to get better.Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good. It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? 
Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Lindy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. 
Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. It's been clear to us that it was coming. My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. 
Look, when you're talking about orders of magnitude of difference in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually great because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, these sorts of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy today right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to. You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. 
The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to-market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand off. Yes, the vision, the culture, and the fundraising. Well, then you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recruiting and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, in this instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levers do you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. 
Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio: Space Age just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having order is important.Swyx [00:44:05]: I'm wrestling with that. 
There's two ways I encountered it recently. One with Lindy. When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up frontFlo [00:44:27]: stuffed everythingSwyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Claude Sonnet, and built up something working and just added to it over time and it just worked. And to me, that was the Factorio sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. 
Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work, every other month. The discussion comes up again. What was that discussion like? How did your feelings change? Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the futureFlo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. 
And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. 
And eventually, I was like, okay, this is it. We're not going to do this anymore.Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Sijbrandij from GitLab.Flo [00:49:14]: Mm-hmm.Swyx [00:49:15]: Matt MullenwegFlo [00:49:16]: used to be a hero.Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company
Music News: Pink Floyd and Joni Mitchell

In this episode of the Deadhead Cannabis Show, Larry Mishkin reflects on the intersection of music and cannabis in the wake of the recent elections. He delves into the Grateful Dead's legacy, highlighting a notable performance from 1973, and explores the lyrical depth of 'To Lay Me Down.' The conversation also touches on music news, including Pink Floyd's 'Dark Side of the Moon' and Joni Mitchell's recent birthday. The episode concludes with a discussion on recent research indicating that cannabis may serve as a substitute for more dangerous substances. This conversation explores the complex relationship between cannabis use and substance consumption among young adults, the implications of Florida's failed marijuana legalization initiative, and the potential of cannabis as a harm reduction tool for opioid use. It also highlights popular cannabis strains and their effects, alongside a cultural reflection on the Grateful Dead's music.

Chapters
00:00 Post-Election Reflections: Music and Cannabis
08:29 The Grateful Dead's Musical Legacy
14:48 Exploring the Lyrics: To Lay Me Down
21:59 Music News: Pink Floyd and Joni Mitchell
37:06 Weather Report Suite: A Musical Journey
43:10 Second Set Highlights: Mississippi Half-Step and Beyond
49:36 Marijuana Research: Substitution Effects
51:24 Cannabis Use Among Young Adults
56:13 Florida's Marijuana Legalization Initiative
01:05:01 Cannabis as a Tool for Opioid Harm Reduction
01:11:10 Strains of the Week and Cannabis Culture

Larry's Notes:
Grateful Dead
November 11, 1973 (51 years ago)
Winterland Arena
San Francisco, CA
Grateful Dead Live at Winterland Arena on 1973-11-11 : Free Borrow & Streaming : Internet Archive

Happy Veteran's Day. A very famous show from a very famous year. Many feel 1973 was the peak of the band's post-psychedelic era. 
Certainly right up there with 1977 as top years for the band, even by November they were still in full stride during a three night run at Winterland, this being the third and final night of the run. In 2008 the Dead released the box set: “Winterland 1973: The complete recordings” featuring shows from Nov. 9, 10 and 11, 1973. This was the Dead's second “complete recordings” release featuring all of the nights of a single run. The first was “Fillmore West, 1969, the Complete Recordings” from Feb. 27, 28 and March 1 and 2 (IMHO the best collection of live music ever released by the band). The band later released a follow up, Winterland 1977: The Complete Recordings a three night run June 7, 8 and 9, 1977 that is also an outstanding box set. Today's show has a 16 song first set, a six song second set and a three song encore, a true rarity for a Dead show of any era (other than NYE shows). The second set consists of ½ Step, Big River, Dark Star with MLBJ, Eyes of the World, China Doll and Sugar Magnolia and is as well played as any set ever played by the band. They were on fire for these three days. A great collection of music and killer three night run for those lucky enough to have snagged a ticket for any or all of the nights. Patrick Carr wrote in the NY Times that: “The Dead had learned how to conceive and perform a music which often induced something closely akin to the psychedelic experience; they were and are experts in the art and science of showing people another world, or a temporary altering (raising) of world consciousness. It sounds pseudomystical pretentious perhaps, but the fact is that it happens and it is intentional.” INTRO: Promised Land (show opener into Bertha/Greatest Story/Sugaree/Black Throated Wind) Track #1 0 – 2:10 "Promised Land" is a song lyric written by Chuck Berry to the melody of "Wabash Cannonball", an American folk song. The song was first recorded in this version by Berry in 1964 for his album St. Louis to Liverpool. 
Released in December 1964, it was Berry's fourth single issued following his prison term for a Mann Act conviction. The record peaked at #41 in the Billboard charts on January 16, 1965. Berry wrote the song while in prison, and borrowed an atlas from the prison library to plot the itinerary. In the lyrics, the singer (who refers to himself as "the poor boy") tells of his journey from Norfolk, Virginia, to the "Promised Land", Los Angeles, California, mentioning various cities in Southern states that he passes through on his journey. Describing himself as a "poor boy," the protagonist boards a Greyhound bus in Norfolk, Virginia that passes Raleigh, N.C., stops in Charlotte, North Carolina, and bypasses Rock Hill, South Carolina. The bus rolls out of Atlanta but breaks down, leaving him stranded in downtown Birmingham, Alabama. He then takes a train "across Mississippi clean" to New Orleans. From there, he goes to Houston, where "the people there who care a bit about me" buy him a silk suit, luggage and a plane ticket to Los Angeles. Upon landing in Los Angeles, he calls Norfolk, Virginia ("Tidewater four, ten-oh-nine") to tell the folks back home he made it to the "promised land." The lyric: "Swing low, sweet chariot, come down easy/Taxi to the terminal zone" refers to the gospel lyric: "Swing low, sweet Chariot, coming for to carry me Home" since both refer to a common destination, "The Promised Land," which in this case is California, reportedly a heaven on earth. Billboard called the song a "true blue Berry rocker with plenty of get up and go," adding that "rinky piano and wailing Berry electric guitar fills all in neatly."[2]Cash Box described it as "a 'pull-out-all-the-stops' rocker that Chuck pounds out solid sales authority" and "a real mover that should head out for hit territory in no time flat."[3] In 2021, it was listed at No. 342 on Rolling Stone's "Top 500 Greatest Songs of All Time". 
Apparently played by the Warlocks and the Grateful Dead in their earliest days, Bob Weir started playing this with the Dead in 1971, and it remained a regular right through to the band's last show ever in 1995. Among those deeply touched by Chuck's genius were Jerry Garcia and the Grateful Dead. They often paid homage to Chuck by weaving his songs into their performances, breathing new life into his timeless melodies. "Promised Land," with its relentless drive, became an anthem of journey and aspiration. Their electrifying renditions of "Johnny B. Goode" were not mere covers but jubilant celebrations of a narrative that resonated with the dreamer in all of us. The Grateful Dead's performances of "Around and Around" echoed Chuck's mastery of capturing life's cyclical rhythms—a dance of beginnings and endings, joy and sorrow. And when they took on "Run Rudolph Run," they infused the festive classic with their own psychedelic flair, bridging the gap between tradition and innovation. A moment etched in musical history was when Chuck Berry shared the stage with the Grateful Dead during their induction into the Rock and Roll Hall of Fame in 1994. The air was thick with reverence and electricity—a meeting of titans where the past, present, and future of rock converged in harmonious resonance. Again, in May 1995, Chuck opened for the Grateful Dead in Portland, Oregon. It was a night where legends collided, and the music swirled like a tempest, leaving a lasting impression on all who were fortunate enough to witness it. This version really rocks out. I especially love Keith's piano which is featured prominently in this clip.

Played: 430 times
First: May 28, 1971 at Winterland Arena, San Francisco, CA, USA
Last: July 9, 1995 at Soldier Field, Chicago, IL, USA

SHOW No. 
1: To Lay Me Down (out of Black Throated Wind/into El Paso/Ramble On Rose/Me and Bobby McGee) Track #6 2:21 – 4:20 David Dodd: “To Lay Me Down” is one of the magical trio of lyrics composed in a single afternoon in 1970 in London, “over a half-bottle of retsina,” according to Robert Hunter. The other two were “Ripple” and “Brokedown Palace.” Well, first—wouldn't we all like to have a day like that! And, second—what unites these three lyrics, aside from the fact that they were all written on the same day? Hunter wrote, in his foreword to The Complete Annotated Grateful Dead Lyrics: “And I wrote reams of bad songs, bitching about everything under the sun, which I kept to myself: Cast not thy swines before pearls. And once in a while something would sort of pop out of nowhere. The sunny London afternoon I wrote ‘Brokedown Palace,' ‘To Lay Me Down,' and ‘Ripple,' all keepers, was in no way typical, but it remains in my mind as the personal quintessence of the union between writer and Muse, a promising past and bright future prospects melding into one great glowing apocatastasis.” “‘To Lay me Down' was written a while before the others [on the Garcia album], on the same day as the lyrics to ‘Brokedown Palace' and ‘Ripple'—the second day of my first visit to England. I found myself left alone in Alan Trist's flat on Devonshire Terrace in West Kensington, with a supply of very nice thick linen paper, sun shining brightly through the window, a bottle of Greek Retsina wine at my elbow. The songs flowed like molten gold onto the page and stand as written. The images for ‘To Lay Me Down' were inspired at Hampstead Heath (the original title to the song) the day before—lying on the grass and clover on a day of swallowtailed clouds, across from Jack Straw's Castle [a pub, now closed and converted into flats--dd], reunited with the girlfriend of my youth, after a long separation.” Garcia's setting for the words is, like his music for those other two songs, perfect. 
The three-quarter time (notated as having a nine-eight feel), coupled with the gospel style of the melody and chords, makes for a dreamy, beauty-soaked song. I heard it on the radio today (yes, on the radio, yes, today—and no, not on a Grateful Dead Hour, but just in the course of regular programming), and it struck me that it was a gorgeous vehicle for Garcia's voice. By which I mean: for that strongly emotive, sweet but not sappy, rough but not unschooled instrument that was Garcia's alone. I have started to think that my usual recitation of where a song was first played, where it was last played, and where it was recorded by the band borders on pointless. All that info is readily available. What's interesting about the performance history of “To Lay Me Down” is that it was dropped from the rotation for more than 200 shows three times, and that its final performance, in 1992, came 125 shows after the penultimate one. The reappearance of the song, in the 1980 acoustic shows, came nearly six years after the previous performances in 1974. “Ripple” had a similar pattern, reappearing in those 1980 acoustic sets after 550 performances, or nearly ten years. Of the magical trio from that day of molten gold in West Kensington, “Brokedown Palace” had the most solid place in the Dead's performance rotation, with only one huge gap in its appearances—165 shows between 1977 and 1979. So, in terms of story, what can be discerned? The short version, for me: even if it's just for a day, even if it's just once more, even if it's just one last time—it's worth it. It's golden. It's home. This version is really great to listen to. Jerry's voice is still so young and strong. And the group singing works really well. Jerry also kills it with his lead guitar jamming. 
Released on “Garcia” in 1972
Played: 64 times
First: July 30, 1970 at The Matrix, San Francisco, CA, USA
Last: June 28, 1992 at Deer Creek Music Center, Noblesville, IN, USA

MUSIC NEWS:

Music Intro: Brain Damage, Pink Floyd
Pink Floyd - Brain Damage (2023 Remaster) 0:00 – 1:47

"Brain Damage" is the ninth track from English rock band Pink Floyd's 1973 album The Dark Side of the Moon. It was sung on record by Roger Waters (with harmonies by David Gilmour), who would continue to sing it on his solo tours. Gilmour sang the lead vocal when Pink Floyd performed it live on their 1994 tour (as can be heard on Pulse). The band originally called this track "Lunatic" during live performances and recording sessions. "Brain Damage" was released as a digital single on 19 January 2023 to promote The Dark Side of the Moon 50th Anniversary box set. The uncredited manic laughter is that of Pink Floyd's then-road manager, Peter Watts.

The Dark Side of the Moon is the eighth studio album by the English rock band Pink Floyd, released on 1 March 1973, by Harvest Records in the UK and Capitol Records in the US. Developed during live performances before recording began, it was conceived as a concept album that would focus on the pressures faced by the band during their arduous lifestyle, and also deal with the mental health problems of the former band member Syd Barrett, who had departed the group in 1968. New material was recorded in two sessions in 1972 and 1973 at EMI Studios (now Abbey Road Studios) in London. The Dark Side of the Moon is among the most critically acclaimed albums and often features in professional listings of the greatest of all time. It brought Pink Floyd international fame, wealth and plaudits to all four band members. A blockbuster release of the album era, it also propelled record sales throughout the music industry during the 1970s. 
The Dark Side of the Moon is certified 14x platinum in the United Kingdom, and topped the US Billboard Top LPs & Tape chart, where it has charted for 990 weeks. By 2013, The Dark Side of the Moon had sold over 45 million copies worldwide, making it the band's best-selling release, the best-selling album of the 1970s, and the fourth-best-selling album in history. In 2012, the album was selected for preservation in the United States National Recording Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". It was inducted into the Grammy Hall of Fame in 1999.

David Gilmour Addresses Synchronicity Theory Between ‘The Dark Side of the Moon' and ‘Wizard of Oz'

On Thursday, November 7, 2024, Pink Floyd's David Gilmour appeared on The Tonight Show Starring Jimmy Fallon amid his extensive run at New York's Madison Square Garden, where he is supporting his latest solo release, Luck and Strange. During the music industry legend's stop by the late-night talk show, he spoke with the program's host, who questioned the theory of synchronicity between The Dark Side of the Moon and The Wizard of Oz, commonly referred to as the Dark Side of the Rainbow.

“You said that you think it's your best work since Dark Side of the Moon,” Fallon questioned at the top of the segment, comparing Gilmour's comments regarding his latest release and the Pink Floyd classic. “When we finished Dark Side, there was a lot of crossfades and stuff between all the tracks. They had all to be done separately and then they all have to be edited in the old days before Pro Tools. When we finally finished, we sat down in the control room at Abbey Road and listened to it all the way through. And, wow. 
I–I guess all of us–have the feeling that it was something quite amazing–that we got it, and at the same point on this album, I had a very similar feeling, which is why I said that.” Fallon stewed on Luck and Strange during a series of follow-up questions that assisted in painting a portrait of familial involvement during the making of Gilmour's 2024 release–harnessing the conversation to the artist's preferred homebred approach before they segued into the realm of the Emerald City. Fallon landed on the topic of Oz during a bit aimed at busting rumors that have circulated throughout the musician's 60-year tenure in the spotlight.

“The Pink Floyd album, Dark Side of the Moon, was written to synchronize with the movie Wizard of Oz,” Fallon suggested, prompting Gilmour's humor-tinged response, “Well, of course it was.” Fallon threw his hands up in response, acting on the comedic angle, before the musician clarified, “No, no. We listened to it, Polly and I, years ago–” Fallon stopped the artist to ask, “There's no planning that out?” Gilmour continued, “No. No, I mean, I only heard about it years later. Somebody said you put the needle on–vinyl that is–and on the third–you know you got the film running somehow–and on the third roar of the MGM lion, you put the needle on for the beginning of Dark Side, and there's these strange synchronicities that happen.” Fallon asked if Gilmour had ever tested the theory, to which he exclaimed, “Yeah!” He went on to admit, “And there are these strange coincidences–I'll call them coincidences.”

Joni Mitchell turns 81 - Joni Mitchell was born on Nov. 7th in 1943, making her 81 this past Thursday. Mitchell began her career in small nightclubs in Saskatoon, Saskatchewan, Canada, and grew to become one of the most influential singer-songwriters in modern music history. Rising to fame during the 1960s, Mitchell became a key narrator in the folk music movement, alongside others like Bob Dylan. 
Over the decades, she has released 19 studio albums, including the seminal “Blue,” which was rated the third best album ever made in Rolling Stone's 2020 list of the "500 Greatest Albums of All Time.” In 2023, Joni Mitchell at Newport was released, a live album of her 2022 performance at the Newport Folk Festival. More recently she was the featured performer at the Joni Jam at the Gorge in George, WA in June 2023.

3. Dan “Lebo” Lebowitz to Celebrate 50th Birthday at Sweetwater Music Hall with Members of ALO, Tea Leaf Green and More

Sweetwater Music Hall (in Mill Valley, CA) has announced details pertaining to Dan “Lebo” Lebowitz's 50th Birthday Bash. The event is slated to take place on Saturday, November 23, 2024, and functions as a celebratory occasion to honor the jam stalwart and beloved member of the Bay Area music scene's five-decade ride. The six-string virtuoso, known for his work with Animal Liberation Orchestra (ALO), Phil Lesh & Friends, and his own self-titled Friends project, has tapped an all-star group of regional talent to assist during the live show. Appearing on the birthday lineup, in addition to the bandleader, are Vicki Randle (percussion, vocals; The Tonight Show Band), Steve Adams (bass; ALO), Trevor Garrod (keys; Tea Leaf Green) and Scott Rager (drums; Tea Leaf Green).

“Possessing a signature tone, the vehicle for his fluid, buttery sound is a flat top acoustic guitar that he has personally sliced and diced into an electric flat top, with a vintage style humbucker pickup. Inherently committed to an improvisational approach, Lebo embodies the realm of melodic and soulful sounds,” the press release notes, drawing on the unique factors which have made Lebo a standout amongst his musical contemporaries. 
As an added distinction, playing into the surprise and celebration of the birthday event, special guest appearances are slated to occur, as referenced via press release and the artist's post on Instagram, where he noted additional inclusions as TBA.

SHOW No. 2: Weather Report Suite Prelude (out of China>Rider/Me & My Uncle/Loose Lucy) Track #14 3:10 – end

INTO Weather Report Suite Part I (out of WRS Prelude/into WRS Part II (Let It Grow)/Set break - 16 songs) Track #15 0:00 – 1:03

David Dodd: This week, by request, we're looking at “Weather Report Suite” (Prelude, Part 1, and Part 2). For a short time, the three pieces that comprise the Suite were played as such, but that was relatively short-lived by Grateful Dead standards. The Prelude debuted in November 1972, originally as a separate piece from its eventual companions. The Dead played it, according to DeadBase, four more times in the spring of 1973 before it was first matched up with Weather Report Suite Parts 1 & 2, in September of that year. It was played regularly through October of 1974, and then dropped from the repertoire.

The instrumental “Prelude,” composed by Weir, sets the stage for the two pieces to follow. I think it's one of the most beautiful little pieces of music I know—I have never once skipped through it over years of listening. I just let it wash over me and know that its simplicity and beauty are preparing me for the melancholy of Part 1, and the sometimes epic grandeur of Part 2.

Part 1 is a song co-written with Eric Andersen, a well-known singer-songwriter who wrote the classic “Thirsty Boots.” He was on the Festival Express Tour (of “Might As Well” fame) across Canada along with the Dead, and I'm guessing that's where Weir and he met and concocted this piece. Happy to be corrected on that by anyone who knows better. Andersen and Weir share the lyric credit, and the music is credited to Weir. 
Once it appeared in the rotation, in September 1973, it stayed in the repertoire only as long as the Prelude did, dropping entirely in October 1974. The song addresses the seasons, and their changing mirrors the singer's state of mind as he reflects on the coming of love, and maybe its going, too: a circle of seasons, and the blooming and fading of roses. I particularly like the line “And seasons will end in tumbled rhyme and little change, the wind and rain.” There's something very hopeful buried in the song's melancholy. Is that melancholy just a projection of mine? I think there's something about Weir's singing that gets at that emotion. Loss, and the hope that there might be new love.

Weather Report Suite, Part 2 (“Let It Grow”) is a very different beast. It remained steadily in the rotation for the next 21 years after its debut, and the band played it 276 times. Its season of rarity was 1979, when it was played only three times, but otherwise, it was not far from the rotation. It could be stretched into a lengthy jamming tune (clocking in at over 15 minutes several times), building to a thundering crescendo. And the “Weather Report” aspect of the song is what was really the most fun many times.

Released on Wake of The Flood in 1973.

WRS Prelude and Part I:
Played: 46 times
First: September 8, 1973 at Nassau Veterans Memorial Coliseum, Uniondale, NY, USA
Last: October 18, 1974 at Winterland Arena, San Francisco, CA, USA

SHOW No. 3: Mississippi Half Step Uptown Toodeloo (Second Set Opener/into Big River/Dark Star) Track #17 3:17 – 4:55

Released on Wake of the Flood in 1973. Mississippi Half-Step Uptown Toodeloo was first performed live by the Grateful Dead on July 16, 1972. It was a frequent part of the repertoire through to 1974. From 1976 onward it was played less frequently, with usually between 5 and 15 performances each year. It was not played at all in 1983 and 1984. The last performance was in July 1995. In total it was performed around 236 times. 
The majority of performances from 1978 onward were as the opening song of a show. A Hunter/Garcia special. Great story. Great lyrics: “what's the point of calling shots, this cue ain't straight in line. Cue ball is made of Styrofoam and no one's got the time.” Always one of my favorite songs to hear in concert. ½ Step>Franklin's were especially fun as a one-two show-opener punch.

Played: 236 times
First: July 16, 1972 at Dillon Stadium, Hartford, CT, USA
Last: July 6, 1995 at the Riverport Amphitheatre in Maryland Heights (St. Louis), MO

MJ NEWS:

INTRO MUSIC: Willin', Little Feat
Little Feat - Willin' sung by Lowell George, Live 1977. HQ Video. 0:10 – 1:32

"Willin'" is a song written by American musician Lowell George, and first recorded with his group Little Feat on their 1971 debut album. The song has since been performed by a variety of artists. George wrote the song while he was a member of the Mothers of Invention. When George sang an early version of the song for bandleader Frank Zappa, Zappa suggested that the guitarist form his own band rather than continue under Zappa's tutelage. He did just that, and the song was subsequently recorded by Lowell's band Little Feat. The song was included on Little Feat's 1971 self-titled debut album. The band re-recorded the song at a slower tempo to much greater success on their 1972 Sailin' Shoes album. A live version recorded in 1977 appears on their 1978 album Waiting for Columbus.

The lyrics are from the point of view of a truck driver who has driven "from Tucson to Tucumcari (NM), Tehachapi (CA) to Tonopah (AZ)" and "smuggled some smokes and folks from Mexico"; the song has become a trucker anthem. And of course, he asks for “weed, whites (speed) and wine” to get him through his drive.

1. Using Marijuana Is Tied To Lower Consumption Of Alcohol, Opioids And Other Drugs, New Study Reveals

2. 
Why Florida's Marijuana Legalization Ballot Initiative Failed Despite Trump Endorsement, Historic Funding And Majority Voter Support

3. Marijuana Has ‘Great Deal Of Potential' To Treat Opioid Use Disorder, Study Finds, Predicting It'll Become More Common In Treatment

4. Colorado Springs Voters Approve Two Contradictory Marijuana Ballot Measures To Both Allow And Ban Recreational Sales

Strains of the week:

Sub Zero - Sub Zero is a potent Indica-dominant hybrid cannabis strain that combines the robust genetics of Afghan, Colombian, and Mexican origins. This marijuana strain offers a complex flavor profile with notes of apple, menthol, chestnut, lime, and berry, providing a unique and refreshing sensory experience. The aroma of Sub Zero is as intriguing as its flavor, characterized by a rich combination of woody, earthy, and citrus notes, thanks to a terpene profile rich in Humulene, Limonene, Linalool, and Carene. These terpenes not only enhance the flavor but also contribute to the strain's therapeutic properties.

Apple Fritter - Apple Fritter, also known as “Apple Fritters,” is a rare evenly balanced hybrid strain (50% indica/50% sativa) created through crossing the classic Sour Apple X Animal Cookies strains. Best known for making the High Times' 2016 “World's Strongest Strains” List, this baby brings on a hard-hitting high and super delicious flavor that will have you begging for more after just one taste.

Extracts:
Dulce Limon – sativa-dominant hybrid
Pineapple Fizz – slightly indica-dominant hybrid strain

SHOW No. 4: Dark Star (Mind Left Body Jam) Track #18 34:45 – end

This is the name given to a 4-chord sequence played as a jam by the Grateful Dead. It is thought by some to be related to the Paul Kantner song "Your Mind Has Left Your Body." The title "Mind Left Body Jam" was originally used by DeadBase. 
The first Grateful Dead CD to include a version was "Dozin' At The Knick", where the title was "Mud Love Buddy Jam" in a humorous reference to the DeadBase/taper title. But subsequent releases have adopted the "Mind Left Body Jam" title. Here, it comes out of a 36-minute Dark Star that many say is one of the best ever and links it to an excellent Eyes of the World. Fun to feature one of the band's thematic jams every now and then. The truly improvisational side of the Dead and their live performances.

Played: 9 times
First: October 19, 1973 at Jim Norick Arena, Oklahoma City, OK, USA
Last: March 24, 1990 at Knickerbocker Arena, Albany, NY, USA

INTO Eyes of the World (into China Doll/Sugar Mag as second set closer) Track #19 0:00 – 2:25

David Dodd: “Eyes of the World” is a Robert Hunter lyric set by Jerry Garcia. It appeared in concert for the first time in that same show on February 9, 1973, at the Maples Pavilion at Stanford University, along with “They Love Each Other,” “China Doll,” “Here Comes Sunshine,” “Loose Lucy,” “Row Jimmy,” and “Wave That Flag.” Its final performance by the Dead was on July 6, 1995, at Riverport Amphitheatre, in Maryland Heights, Missouri, when it opened the second set, and led into “Unbroken Chain.” It was performed 381 times, with 49 of those performances occurring in 1973. It was released on “Wake of the Flood” in November, 1973.

(I have begun to notice something I never saw before in the song statistics in DeadBase—the 49 performances in 1973 made me look twice at the song-by-song table of performances broken out by year in DeadBase X, which clearly shows the pattern of new songs being played in heavy rotation when they are first broken out, and then either falling away entirely, or settling into a more steady, less frequent pattern as the years go by. Makes absolute sense!) 
Sometimes criticized, lyrically, as being a bit too hippy-dippy for its own good, “Eyes of the World” might be heard as conveying a message of hope, viewing human consciousness as having value for the planet as a whole. There are echoes in the song of a wide range of literary and musical influences, from Blaise Pascal to (perhaps) Ken Kesey; from talk of a redeemer to the title of the song itself. In an interview, Hunter made an interesting statement about the “songs of our own,” which appear twice in “Eyes of the World.” He said that he thinks it's possible each of us may have some tune, or song, that we hum or sing to ourselves, nothing particularly amazing or fine, necessarily, that is our own song. Our song. The song leaves plenty of room for our own interpretation of certain lines and sections. The verse about the redeemer fading away, being followed by a clay-laden wagon. The myriad of images of birds, beeches, flowers, seeds, horses....

One of my all-time favorite songs, Dead or otherwise. A perfect jam tune. Great lyrics, a fun sing-along chorus and some of the finest music you will ever hear between the verses. I first really fell for it one night my junior year at Michigan, at a small show in the Michigan Union by a Cleveland-based Dead cover band called Oroboros. We were all dancing and this tune just seemed to go on forever; it might have been whatever we were on at the time, but regardless, this tune really caught my attention. I then did the standard Dead dive to find as many versions of the song as I could on the limited live Dead releases at that time and via show tapes. It often followed Estimated Prophet in the first part of the second set: China/Rider/Estimated/Eyes or Scarlet/Fire/Estimated/Eyes and sometimes even Help/Slip/Frank/Estimated/Eyes. Regardless of where it appeared, hearing the opening notes was magical because you knew that for the next 10 – 12 minutes Jerry had you in the palm of his hand. 
This is just a great version, coming out of the Dark Star/Mind Left Body Jam and then continuing on into China Doll (two great Jerry tunes in a row!) and a standout Sugar Mag to close out the second set. Any '73 Eyes will leave you in awe and this one is one of the best.

Played: 382 times
First: February 9, 1973 at Maples Pavilion, Stanford University, Stanford, CA, USA
Last: July 6, 1995 at Riverport Amphitheatre, Maryland Heights (St. Louis), MO

OUTRO: And We Bid You Goodnight (encore out of Uncle John's Band/Johnny B. Goode) 3-song encore!! Track #25 :40 – 3:03

The Grateful Dead performed the song a number of times in the 1968-1970 and 1989-1990 periods but infrequently during the rest of their performing career. On Grateful Dead recordings the title used is either And We Bid You Goodnight or We Bid You Goodnight. The Grateful Dead version of this traditional 'lowering down' funeral song originates from a recording by Joseph Spence and the Pindar Family which was released in 1965. The title used on that recording, as on many others, is I Bid You Good Night. This song appears to share a common ancestry with the song Sleep On Beloved from North East England.

I got to see it the first night at Alpine Valley in 1989 (the Dead's last year at Alpine) and it really caught the crowd off guard. Great reaction from the Deadheads. Kind of a chills-down-your-spine thing. I was with One-Armed Lary and Alex, both of whom had been with us at Deer Creek right before. Lary stayed for all three nights but Alex had to take off after the first show. Great times.

Played: 69 times
First: January 26, 1968 at Eagles Auditorium, Seattle, WA, USA
Last: September 26, 1991 at Boston Garden, Boston, MA, USA

Thank you for listening. Join us again next week for more music news, marijuana news and another featured Grateful Dead show. Have a great week, have fun, be safe and as always, enjoy your cannabis responsibly. 
Produced by PodConx

Deadhead Cannabis Show - https://podconx.com/podcasts/deadhead-cannabis-show
Larry Mishkin - https://podconx.com/guests/larry-mishkin
Rob Hunt - https://podconx.com/guests/rob-hunt
Jay Blakesberg - https://podconx.com/guests/jay-blakesberg
Sound Designed by Jamie Humiston - https://www.linkedin.com/in/jamie-humiston-91718b1b3/
Recorded on Squadcast
In this seventh episode of the inaugural season, I reach out to individuals who have legislated against pit bull dogs while serving political office, including the two former Ohio state lawmakers responsible for authoring the pit bull law that passed in 1987. Introducing the “I Am Human. This Is My Dog.” podcast – the show devoted to putting the individual back into the dog, as well as their human, while also examining difficult, and oftentimes controversial, animal welfare related topics that have been largely ignored, but are critical for real progress. For more information about this and future episodes, visit: riverfirefilms.com/podcast CREDITS: Produced by: River Fire Films, LLC Hosted by: Jeff Theman Introduction Voiceover by: Nat Lauzon Introduction Music: “Crimson Fly” by Huma-Huma MUSIC CREDITS: "Primordial Waters" by The David Roy Collection "The Beginning" by Nobou "A New Dawn" by Nobou "Watching" by The David Roy Collection "Time Goes By" by Swan Productions "Quiet Stillness" by Patrick Rundblad "Whispers In The Attic" by Cosmo Lawson "Lux" by Tenacious Orchestra "Slow Momentum" by Mark Fabian --- Support this podcast: https://podcasters.spotify.com/pod/show/riverfirefilms/support
Is it inherently racist? Hour 3 10/3/2024 - The Dana & Parks Podcast. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want! © 2021 Audacy, Inc.
Episode 18 includes the following sections:- Designing to be inherently good- Keeping products and materials in use Season 6 of Purpose Inspired is based on the book, Thriving: The Breakthrough Movement to Regenerate Nature, Society and the Economy, as read by the author and host of this podcast, Wayne Visser.Thriving is available in the following formats:- Hardback- Ebook- Audiobook
JLP Wed 9-25-24 Bill Lockwood; black callers; great advice… Hr 1 GUEST, Bill Lockwood: Communism. Amnesty. Assassination. Kamala, Neocons. Christians, soft Mike Pence. // Hr 2 Pro-black callers: blame gov't! Supers… JLP sings. Calls: Canada. Little Malcolm X. How to slow down? // Hr 3 Manhood Hour: Israel-Hezbollah war. Calls… Distraught wife. Thoughts. School "scream boxes." // Biblical Question: Why is your life one collision after another? GUEST INFO: Check out Patriotic Pulpit and Bible Studies with Bill Lockwood. Support via https://americanlibertywithbilllockwood.com Today's show sponsored by SEVENWOOD FINANCIAL SERVICES — Your experts in insuring retirement income — Schedule free consultation https://www.sevenwoodfinancialservices.com/eric.html TIMESTAMPS (0:00:00) HOUR 1 (0:04:50) Bill Lockwood: End game. Amnesty; illegal population. (0:12:00) Second assassination attempt. Useful idiots. (0:19:45) Kamala Harris, puppet. Will Trump win? (0:25:00) Democrats, Neocons: Socialists BREAK (0:32:05) …Christians not voting. Little guy Mike Pence. God in control? (0:38:10) Israel-Hezbollah, Iran. UN done any good? Air-headed Reps (0:44:25) MAURICE, NY, 1st: Clown! Why Trump? Why Kamala? (0:51:23) MAURICE: Love Trump? Personal attack! Stoop to his level. Live your own life? (0:55:00) NEWS: Inflated eggs. Tel Aviv. Storm Helene. Secret Service. (1:00:55) HOUR 2: BQ for the lost. (1:03:40) ARMANTE, 27, NV, 1st: Inherently harmonious. Black Panthers (1:12:00) ARMANTE: Asperger's? We gave you Obama. I'm black. (1:19:19) JLP: Catch yourself when you're about to blame. (1:20:34) JOHN, KY: Agree, super articulate. Gov't keeps us down. C—n character. (1:23:10) Supers: BQ, Jesus, Bible Thumper, tongues (1:32:19) Supers: Guardian angel? JLP sings "Jehovah Jireh". Read guests' books? (1:40:10) ELI, Canada… sense, Haitian, thank you, nice call! (1:46:35) WILLIAM, CA: Armante, little Malcolm X; black parents, Panthers (1:51:25) JAY, PA, 1st: Fast talker, slow learner. 
How to slow down? Forgave mama. James 1: 19 (1:55:00) NEWS: Ukraine aid. Drug prices. Brett Favre. 988 hotline. (2:00:55) HOUR 3 (2:03:45) Manhood Hour: Kamala chirp (2:05:45) Israel-Hezbollah war, beautiful rugs, Biden (2:11:00) War not the answer. JLP visited Israel. Reveals secrets? (2:13:33) DEBORAH, IL: Victims, accountability, black slums; voting (2:22:00) ASHLEY, CA: wants husband to tell her. Extremes. Relax, let life happen. (2:31:55) Announcements (2:33:55) ASHLEY: Calm down (2:36:03) AARON, MD: Identified with thoughts, emotions unnecessary (2:41:45) School "scream boxes" (2:43:22) Man and woman fight: Anger is evil (2:45:20) CHARLES, MI: Punchie! Church. Let people live their lives! (2:46:57) JENNIFER, CO, 1st, mother of 8. Cambodian Hebrew husband. Home school. (2:48:55) Supers: Not worried. BQ. Blame. Gates of Hell. Blacks. (2:56:40) Closing
Whether you got nothing or tons-- the Human Condition is everyone's. Complain if you like, but keep it in check, focusing on failure will just leave you a wreck!
We all sometimes fall, Get twisted, confused, take our eye off the ball, But falling doesn't mean failure—no. Failure only happens when… After we fall we quit and refuse to ever get up again.
www.missingwitches.com/lammas-2024-magic-is-inherently-political-with-una-maria-blyth-and-loretta-ledesma

Ùna
Instagram: @unaofthepeatbog
Patreon: www.patreon.com/unaofthepeatbog

Loretta
Instagram: @thedeathwitch
RitualCravt: www.ritualcravt.com/readers/loretta-ledesma
The Death Witch: www.thedeathwitch.black

About Missing Witches
Amy Torok and Risa Dickens produce the Missing Witches Podcast. We do every aspect from research to recording; it is a DIY labour of love and craft. Missing Witches is entirely member-supported, and getting to know the members of our Coven has been the most fun, electrifying, unexpectedly radical part of the project. These days the Missing Witches Coven gathers in our private, online coven circle to offer each other collaborative courses in ritual, weaving, divination, and more; we organize writing groups and witchy book clubs; and we gather on the Full and New Moon from all over the world. Our coven includes solitary practitioners, community leaders, techno pagans, crones, baby witches, neuroqueers, and folks who hug trees and have just been looking for their people. Our coven is trans-inclusive, anti-racist, feminist, pro-science, anti-ableist, and full of love. If that sounds like your people, come find out more. Please know that we've been missing YOU. https://www.missingwitches.com/join-the-coven/
Our culture seems to dive headlong into a more and more polarized discussion around the nuance of nutrition. It seems that everyone has a very strong opinion regarding which foods are "Good" or "Bad", and it leads many people to (unknowingly, many times) create moral boundaries around certain foods. Often, we feel "bad" or "stupid" for allowing ourselves to eat foods like chips, fries, or pizza, while we only feel like we've done "good" when we eat foods like salads, chicken breast, or even when we avoid food altogether. Today, Laura and Jonny dive a little deeper into the implications of this kind of thinking as Laura discusses how she selects her own foods. They talk through the identity markers associated with dieting, and how the different roles a person plays can affect their own "nutritional worldview". Want to learn more about the Streamline Nutrition System? Join our newsletter for up-to-date info and to be first in line to hear when the System is officially available to the public! As a "Thank You", we're giving away FREE access to the Streamline Nutrition Calculator, the tool that we use with clients to help them create a custom macronutrient profile suited to their goals. Download it for FREE here: https://streamline-training-systems.ck.page/03698bd059
So, we do not have an inherent sin nature. We are children of the Aeons of God. We are children of the Fullness, and it's actually an insult to the Fullness and to the Son of God to say that their children—for are we not the children of God? Are we not brothers and sisters of Jesus?—it's a big insult to the Aeons and the angels and the Son of God that made us to say that we're inherently evil. And it's not because we fell. The Fall was instigated long before the humans came along. The Fall is the nature of our material universe, that's all. It's basically metaphorical language for moving from a different realm, a different home—from the ethereal non-material space of heaven, we might call it, or the Fullness of God.
Do you ever get the sinking feeling that, no matter how hard you work or what you do, it's never enough? Our culture is stuck in an old paradigm that views success as a destination rather than a journey, attainable by only the lucky few at the top. But what if there was another way? For insight into her “new paradigm of success,” we are joined by Brooke Taylor, a Transformational Career Coach for women, former Marketing Lead at Google, and global expert in a phenomenon she has coined the Success Wound, which is the pain high-achievers experience when they mistake their success for self-worth. In this episode, Brooke breaks down some of the most common misconceptions that high-achievers have about success and shares her game-changing methodology to create a powerfully aligned life on your own terms and transform your relationship with success forever. Tuning in, you'll learn about the five types of unfulfilled achievers, how your Success Wound might be driving your “success ideal,” and actionable steps you can take to begin to heal. If you're ready to build a dream career (and an even better life), this episode is for you! Key Points From This Episode: Defining the Success Wound and how it manifests differently in men and women. [04:57] What your success ideal might look like when it's driven by your Success Wound. [11:16] Common misconceptions about success and the five "unfulfilled achiever” archetypes. [14:16] How to identify whether you're operating from your wounding or your wholeness. [18:40] The sometimes unseen impact of paradoxical expectations of success for women. [23:25] Inherently patriarchal capitalistic cultural notions that inform our success ideals. [27:33] Why we need to shift our being, working, and thinking to heal the Success Wound. [28:52] What a state of aligned ambition looks like and why it's so important. 
[32:09] For More Information: Brooke Taylor Coaching Brooke Taylor on LinkedIn Brooke Taylor on Instagram Links Mentioned in Today's Episode: To discover what 1% of female leaders know that you don't, sign up for Brooke's Finally Fulfilled Group Coaching Program! Get proven steps to reclaim your fulfillment, self-worth, and career clarity with her Healing the Success Wound Mini-Course. What type of unfulfilled achiever are you? Find out with this quiz. Read ‘Successful Women Share Advice on How to Overcome Imposter Syndrome at Work'. Check out the ‘Top Career Coaches For Women', featuring Kathy and Brooke! Understand what motivates you to take action with Kathy's Dominant Action Style Quiz. Watch 'America Ferrera's Iconic Barbie Speech'. Get your copy of Never Enough: When Achievement Culture Becomes Toxic – and What We Can Do About It by Jennifer Breheny Wallace. ——————— IS IT FINALLY TIME FOR A TRUE SHIFT IN YOUR CAREER AND LEADERSHIP? Do you feel ready and excited to make the essential changes you've been longing for in your career but need some empowering support to begin? That help is here! Join me as I coach and guide you through powerful, proven steps that unlock your fullest and happiest career potential. For a limited time, take advantage of my 10% discount for you, my amazing Finding Brave listeners. Save 10% on both my top-rated one-on-one 6-session Career & Leadership Breakthrough coaching program AND my Most Powerful You video training course that will help you close the 7 most damaging power and confidence gaps that are blocking thousands of professionals from the success, reward, and impact they want and deserve. You can participate in both of these programs from the comfort of your own home, at your own pace. Now's the time. Don't wait! Build the career you've been longing for this year. I'd love to help you. REGISTER NOW and use the 10% discount code BRAVEPOD10 to save 10% on these programs TODAY! 
Career & Leadership Breakthrough 6-session Program The Most Powerful You Self-Paced Video Training Program ——————— Need some great podcast production support? Check out We Edit Podcasts! Hi, folks! Kathy here. So, are you thinking of launching a new podcast or have you been at it a while and recognize it's time for more or better production help to create the best podcast you can? I totally understand — I've been podcasting for over 6 years and know how challenging it can be. That's why I'm very excited to share key info about the great production team I'm using called We Edit Podcasts. I've been working with them for well over a year, and I've been so happy with the results! They're a full-service production agency and their services give me access to a wonderful team of seasoned audio engineers and editors who help create a polished, professional sound. And they work hard to ensure that my particular podcasting approach and style come through in every episode. They also help me make sure my guests are reflected in the best possible light through the creation of terrific show notes, which is an important part of the show for me. Their process is easy and streamlined, and their responsiveness and customer service are terrific too. If you're ready for better production help, definitely check them out and take advantage of their FREE trial episode, allowing you to sample their process and quality to see if it's a great fit for you. I'm confident you'll love them. Just paste this link into your browser: >> http://weeditpodcasts.com/findingbrave
The trio is back, and this time to discuss the science of saturated fat. For decades, saturated fat was widely blamed for a dramatic rise in rates of obesity and heart disease. In recent years, that narrative has been challenged by proponents of increasingly popular ketogenic and carnivorous diets. For many, the back and forth on this topic is dizzying and confusing. Fortunately, we have Dr. Trexler to walk us through several recent studies to help determine whether saturated fat is actually inherently more fattening than other fat sources, either via its impact on energy expenditure or appetite; then how it plays out in the real world; and finally, to discuss what you need to know, and what, if anything, you should change about your diet.
Hey, welcome to today's episode of Shrink for the Shy Guy! It's Dr. Aziz and I'm excited to be with you. How are you doing today? Are you feeling free? Self-confident? On your own side? Capable? Inherently worthy? Or maybe not? Wherever you are today, that's okay. Sometimes people think that if they've been listening to this show, reading my books, or practicing these concepts for a while, they're supposed to feel confident all the time. And if they don't, it feels like a personal failing. Let's clear that up right now—there is no perfection here. Even after all these years of teaching this stuff, I can still experience self-criticism, anxiety, or worry. But I can also not run those patterns. The key is to have the potential for liberation where you can sometimes run those social anxiety patterns and sometimes not. So today's episode is titled "What If Their Thoughts About You Don't Matter?" This isn't about forcing yourself to not care about what people think. Instead, we're going to soften the clinging worry about others' thoughts and judgments. If you find this show helpful, would you consider leaving a review on Apple Podcasts, Spotify, or wherever you listen? Those reviews help the show reach more people who might benefit from it, spreading liberation. Imagine if you could feel free even if people have negative thoughts about you. What if their judgments don't matter so much? Today, we're exploring that idea. Judgments often meet needs for certainty and significance. If we can see this with compassion and curiosity, we can start to liberate ourselves from the weight of others' thoughts. Stay tuned as we dive deeper into this topic and, as always, thank you for being with me today. Until next time, may you have the courage to be who you are and to know on a deep level that you're awesome. -------- Have you ever found yourself paralyzed by the fear of what others think about you? The constant worry about their judgments can be suffocating. 
But what if their thoughts about you don't matter? Imagine the freedom you'd feel if you could let go of that fear. In today's episode of Shrink for the Shy Guy, Dr. Aziz dives into this very topic, offering insights that could transform your life. The Trap of Social Anxiety "Social anxiety patterns often involve hyper-focusing on yourself, imagining others are judging you, and trying to control the outcome to make sure people like you." This is a common experience for many professionals. The fear of judgment can lead to avoiding social interactions, which in turn increases feelings of isolation and disconnection. Understanding Judgment People's judgments are often more about them than you. Dr. Aziz explains, "When someone judges you, they might be trying to meet their own needs for certainty or significance." Recognizing this can help you see that their thoughts don't hold as much power as you might believe. Think about an elderly relative who criticizes someone's outfit. Do those judgments really matter? Probably not. Similarly, the negative thoughts others might have about you are often fleeting and inconsequential. Shifting Your Perspective To overcome the fear of judgment, Dr. Aziz suggests a shift in perspective: Identify Your Fears: Write down the judgments you fear the most. This could be fears of being seen as awkward, stupid, or desperate. Reflect on These Judgments: Consider if these are judgments you frequently place on yourself. Understand that others' judgments often stem from their insecurities. Practice Exposure: Look at these fears and challenge their power over you. Recognize that everyone has judgments and that they don't define you. Embrace the Journey Building confidence is a journey, not a destination. It involves taking consistent action, facing fears, and practicing self-compassion. Remember, the goal isn't to eliminate fear but to learn to live with it and not let it control you. Final Thoughts What if their thoughts about you don't matter? 
Imagine the freedom and confidence you'd feel. Start small, practice these steps, and gradually build your resilience. For more resources, visit www.socialconfidencecenter.com, where you can find free courses and tools to help you on your journey to confidence. Until next time, may you have the courage to be who you are and to know on a deep level that you are awesome. Thanks for listening to Shrink for the Shy Guy with Dr. Aziz. If you know anyone who can benefit from what you've just heard, please let them know and send them a link to shrinkfortheshyguy.com. For free blogs, e-books, and training videos on overcoming shyness and increasing confidence, go to socialconfidencecenter.com.
Producer Dan Cook is also a pastor, and shares some faith-based thoughts on the potential value of change and dangers of closing yourself off to new information and perspectives.
This week, we've got data security being both funded AND acquired. We discuss Lacework's fall from unicorn status and why rumors that it went to Fortinet for considerably more than Wiz was willing to pay make sense. Microsoft Recall and Apple Intelligence are the perfect bookends for a conversation about the importance of handling consumer privacy concerns at launch. How can the Snowflake breach be both one of the biggest breaches ever and not a breach at all (for Snowflake, at least)? It's time to have a conversation about shared responsibilities, and when the line between CSP and customer needs to shift. The CSA's AI Resilience Benchmark leaves much to be desired (like, an actual usable benchmark) and Greg Linares tells a wild story about how the first Microsoft Office 2007 vulnerability was discovered. Finally, the Light Phone III was announced. Do we finally have a usable minimalist, social media detox-friendly phone option? Will Adrian have to buy one to find out? Several recent trends underscore the increasing importance of Know Your Business (KYB) practices in today's business landscape. One significant trend is the rise in financial crimes, including money laundering, fraud, and terrorist financing. Technological advancements have transformed the way businesses operate, leading to increased digitization, online transactions, and remote customer interactions. While these developments offer numerous benefits, they also create opportunities for criminals to exploit vulnerabilities. Higher-value remote transactions are being performed at higher volumes. In addition, government programs such as the PPP created a need to onboard businesses quickly. This created an influx of fraudulent entities and claims that are now exploiting other channels.
The convergence of these trends highlights the critical role of KYB in safeguarding businesses, ensuring regulatory compliance, and fostering trust among stakeholders in today's dynamic and interconnected business environment. Segment Resources: https://files.scmagazine.com/wp-content/uploads/2024/05/idi-Identiverse-Brochure_05-2024-KYB-PRINT.pdf This segment is sponsored by IDI. Visit https://securityweekly.com/idiidv to learn more about them! From wrestling with integration complexities to managing unexpected glitches, the realities of SSO implementation can produce very different results than what you want. Are users actually using SSO to log in, or are they still using the direct logins they gained before enabling SSO? We explore the reasons why SSO efficacy isn't always what it seems and what you can do about it. This segment is sponsored by Savvy. Visit https://securityweekly.com/savvyidv for a no-cost SaaS-Identity checkup! With identity being the new security perimeter, identity platforms are now an integral part of the core security stack. Inherently, these platforms are complex, and it takes organizations months or even years to realize their business value. And this is going to get worse. The sheer volume and velocity with which new identity types are being added, as well as the sophistication of attacks on identity platforms, require a transformational shift to identity security and governance. Fifty percent gains in operational efficiency and delivering security at scale are the two big initiatives organizations have embarked on. In this session, Vibhuti Sinha, Chief Product Officer of Saviynt, will share his insights and discuss how Saviynt is at the forefront of this transformation. This segment is sponsored by Saviynt. Visit https://securityweekly.com/saviyntidv to learn more about them! Enterprises often struggle with achieving business value in identity programs.
This is typically the result of technology choices that demand a disproportionate amount of effort and focus, and of underestimating the workforce required for organizational change management. With 30 years in the industry and a depth of accumulated knowledge working with large, global customers and vendors, we share how to identify and realize the business value in your organization's identity program. Segment Resources: https://files.scmagazine.com/wp-content/uploads/2024/05/SDG-IAM-Brief-1.pdf https://files.scmagazine.com/wp-content/uploads/2024/05/SDG-IAM-Modernization-Service-Brief-1-1.pdf This segment is sponsored by SDG. Visit https://securityweekly.com/sdgidv to learn more about them! In today's increasingly complex cloud environments, ensuring continuous access to identity services is critical for maintaining business operations and security. Gerry Gebel, VP of Product and Standards at Strata Identity, will discuss the recently announced Identity Continuity product, designed to provide uninterrupted identity services even during outages. Unlike traditional disaster recovery solutions, Identity Continuity autonomously fails over to alternate identity providers, ensuring seamless access management. Join us to explore how Strata Identity is enhancing resilience in the identity management space. Segment Resources: Strata Identity Continuity product page: https://www.strata.io/maverics-platform/identity-continuity/ State of Multi-Cloud Identity report: https://strata.io/wp-content/uploads/2023/08/State-of-multi-cloud-identity-2023_Strata-Identity.pdf Parametrix survey: https://www.reinsurancene.ws/leading-cloud-service-providers-faced-1000-disruptions-in-2022-parametrix/ This segment is sponsored by Strata. Visit https://securityweekly.com/strataidv to learn more about them!
Digital businesses are under attack from account and platform fraud, including Account Takeover (ATO), account opening fraud, and many variations of fraudulent account scams, impersonations, transactions and collusions. Learn best practices to stop fraud with better detection and prevention that can also improve customer satisfaction and operating efficiencies. This segment is sponsored by Verosint. Visit https://securityweekly.com/verosintidv to learn more about them! Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-365
Episode 1508 | Adriel Sanchez and Bill Maier answer caller questions. Show Notes CoreChristianity.com 1. How can I explain the Trinity to my doubting friend? 2. What does "raca" mean in Matthew 5:22? 3. Are "ghost sightings" always evil? 4. Should I continue as a leader in my church if my daughter is rebelling? Today's Offer: How To Keep Your Faith After High School Want to partner with us in our work here at Core Christianity? Consider becoming a member of the Inner Core. View our latest special offers here or call 1-833-THE-CORE (833-843-2673) to request them by phone.
Voice Acting Mastery: Become a Master Voice Actor in the World of Voice Over
Welcome to episode 207 of the Voice Acting Mastery podcast with yours truly, Crispin Freeman! As always, you can listen to the podcast using the player above, or download the mp3 using the link at the bottom of this blog post. The podcast is also available via the iTunes Store online. Just follow this […]
You know as well as I do the world needs more capable men. Inherently, we all want to do right by the people we love and care about, but the question is, are we doing anything about it? It can be a challenge to take care of ourselves amid all the demands of life, but those same demands require that we operate at our fullest potential. My guest today, Jason Khalipa, is a man who knows all too well what it takes to succeed at the highest level and be there for our people. He is a CrossFit Games Champion, Jiu-Jitsu Brown Belt, and an incredibly successful entrepreneur. Today, we talk about balancing professional and personal pursuits, how and why to prioritize fitness, changing culture in your home, work, and community, the power of shared suffering with other men, and why every man needs to train, protect, and provide. SHOW HIGHLIGHTS Creating the fittest community locally How training hard can translate mentally and into real-life circumstances All men should be able to train, protect, and provide An in-depth explanation of EMOM and AMRAP The benefits of CrossFit and strength training Order of Man Merchandise. Pick yours up today! Get your signed copy of Ryan's latest book, The Masculinity Manifesto Want maximum health, wealth, relationships, and abundance in your life? Sign up for our free course, 30 Days to Battle Ready Download the NEW Order of Man Twelve-Week Battle Planner App and maximize your week.