Podcasts about sisyphean

King of Ephyra in Greek mythology

  • 238 podcasts
  • 267 episodes
  • 55m avg duration
  • 1 monthly new episode
  • Latest: Apr 29, 2025
Popularity trend: 2017-2024


Best podcasts about sisyphean

Latest podcast episodes about sisyphean

The Trailhead
How Much Running Is Too Much Running? Asking for a Friend (feat. Mario Fraioli)


Apr 29, 2025 · 55:48


This week on The Trailhead, Zoë and Brendan put coach and writer Mario Fraioli in the hot seat with their burning trail running questions.  From “How slow is too slow?” to “When does running become hiking?” we tackle the big, the small, and the existential dilemmas runners face on (and off) the trail. Come for the laughs, stay for Mario's wisdom, and find out why running might just be a Sisyphean project we willingly signed up for. Check out Mario on the Morning Shakeout.  This episode is brought to you by Arc'teryx Footwear. The Vertex Speed is your new go-to for mountain adventures.   Obsessively designed. Athlete-tested.  Trailhead approved. Available online and in Arc'teryx stores.  Catch the Trailhead live at Meet the Minotaur Skyrace in June!

Bike Talk
#2513 State of the Union, Rebuilding a bike-oriented LA, and Bike storage as key


Apr 2, 2025 · 58:08


Listener Email: John Gibilisco on the Sisyphean undertaking of Omaha bike advocacy (1:40). #Teslatakedown (2:45). The MAGAs are not only cutting all federal funding for bike infrastructure but also reneging on grants to projects like Reconnecting Communities, which would address the harms of highways. With Yonah Freemark, a principal research associate in the Housing and Communities Division at the Urban Institute (4:05). Boston's bike-friendly Mayor Wu is ripping out protected bike lanes to appease the right, according to advocates like Boston Cyclists' Union Communications Manager Mandy Wilkins (15:59). LA architect Neal Payton on how to rebuild Los Angeles to be more bike-oriented after the fires (23:17). Bike storage is essential to more biking in cities, and Shabazz Stuart, co-founder and CEO of Oonee bike parking, wants to scale it up (40:28).

Unstoppable Farce; The Mitch Maloney Story
Chapter 19: Conan O'Brien's Diminishing Returns


Apr 2, 2025 · 48:56


Mitch makes the rounds of all the popular late night chat shows.

Endnotes:

  • "Marlon Bundo" with Jill Twiss, A Day in the Life of Marlon Bundo (Chronicle Books, San Francisco, 2018). An audacious statement on societal inclusivity, employing a metaphorical layering akin to the works of postmodern deconstructionists, a critique of the infantilization of the literary world. Slack Score: 11; Snark Score: 12; Overall FCA ranking: 71
  • Jimmy Fallon, Your Baby's First Word Will Be Dada (Feiwel and Friends, New York, 2015). A deconstruction of phonetics, subverting language into a world where meaning is elusive and language is presented as a fragmented system. The seemingly chaotic string of sounds presented as the child's first words parallels the avant-garde's challenge to linguistic precision. Slack Score: 15; Snark Score: 2; Overall FCA ranking: 43
  • Seth Meyers, I'm Not Scared, You're Scared (Flamingo Books, New York, 2022). A navigation of the disorienting terrain of self-perception; the dialogue oscillates between a strange, almost surreal repetition of thoughts, as though the characters are trapped in a loop of denial and confrontation, much like the cyclical nature of fear itself. Slack Score: 15; Snark Score: 12.5; Overall FCA ranking: 169
  • Stephen Colbert, I Am a Pole (And So Can You!) (Spartina, New York, 2012). In this surrealist work, the reader is asked to engage in an almost Sisyphean act of identification: the protagonist, through sheer assertion, becomes a "Pole." Through a chaotic blend of humor and paradox, the book disrupts the reader's expectations, presenting identity not as a fact but as an ever-shifting, often absurd construct. Slack Score: 13; Snark Score: 14; Overall FCA ranking: 78
  • Jimmy Kimmel, The Serious Goose (Random House, New York, 2019). The progressive, almost hypnotic attempts by the reader (or rather, the characters in the book) to force the goose to smile mirror the struggle between the human desire for emotional expression and the societal pressure to remain "serious." Slack Score: 2; Snark Score: 8; Overall FCA ranking: 36
  • Amber Ruffin, Sidney the Squirrel Doesn't Fit In (Brightstar Tales, Oklahoma City, 2025). The acorn, traditionally a symbol of growth and potential, is something Sidney is unable to "digest" in the same way as his peers. The "tree of conformity" where all the other squirrels gather confines Sidney's sense of self. His inability to fit in is not merely a social issue but a philosophical one: is the need to fit in an authentic desire or an imposition of artificial conformity? Slack Score: 7; Snark Score: 11; Overall FCA ranking: 57
  • Conan O'Brien, Floyd the Flamingo Who Couldn't Stop Dancing (Sprinklewood Press, Modesto, 2026). Floyd's dance becomes a figurative "dance of death," as he can never escape the invisible chains of social approval. O'Brien challenges the reader to reconsider the true cost of "fitting in" and whether perpetual performance is a path to freedom or a cage of self-doubt. Slack Score: -6; Snark Score: 9.5; Overall FCA ranking: 110
  • Jon Stewart, Naked Pictures of Famous People (HarperCollins, New York, 1998). Stewart's manipulation of famous historical and pop culture figures often distances them from their real-world counterparts, forcing readers to confront the notion that fame itself is a form of performance, a simulation of identity rather than an expression of authentic selfhood. Slack Score: 12.5; Snark Score: 15; Overall FCA ranking: 24

The Chad & Cheese Podcast
EUROPE: Regulation, Randstad, and a Tech Rumble


Mar 19, 2025 · 37:58


In this episode of The Chad and Cheese Podcast Does Europe, hosts Chad Sowash, Joel Cheesman, and the suavely named Lieven Van Nieuwenhuyze plunge headfirst into Europe's recruitment jungle with all the grace of a tipsy tourist. They kick things off with a chuckle-fest intro—because who doesn't love a podcast that roasts itself?—before Chad regales us with tales of cultural clashes, fresh from his Champions League pilgrimage (think soccer, not superheroes). The trio then pivots faster than a Google algorithm to dissect JobIndex's legal tango with the tech giant, sparking a spicy debate: is Europe's innovation getting crushed under regulation's heavy boot? Chad, ever the optimist, defends rules like GDPR with a railroad metaphor—because nothing screams “cutting-edge” like 19th-century trains—while Lieven and Joel wrestle with privacy and the Sisyphean task of global data harmony. Next up, they poke at Randstad, the staffing behemoth caught in a midlife crisis: cling to its clipboard-and-handshake roots or swipe right on a tech makeover? The hosts unpack Randstad's tech-phobia, its snoozing innovation fund, and the Job.com flop that looms like a ghost story at a campfire. Meanwhile, plucky startups Alfa and Avery crash the party, shaking up recruitment as Europe's stocks flex harder than the S&P 500—AI flexing its robot muscles all the while. Takeaways? Randstad's scared of the future, global partnerships are the hot new dance, and “follow the money” is the mantra as Europe woos investors. They cap it with a wink to AI's takeover and a cliffhanger: will Randstad's dusty legacy save it, or is it time for a glow-up?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full-stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance, like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend.
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?

  • More natural for complex or multi-step AI workflows.
  • Easy to visualize and debug with mermaid diagrams.
  • Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

  • 00:00:00 Introductions
  • 00:00:24 Origins of Pydantic
  • 00:05:28 Pydantic's AI moment
  • 00:08:05 Why build a new agents framework?
  • 00:10:17 Overview of Pydantic AI
  • 00:12:33 Becoming a believer in graphs
  • 00:24:02 God Model vs Compound AI Systems
  • 00:28:13 Why not build an LLM gateway?
  • 00:31:39 Programmatic testing vs live evals
  • 00:35:51 Using OpenTelemetry for AI traces
  • 00:43:19 Why they don't use Clickhouse
  • 00:48:34 Competing in the observability space
  • 00:50:41 Licensing decisions for Pydantic and LogFire
  • 00:51:48 Building Pydantic.run
  • 00:55:24 Marimo and the future of Jupyter notebooks
  • 00:57:44 London's AI scene

Show Notes

  • Sam Colvin
  • Pydantic
  • Pydantic AI
  • Logfire
  • Pydantic.run
  • Zod
  • E2B
  • Arize
  • Langsmith
  • Marimo
  • Prefect
  • GLA (Google Generative Language API)
  • OpenTelemetry
  • Jason Liu
  • Sebastian Ramirez
  • Bogomil Balkansky
  • Hood Chatham
  • Jeremy Howard
  • Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story?Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around it (exactly when you convert and when you don't) are complicated, but because of that, it's more than just validation.
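The lax-versus-strict behaviour Samuel describes can be sketched in a few lines. This assumes Pydantic v2 is installed; the model and field names are illustrative, not from the episode:

```python
# Sketch of default coercion vs strict mode in Pydantic v2 (illustrative names).
from pydantic import BaseModel, ConfigDict, ValidationError

class LaxEvent(BaseModel):
    count: int  # the type hint doubles as the schema

class StrictEvent(BaseModel):
    model_config = ConfigDict(strict=True)  # disable coercion
    count: int

# Lax (default) mode: the string "123" is coerced to the int 123.
assert LaxEvent(count="123").count == 123

# Strict mode: the same input is rejected instead of coerced.
try:
    StrictEvent(count="123")
except ValidationError:
    print("strict mode rejected the string")
```

The coercion-by-default behaviour is exactly the "sensible conversions" tension he mentions: convenient for messy input, surprising if you expected validation only.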
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output council in open source that people were talking about, or was it just random?Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along and, like, the first I ever heard of him was over a weekend I got like 50 emails from him, or 50 like emails, as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it kind of can be one source of truth for structured outputs and tools.Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land.
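The "one source of truth" flow Samuel describes (the same type hints drive both validation and the JSON schema you hand to an LLM for structured outputs or tools) looks roughly like this in Pydantic v2. The model and fields here are made up for illustration:

```python
# One set of type hints, two outputs: runtime validation and a JSON schema.
from pydantic import BaseModel

class Tool(BaseModel):
    name: str
    timeout: float = 30.0  # optional, so it won't appear in "required"

schema = Tool.model_json_schema()

assert schema["type"] == "object"
assert schema["properties"]["name"]["type"] == "string"
assert schema["required"] == ["name"]
```

That generated dict is the shape OpenAI-style structured outputs and tool definitions consume, which is why a single Pydantic model can serve as both the contract and the validator.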
Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner.Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of performance, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests, was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies.
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. 
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
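Samuel's working definition of an agent (a container holding a system prompt, some tools, and a structured return type) can be mimicked with a plain dataclass. This is a toy sketch of the idea, not Pydantic AI's actual class, and all names are made up:

```python
# Toy "agent as container": prompt + tools + expected result type.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    system_prompt: str
    tools: dict[str, Callable] = field(default_factory=dict)
    result_type: type = str

    def tool(self, fn):
        # Register a plain function as a tool, keyed by its name.
        self.tools[fn.__name__] = fn
        return fn

agent = Agent(system_prompt="You are a trail-running coach.", result_type=dict)

@agent.tool
def weekly_mileage(runs: list[float]) -> float:
    return sum(runs)

assert "weekly_mileage" in agent.tools
assert agent.tools["weekly_mileage"]([10.0, 8.5]) == 18.5
```

The point of the container shape is that it covers "the vast majority of cases" without any graph machinery; the graph only appears when workflows outgrow a single prompt-plus-tools bundle.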
But then we have the problem that by default, they're not type safe, because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some... not all the graph libraries are AI specific. So there's a graph library called, but it allows, it does like a basic runtime type checking, ironically using Pydantic to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using normally data classes to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have to interact with gen AI, right? It's going to be like web: there'll no longer be like a web department in a company; it's just like all the developers are building for web, building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning.
That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah. I think the question always is I have gone back and forth. I used to work at Temporal where we would actually spend a lot of time complaining about graph based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. 
And that's what we do with the quote unquote magic that the graph construction does.Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other weird, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo. Project or your like proof of concept, that's fine because you can just say, and now we call this function. But when you're building when you're in real in real life, that doesn't work. And now how do we manage that concept to basically be able to start somewhere else in the in our code? Well, this graph implementation makes it incredibly easy because you just pass the node that is the start point for carrying on the graph and it continues to run. 
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing for someday?Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there.Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into sort of the more like orchestrated things like Airflow, Prefect, Dagster, those guys.Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right yet, at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think I think just generally also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say like yours, you know, as a library, you don't have that much control of it over the infrastructure. I do like the idea that each new agents or whatever or unit of work, whatever you call that should spin up in this sort of isolated boundaries. Whereas yours, I think around everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now. Right. As in theory, you're just like as long as you can serialize the calls to the next node, you just have to all of the different containers basically have to have the same the same code. I mean, I'm super excited about Cloudflare workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now because I'm super excited about that as a like compute level for some of this stuff where exactly what you're saying, basically. You can run everything as an individual. Like worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. 
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line.
Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will get there soon.
Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python fully supported there? I actually wasn't fully aware of what the status of that thing is.
Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.
Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare Workers is.
Samuel [00:20:36]: Yes, that's exactly it. So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. And you're doing exactly that, right? You're using Rust to compile to WebAssembly and then you're calling that shared library from Python.
And it's unbelievably complicated, but it works. Okay.
Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI Swarm would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?
Samuel [00:21:21]: Yeah, roughly. Okay.
Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well.
Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically people asking, how can we get the same feeling that they were getting from Swarm, that led us to go and implement graphs. Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that, and that's what led us to get to graphs. Yeah.
Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service, and also a kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.
Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.
Swyx [00:22:26]: Tell me if you're not.
Yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.
Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's only been a couple of weeks. And of course, part of the point is that because they're relatively unopinionated about what you can go and do with them, you can go and do lots of things with them, but they don't have the structure to go and have, like, specific names, as much as perhaps some other systems do. I think what our agents are, which have a name, and I can't remember what it is, but this basically system of, like, decide what tool to call, go back to the center, decide what tool to call, go back to the center and then exit, is one form of graph, which, as I say, like our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.
Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.
Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
But yeah.
Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of it or care. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.
Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.
Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control, and maybe more observability, if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.
Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they don't, even if you have the observability through LogFire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But what you need to be able to do is effectively iterate through these runs so that you can have your own control flow, where you're like, okay, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to break out of it. But yeah, basically, it's the same point, which is like, if you have too big a unit of work, to some extent, whether or not it involves gen AI, but obviously it's particularly problematic in gen AI, you only find out afterwards, when you've spent quite a lot of time and or money, when it's gone off and done the wrong thing.
Swyx [00:25:39]: Oh, let me drop one thing on this.
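The "call next in a loop" escape hatch described above can be sketched like this: instead of running the whole graph and relying on a blunt error at some hard limit, the caller iterates step by step and applies its own control flow. The names here are invented for illustration, not the pydantic-ai API:

```python
# Sketch of iterating a run step by step so the caller can bail out with
# the full history in hand, rather than erroring on a hard limit.
# Invented names, not the pydantic-ai API.

def noisy_agent_steps():
    # stand-in for a graph/agent run that would otherwise go on (and spend
    # time and money) indefinitely
    step = 0
    while True:
        yield f"tool-call-{step}"
        step += 1

def run_with_budget(steps, max_steps: int) -> list:
    history = []
    for step in steps:
        history.append(step)
        if len(history) >= max_steps:
            # our own decision to stop: we can inspect history, log it,
            # or resume later, instead of just raising an error
            break
    return history

print(run_with_budget(noisy_agent_steps(), max_steps=3))
# ['tool-call-0', 'tool-call-1', 'tool-call-2']
```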
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs, and you could manually orchestrate a nice little graph that does like reflect, think about whether you need more inference-time compute, you know, that's the hot term now, and then think again and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.
Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it wasn't exponential. But my main point was, if models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to, like, structure what they go and do and constrain the routes which they take.
Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.
Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that like some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.
Swyx [00:28:28]: I agree with that.
Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: we run, on every commit, at least every commit to main, tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, yeah, it's a little bit better now, but GLA was failing every single run; one of its tests would fail. And we, I think we might even have commented that one out at the moment. So like all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.
Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and then, you know, this can be agree to disagree, this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.
Samuel [00:29:38]: Yeah, it does.
Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?
Samuel [00:29:49]: I think if a company who are well known, who are respected by everyone, had come along and done this at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI. OpenAI's API is the one to do. So DeepSeek support that. Grok support that. Ollama also does it.
I mean, if there is that library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.
Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.
Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.
Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.
Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.
Alessio [00:31:39]: And I guess the other side of, you know, routing models and picking models is evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you have this TestModel, where, just through Python, you try and figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?
Samuel [00:32:18]: On those two, I think what you see is what you get.
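The idea behind the test-model support discussed above can be sketched as unit-testing an agent against a stubbed model, so no network call is ever made. The agent and model shapes here are invented for illustration, not the actual TestModel/FunctionModel API:

```python
# Sketch of unit-testing an agent against a deterministic stand-in model,
# in the spirit of the TestModel / FunctionModel idea discussed above.
# The shapes here are invented for illustration; no network calls are made.

def agent_run(prompt: str, model) -> str:
    # "agent": call the model, then validate the structured reply
    reply = model(prompt)
    if not isinstance(reply, dict) or "city" not in reply:
        raise ValueError("model reply failed validation")
    return reply["city"]

def function_model(prompt: str) -> dict:
    # deterministic stand-in for an LLM: outputs customized per prompt,
    # so tests are fast and repeatable
    if "capital of the UK" in prompt:
        return {"city": "London"}
    return {"city": "unknown"}

print(agent_run("what is the capital of the UK?", function_model))
# London
```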
On the evals, I think watch this space. I think it's something that, again, I was somewhat cynical about for some time. I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.
Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.
Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.
Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production evals. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And yeah, as people think about evals, even, what are the right things to measure? What are like the right number of samples that you need to actually start making decisions?
Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact, like, how many examples do you need?
For example, that's a much harder question to answer, because it's deep within how models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, and we allow you to write SQL directly against your data, and we're trying to build the, like, powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate. Being able to write their own SQL connected to the API, and effectively query the data like it's a database, with SQL, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of, like, testing what's possible by basically writing SQL directly against LogFire as any user could. I think the other really interesting bit that's going on in observability is OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project. A lot of it's still being added at the moment. But basically the idea is that they unify how both SDKs and/or agent frameworks send observability data to any OpenTelemetry endpoint. And so having that unification allows us to go and, like, basically compare different libraries, compare different models much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is basically, I suspect, Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
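The "write SQL directly against your observability data" idea described above can be sketched with an in-memory SQLite table as a stand-in. The table and column names here are invented; LogFire's real schema will differ:

```python
import sqlite3

# Sketch of querying observability data with plain SQL, using an in-memory
# SQLite table as a stand-in for the real span store. Table and column
# names are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spans (name TEXT, duration_ms REAL, model TEXT)")
conn.executemany(
    "INSERT INTO spans VALUES (?, ?, ?)",
    [
        ("llm-call", 820.0, "gpt-4o"),
        ("llm-call", 1430.0, "gpt-4o"),
        ("db-query", 12.5, None),
    ],
)

# the kind of question a user might innovate on themselves:
# average LLM latency per model
row = conn.execute(
    "SELECT model, AVG(duration_ms) FROM spans "
    "WHERE model IS NOT NULL GROUP BY model"
).fetchone()
print(row)
# ('gpt-4o', 1125.0)
```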
With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're, like, ploughing their own furrow. And, you know, they're even further away from standardization.
Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into the AI workflows? There's kind of like the question of: is, you know, a trace and a span an LLM call? Is it the agent? Is it kind of like the broader thing you're tracking? How should people think about it?
Samuel [00:36:06]: Yeah, so they have a PR, that I think may have now been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that that's actually by any means the common use case. But like, I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call, what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet implemented, is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability, traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider. But none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a, like, mistake. Whereas in GenAI, a lot of sensitive data is going to be sent. And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosting for this observability stuff with GenAI.
Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have like the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like self-hosting for the platform, basically. Yeah. Yeah.
Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see password as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.
Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance is depending on a third party. You know, like if you're looking at Datadog data, usually it's your app that is driving the latency and like the memory usage and all of that.
Here you're going to have spans that maybe take a long time to perform because the GLA API is not working, or because OpenAI is kind of like overwhelmed. Do you do anything there, since the provider is almost like the same across customers? You know, like, are you trying to surface these things for people and say, hey, this was like a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?
Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a trace, as well as, sorry, at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take like 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're, like, running a batch job that might take 30 minutes, that latency of not being able to see the span is like crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.
Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's like the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kind of tools? Is everybody going to have to build? Why does everybody want to build?
They want to build their own open source observability thing to then sell?
Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTel, and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLLMetry, like, interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. I suppose the question is what happens to the agent frameworks. What data you basically need at the framework level to get the context is kind of unclear. I don't think we know the answer yet. But I mean, I was on the OpenTelemetry call last week talking about GenAI, I guess this is kind of semi-public. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not natively implemented. And obviously they're having quite a tough time. And I was realizing, I hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.
Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.
Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
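The GenAI semantic-convention attributes under discussion can be sketched as the attribute dict you would set on an OTel span for a single LLM call. The exact keys are still churning in the spec, so treat these names as illustrative rather than authoritative:

```python
# Sketch of GenAI semantic-convention span attributes for one LLM call.
# Key names follow the convention as discussed, but the spec is still in
# flux, so treat the exact keys as illustrative.

span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.request.top_p": 1.0,
    "gen_ai.usage.prompt_tokens": 120,
    "gen_ai.usage.completion_tokens": 48,
}

# because the keys are agreed on, any backend can compute the same rollups,
# e.g. total tokens for this call
total_tokens = (
    span_attributes["gen_ai.usage.prompt_tokens"]
    + span_attributes["gen_ai.usage.completion_tokens"]
)
print(total_tokens)
# 168
```

The point of the convention is exactly this: with shared key names, different SDKs and agent frameworks become comparable in any OpenTelemetry backend.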
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation of. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.
Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.
Samuel [00:42:54]: I mean, that's what's neat about OTel: you can always go and send another attribute and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.
Swyx [00:43:14]: Or you fall behind because you've got other things going on.
Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.
Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like, any learnings building LogFire?
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?
Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But like, we've got to the right one in the end. I think we could have realized that Timescale wasn't right, and I think ClickHouse too. They both taught us a lot, and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.
Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double-click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect, because, you know, Timescale is like an extension on top of Postgres, not super meant for like high-volume logging. But like, yeah, tell us those decisions.
Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built their proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything had to be a map, and maps are a pain to try and do, like, looking up JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make them top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. Like, by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. And I complained about that a lot, and then they changed it to raise an error and just say you have to have the same unit. Then I complained a bit more, and I think, as I understand it, now they convert between units. But stuff like that, when a lot of what you're doing is comparing the durations of spans, was really painful. Also things like you can't subtract two datetimes to get an interval; you have to use the date_sub function. But the fundamental thing is, because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your developers are going to write the SQL, and once it's written and it's working, you don't mind too much. So I think that's like one of the fundamental differences. The other problem that I have with both ClickHouse and Timescale is that the ultimate architecture, the, like, Snowflake architecture of binary data in object store queried with some kind of cache from nearby, they both have it, but it's closed source and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, we would end up, like, you know, they would want to be taking their 80% margin, and that would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us, as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it.
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing like string contains. And it's just Rust code, and I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do, it's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that, if you were trying to do that in Postgres or in ClickHouse... I mean, ClickHouse would be easier because it's C++, relatively modern C++. But as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know? But I think you obviously have an open source first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were probably better as a startup, a better startup and faster moving and just like headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff.
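The string "contains" kernel mentioned above can be pictured as a tight loop over a column of strings producing a boolean mask. A toy Python version follows; DataFusion's real kernel is vectorized Rust operating on Arrow arrays, so this only shows the shape of the operation, not its implementation.

```python
def contains_kernel(column, needle):
    """Evaluate `needle in value` for every row of a string column,
    returning a boolean mask. None propagates as None, like SQL NULL."""
    return [None if value is None else needle in value for value in column]

# Hypothetical span names, just to exercise the kernel:
spans = ["GET /api/users", "POST /api/login", None, "GET /health"]
mask = contains_kernel(spans, "/api/")
# mask -> [True, True, None, False]
```

Making this fast is the whole game in a columnar engine: the per-row predicate is trivial, so the wins come from avoiding per-row overhead, which is exactly the kind of hot loop one can profile and rewrite when the engine is an open toolbox.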
But, you know, my last question on LogFire is really, you know, at some point you run out of sort of community goodwill just because, like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. Okay, then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you going to really spike here? What's the differentiator here?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arize and LangSmith, you've got to do first class... you've got to do general purpose observability with first class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much like scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general purpose observability platform with first class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done, if you search Datadog logging Python and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But like there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where only the, like, one-year-old version of it is open source. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. So Pydantic and Pydantic AI are MIT licensed and like properly open source, and then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the like weird pushback the community gives when they take something that's closed source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is, like, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company... we're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is like openly for profit, right? As in, we're not claiming otherwise. We're not sort of trying to walk a line of it's open source but really we want to make it hard to deploy so you probably want to pay us. We're trying to be straight that it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first one I saw, this new, I don't know if it's like a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is again completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or like what the spend is. The other thing we wanted to b

Press B To Cancel
Press B 237: Sisyphean Games 2025

Press B To Cancel

Play Episode Listen Later Dec 30, 2024 106:51


Sisyphean - adjective: denoting or relating to a task that can never be completed. See: Circus Charlie. In Press B tradition, the crew reveals whether they conquered their gaming goals for 2024; then we pick new tasks for 2025. Will they be goals worthy of Sisyphus, and risk the almighty Wheel of Pain? Press B To Cancel now on YouTube! For updates and more episodes please visit our website www.pressbtocancel.com, or find us on Twitter @pressbtocancel. Press B is a member of the SuperPod Network, a gaming collective of fellow podcasters and shows. Special thanks to The Last Ancient on SoundCloud for our podcast theme. Find out more at http://pressbtocancel.com

Gravel Union Talks
Bike packing across Europe with Karen Ekman: inspiration and tips

Gravel Union Talks

Play Episode Listen Later Dec 26, 2024 70:15


Gravel Union Talks is a podcast series full of inspiring stories, news and events from the world of gravel biking. Each month hosts Carlo and Olly will be chatting with guests who are passionate about riding off the beaten track… adventure riding, bike packing and gravel racing.

In this episode:
Karen Ekman: Scandinavian bike packer and great storyteller
Gravel Union Talks podcast with hosts Carlo van Nistelrooy and Olly Townsend.
GU's editor-in-chief Olly on last month's most popular articles on the Gravel Union platform.

Want to bring in ideas for topics or guests? Mail Olly at info@gravelunion.cc
Check out our platform and socials: www.gravelunion.cc Insta: @gravel_union Facebook: https://www.facebook.com/GravelUnion/ Komoot: https://www.komoot.nl/user/1080024447202
Thanks for listening. Please share a review, like and share! Don't forget: join us now

Shownotes:
On Karen Ekman: At this time of year, when for many of us it's cold and grey outside, having something inspirational as a future goal can be the best way of staying positive. Karen epitomises this attitude and we think will inspire quite a few of our listeners to think big in 2025. Karen Ekman is a Scandinavian bikepacker and this summer she rode 7000 km across Europe, taking in 13 different countries and climbing a mind-blowing total of 124,000 metres along the way. We're going to be chatting with Karen about her amazing adventure. Hopefully her inspiration, tips and guidance will inspire you to do your first or perhaps your most challenging bike packing trip in 2025.
Gravel Inspiration - Finding your way in winter: Depending on where you live, winter might mean rain, mud, slippery roots and the Sisyphean task of bike and kit washing/maintenance. But it can also offer dry, bright conditions and hardpacked trails. We can't do much about the weather, but we can (hopefully) help you to find the best possible winter gravel riding routes. Read on to find out more.
Break for the Border - A gravel odyssey in Patagonia, Arizona: Tim Wild was offered the chance to go and ride in Patagonia, Arizona earlier this year and discovered majestic scenery, incredible gravel riding and an intriguing local community. Interested in finding out more about this amazing sounding destination? Then read on….
Armchair Adventure - Wales never fails - A tale of BearBones200 2024: There's a fine line between bravery and insanity, as the saying goes. We suspect that Valerio Stuart is now pretty adept at finding this line (and then going well beyond it too). After a hideous experience at the 2023 BearBones200, you would have thought he would have learnt his lesson, but no, he headed back for more punishment this year. Fortunately for us, he survived to tell the tale, and what a great tale it is.

Sneople At The Movies
Giles-Coded Giles Girl

Sneople At The Movies

Play Episode Listen Later Nov 1, 2024 105:32


Happy Halloween, beloved listeners! This week, we've got a topic that is both very seasonal and has been a long time coming - vampires! After getting thoroughly distracted for like ten whole minutes by National Mole Day, the Sneople crack open the massive topic of vampire media with a project that's been haunting Matty for years - Dracula Watch! They don't linger too long on that Sisyphean task, however, because there is simply too much to cover: Interview with the Vampire, Being Human, Twilight, and (obviously) Buffy the Vampire Slayer, to name just a few. John Carpenter's extremely famous and beloved film Vampires may or may not come up too. Unclear. Also, genuinely, if you have a rec for a good Dracula adaptation that didn't come up in this episode, please email us at sneopleathemovies@gmail.com. At least one of us is dying out here.

Geek Critique Pod
The Magicians - S5e3

Geek Critique Pod

Play Episode Listen Later Oct 8, 2024 64:25


In the midst of The Mountain of Ghosts dealing with some serious matters, Britt and Chris are still laughing about Yellow Ferret Month, double-werewolves, and of course the freshly baked cookies that are necessary for brainstorming the overthrow of a 300-year-long dictatorship. They also discuss pilgrimages and the Sisyphean nature of grief before sitting with the POVs of backpacker Eliot and centurion contestant Margo. Please tell a geeky friend about us and leave a review on your podcast app! If you really enjoy our content, become one of our amazing patrons to get more of it for just $1 per month here: https://www.patreon.com/geekbetweenthelines Every dollar helps keep the podcast going! You can also buy us a ko-fi for one-time support here: https://ko-fi.com/geekbetweenthelines Please follow us on social media, too: Instagram : https://www.instagram.com/geekbetweenthelines Pinterest : https://www.pinterest.com/geekbetweenthelines Facebook : https://www.facebook.com/geekbetweenthelines Twitter : https://twitter.com/geekbetween Website: https://geekbetweenthelines.wixsite.com/podcast Logo artist: https://www.lacelit.com

Autopod Decepticast: A Weekly Podcast Delivering a Minute-By-Minute Breakdown of the 1986 Transformers Movie.

A Sisyphean task!!! Guestisode with TFU's Anthony Brucale!!! Falcon Trivia: Hawk-Too-Ah, Sit on that Thang!! Stasis pod!! Blackarachnia's cyber-venom!! Cheetor's vision: The Spark!!! Help the protoform!! Airazor!! In The Real World! Iconic Moments!! Script Deviations!!! What's new, Pussycat?!?!
FALCON QUIZ - 34:00
COCKTAIL - 47:00
REVIEW - 1:00:00
REAL WORLD - 1:31:40
SCRIPT DEVIATIONS - 1:41:00
RATE THE SCHEME - 1:45:00

Bulletproof Screenplay® Podcast
BPS 387: How to Protect Your Film from Online Piracy with Evan Zeisel

Bulletproof Screenplay® Podcast

Play Episode Listen Later Oct 3, 2024 62:11


Movie piracy has hurt the pockets of every filmmaker, but indie filmmakers are often hit hardest. Today on the show we have Evan Zeisel, who has been systematically tracking down piracy sites for years. Ten years ago, Evan made his first feature film and landed a distributor. Within a week of being on its first VOD site, his film was already popping up on numerous piracy sites. He quickly learned, through rigorous research, how to combat piracy and copyright infringement through the Digital Millennium Copyright Act of 1998. Basically, the DMCA protects copyright holders from piracy and infringement, while also protecting the First Amendment rights of users who, unaware of the illegality, use copyrighted content online for commercial purposes. How do you counter online piracy, and what is the DMCA? The Digital Millennium Copyright Act (DMCA) is a U.S. law enacted in 1998 in an effort to combat piracy while also protecting freedom of speech. The pitfall of the DMCA is that in order to "protect" free speech, it holds that content put online is not considered copyright infringement unless the copyright holder, or a representative thereof, directly informs the site or the individual who posted the content that the content is indeed copyrighted. After being informed, the site has "a reasonable amount of time" (deemed 48-72 hours by de facto enforcement in the courts) to remove the content before hosting it is considered an illegal act.
What this means is that a content creator needs to find every occurrence of infringement on the Internet, then find the site's contact information, or its Web host/ISP's contact information, and send a very specifically formatted letter (as defined by the DMCA) to that contact, before the content will ever be required to be taken down. Once the notice is received, if the content is not removed, the content creator can use the violation notice sent, plus a screenshot of the piracy, as a basis for legal action. The issue is, attorneys cost money and there is an endless number of sites pirating content, so for the standard copyright holder, taking legal action would be a Sisyphean act, costing them endless time and money only to run up against pirates who hide behind fake email addresses and false contact information. A lot has changed in the computer and Internet world in the 20+ years since the DMCA was enacted. In this interview, Evan dissects the technicalities of reclaiming copyright: contacting violators, and the language, or must-mentions, required by the act. Evan also tackles the mechanical challenge of tracking down his content on piracy sites through an automated system, Copyright Slap, created with help from a friend with a coding background, to efficiently contact these sites and have content taken down in seconds. To date, they have identified 1,946 sites and taken down 6,212. Every filmmaker, big and small, deals with online piracy. Hopefully, this episode can help. Enjoy my conversation with Evan Zeisel. Become a supporter of this podcast: https://www.spreaker.com/podcast/bulletproof-screenwriting-podcast--2881148/support.
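The "very specifically formatted letter" described above is why takedowns are so automatable: a valid notice is mostly a template with a handful of required elements (identification of the work, location of the infringing copy, contact details, a good-faith statement, an accuracy statement, and a signature, per 17 U.S.C. § 512(c)(3)). The sketch below is a hypothetical illustration of that kind of generator; the field names and wording are assumptions, not Evan's actual Copyright Slap code and not legal advice.

```python
# Hypothetical DMCA notice generator: fills a template and refuses to
# emit a notice with any required element missing.

NOTICE_TEMPLATE = """\
To: {host_contact}

DMCA Takedown Notice

1. Copyrighted work: {work_title}
2. Infringing material located at: {infringing_url}
3. Contact: {owner_name}, {owner_email}
4. I have a good faith belief that the use described above is not
   authorized by the copyright owner, its agent, or the law.
5. The information in this notice is accurate, and under penalty of
   perjury, I am authorized to act on behalf of the copyright owner.
6. Signature: /{owner_name}/
"""

def build_notice(host_contact, work_title, infringing_url, owner_name, owner_email):
    """Fill the template; raise if any required element is blank."""
    fields = dict(host_contact=host_contact, work_title=work_title,
                  infringing_url=infringing_url, owner_name=owner_name,
                  owner_email=owner_email)
    missing = [k for k, v in fields.items() if not v]
    if missing:
        raise ValueError(f"notice would be invalid, missing: {missing}")
    return NOTICE_TEMPLATE.format(**fields)
```

Once the notice text is mechanical like this, the remaining work is the part the episode focuses on: finding the infringing URLs and the right contact address for each host.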

What Fresh Hell: Laughing in the Face of Motherhood | Parenting Tips From Funny Moms

Amy's book Happy to Help: Adventures of a People Pleaser is coming in January 2025. Pre-order your copy! Parenting is a series of everyday battles. But which ones are truly worth fighting? In this episode we discuss everything from the Sisyphean battle against the overuse of the word 'like', to a cleaned-up playroom, to the unending struggle of getting tweens to wear pants, and ask which of these battles might be 1) winnable and 2) worth the work. Some things really do matter for the long-term success of our kids (and the peace of our households); some might be worth letting go in order to let our kids have that win once in a while. In this episode, we unpack how to tell the difference. We love the sponsors that make this show possible! You can always find all the special deals and codes for all our current sponsors on our website: https://www.whatfreshhellpodcast.com/p/promo-codes/ Learn more about your ad choices. Visit megaphone.fm/adchoices

AP Taylor Swift
E54: One-Year Anniversary of AP Taylor Swift

AP Taylor Swift

Play Episode Listen Later Sep 25, 2024 48:11


Can we always be this close? We're celebrating our first anniversary! This week we're reminiscing about our first year of this podcast by talking about some of our favorite podcast moments in Year 1, and the songs we surprisingly haven't covered yet. And we're responding to listener requests, diving into specific lyrics requested by our dear listeners.

Mentioned in this episode:
Sisyphean task
Bookshop.org/shop/APTS
The Daily, "The Year of Taylor Swift"
E42: Ecocriticism + TTPD
Animal Theory Substack
E9: Fall Songs (aka Cornelia Street Moment)
E24: Deep Dive - Right Where You Left Me
E51: All Too Well (10 Minute Version) - Three Ways
All Too Well (10 Minute Version) Short Film
"Reformed Rake" trope
E33: Animal Theory
You Know How to Ball, I Know Aristotle on TikTok

Subscribe to get new episode updates: aptaylorswift.substack.com/subscribe

Episode Highlights:
[01:38] Songs we surprisingly haven't covered yet
[08:46] "My Tears Ricochet" bridge
[19:20] "Robin"
[22:51] "Right Where You Left Me" pre-chorus
[26:45] "When your Brooklyn broke my skin and bones" - All Too Well (10 Minute Version)
[32:36] "Do you miss the rogue who coaxed you into paradise and left you there" - Coney Island
[38:28] "We can't make any promises now, can we, babe?" - Delicate
[44:28] Season 2 sneak peek

Follow us on social! TikTok → tiktok.com/@APTaylorSwift Instagram → instagram.com/APTaylorSwift YouTube → youtube.com/@APTaylorSwift Link Tree → linktr.ee/aptaylorswift Bookshop.org → bookshop.org/shop/apts Libro.fm → tinyurl.com/aptslibro

Affiliate Codes: Krowned Krystals - krownedkrystals.com use code APTS at checkout for 10% off! Libro.fm - Looking for an audiobook? Check out our Libro.fm playlist and use code APTS30 for 30% off books found here tinyurl.com/aptslibro

This podcast is neither related to nor endorsed by Taylor Swift, her companies, or record labels. All opinions are our own. Intro music produced by Scott Zadig aka Scotty Z.

Paul's Security Weekly
Do phishing tests do more harm than good? & Speed, Flexibility, and AI - Wolfgang Goerlich, Whitney Young - ESW #376

Paul's Security Weekly

Play Episode Listen Later Sep 20, 2024 112:31


A month ago, my friend Wolfgang Goerlich posted a hot take on LinkedIn that is less and less of a hot take these days. He posted, "our industry needs to kill the phish test", and I knew we needed to have a chat, ideally captured here on the podcast. I've been on the fence when it comes to phishing simulation, partly because I used to phish people as a penetration tester. It always succeeded, and always would succeed, as long as it's part of someone's job to open emails and read them. Did that make phishing simulation a Sisyphean task? Was there any value in making some of the employees more 'phishing resistant'? And who is in charge of these simulations? Who looks at a fake end-of-quarter bonus email and says, "yeah, that's cool, send that out." Segment Resources: Phishing in Organizations: Findings from a Large-Scale and Long-Term Study The GoDaddy Phishing Awareness Test The Chicago Tribune - How a Phishing Awareness Test Went Very Wrong University of California Santa Cruz - This uni thought it would be a good idea to do a phishing test with a fake Ebola scare In this episode, we explore some compelling reasons for transitioning from traditional SOAR tools to next-generation SOAR platforms. Discover how workflow automation and orchestration offer unparalleled speed and flexibility, allowing organizations to stay ahead of evolving security threats. We also delve into how advancements in AI are driving this shift, making new platforms more adaptable and responsive to current market demands. Segment Resources: Learn more about using Tines for Security Peruse the Tines library of 'Stories' built by Tines partners and customers Learn how to integrate AI tooling into Tines stories and workflows This segment is sponsored by Tines. Visit https://securityweekly.com/tines to learn more about them! This week, the cybersecurity industry's most basic assumptions come under scrutiny.
Following up our conversation with Wolfgang Goerlich, where he questions the value of phishing simulations, we discuss essays that call into question: the maturity of the industry; the supposed "talent gap", with millions of open jobs despite complaints that this industry is difficult to break into; and cybersecurity's 'delusion' problem. Also some whoopsies: researchers accidentally take over a TLD; when nearly all your customers make the same insecure configuration mistakes, maybe it's not all their fault, ServiceNow finds out; Fortinet has a breach, but is it really accurate to call it that? Some Coalfire pentesters who were arrested in Iowa five years ago share some unheard details about the event, and how it is still impacting their lives on a daily basis five years later. The news this week isn't all negative though! We discuss an insightful essay on detection engineering for managers from Ryan McGeehan that is a must-read for secops managers. Finally, we discuss a fun and excellent writeup on what happens when you ignore the integrity of your data at the beginning of a 20-year research project that resulted in several bestselling books and a Netflix series! Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-376

Enterprise Security Weekly (Audio)
Do phishing tests do more harm than good? & Speed, Flexibility, and AI - Wolfgang Goerlich, Whitney Young - ESW #376

Enterprise Security Weekly (Audio)

Play Episode Listen Later Sep 20, 2024 112:31


A month ago, my friend Wolfgang Goerlich posted a hot take on LinkedIn that is less and less of a hot take these days. He posted, "our industry needs to kill the phish test", and I knew we needed to have a chat, ideally captured here on the podcast. I've been on the fence when it comes to phishing simulation, partly because I used to phish people as a penetration tester. It always succeeded, and always would succeed, as long as it's part of someone's job to open emails and read them. Did that make phishing simulation a Sisyphean task? Was there any value in making some of the employees more 'phishing resistant'? And who is in charge of these simulations? Who looks at a fake end-of-quarter bonus email and says, "yeah, that's cool, send that out." Segment Resources: Phishing in Organizations: Findings from a Large-Scale and Long-Term Study The GoDaddy Phishing Awareness Test The Chicago Tribune - How a Phishing Awareness Test Went Very Wrong University of California Santa Cruz - This uni thought it would be a good idea to do a phishing test with a fake Ebola scare In this episode, we explore some compelling reasons for transitioning from traditional SOAR tools to next-generation SOAR platforms. Discover how workflow automation and orchestration offer unparalleled speed and flexibility, allowing organizations to stay ahead of evolving security threats. We also delve into how advancements in AI are driving this shift, making new platforms more adaptable and responsive to current market demands. Segment Resources: Learn more about using Tines for Security Peruse the Tines library of 'Stories' built by Tines partners and customers Learn how to integrate AI tooling into Tines stories and workflows This segment is sponsored by Tines. Visit https://securityweekly.com/tines to learn more about them! This week, the cybersecurity industry's most basic assumptions come under scrutiny.
Following up our conversation with Wolfgang Goerlich, where he questions the value of phishing simulations, we discuss essays that call into question: the maturity of the industry; the supposed "talent gap", with millions of open jobs despite complaints that this industry is difficult to break into; and cybersecurity's 'delusion' problem. Also some whoopsies: researchers accidentally take over a TLD; when nearly all your customers make the same insecure configuration mistakes, maybe it's not all their fault, ServiceNow finds out; Fortinet has a breach, but is it really accurate to call it that? Some Coalfire pentesters who were arrested in Iowa five years ago share some unheard details about the event, and how it is still impacting their lives on a daily basis five years later. The news this week isn't all negative though! We discuss an insightful essay on detection engineering for managers from Ryan McGeehan that is a must-read for secops managers. Finally, we discuss a fun and excellent writeup on what happens when you ignore the integrity of your data at the beginning of a 20-year research project that resulted in several bestselling books and a Netflix series! Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-376

Paul's Security Weekly TV
Do phishing tests do more harm than good? - Wolfgang Goerlich - ESW #376

Paul's Security Weekly TV

Play Episode Listen Later Sep 20, 2024 34:21


A month ago, my friend Wolfgang Goerlich posted a hot take on LinkedIn that is less and less of a hot take these days. He posted, "our industry needs to kill the phish test", and I knew we needed to have a chat, ideally captured here on the podcast. I've been on the fence when it comes to phishing simulation, partly because I used to phish people as a penetration tester. It always succeeded, and always would succeed, as long as it's part of someone's job to open emails and read them. Did that make phishing simulation a Sisyphean task? Was there any value in making some of the employees more 'phishing resistant'? And who is in charge of these simulations? Who looks at a fake end-of-quarter bonus email and says, "yeah, that's cool, send that out." Segment Resources: Phishing in Organizations: Findings from a Large-Scale and Long-Term Study The GoDaddy Phishing Awareness Test The Chicago Tribune - How a Phishing Awareness Test Went Very Wrong University of California Santa Cruz - This uni thought it would be a good idea to do a phishing test with a fake Ebola scare Show Notes: https://securityweekly.com/esw-376

Enterprise Security Weekly (Video)
Do phishing tests do more harm than good? - Wolfgang Goerlich - ESW #376

Enterprise Security Weekly (Video)

Play Episode Listen Later Sep 20, 2024 34:21


A month ago, my friend Wolfgang Goerlich posted a hot take on LinkedIn that is less and less of a hot take these days. He posted, "our industry needs to kill the phish test", and I knew we needed to have a chat, ideally captured here on the podcast. I've been on the fence when it comes to phishing simulation, partly because I used to phish people as a penetration tester. It always succeeded, and always would succeed, as long as it's part of someone's job to open emails and read them. Did that make phishing simulation a Sisyphean task? Was there any value in making some of the employees more 'phishing resistant'? And who is in charge of these simulations? Who looks at a fake end-of-quarter bonus email and says, "yeah, that's cool, send that out." Segment Resources: Phishing in Organizations: Findings from a Large-Scale and Long-Term Study The GoDaddy Phishing Awareness Test The Chicago Tribune - How a Phishing Awareness Test Went Very Wrong University of California Santa Cruz - This uni thought it would be a good idea to do a phishing test with a fake Ebola scare Show Notes: https://securityweekly.com/esw-376

The Adamantium Podcast
E208 Barns Courtney #3

The Adamantium Podcast

Play Episode Listen Later Aug 27, 2024 48:14


Singer and songwriter Barns Courtney joins us for a third time on The Adamantium Podcast to discuss his latest album, Supernatural, the inspiration behind the post-apocalyptic cult leader concept, his look for the album, and why this album was the most Sisyphean project he's ever worked on. We also talk about touring with The Struts, on-stage chaos, video games, and our mutual admiration for the great Camden Town artists.

Vitamind 一起冥想
Vitamind Let's Chat Ep.8: How to Use Social Media in a Healthy Way

Vitamind 一起冥想

Play Episode Listen Later Aug 12, 2024 37:30


At the end of the episode, we put out a call for listeners who are clinical psychologists or counselling psychologists; friends interested in coming on the show are welcome to DM us or email tiffany@itsvitamind.com! ----- Recommendations mentioned: Book: Tiny Habits (設計你的小習慣), by BJ Fogg, developed at the Stanford Behavior Design Lab: the behavior designer behind the global Instagram craze teaches you the techniques for building habitual actions. App: one sec ----- Reference: Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372 Schemer, C., Masur, P. K., Geiß, S., Müller, P., & Schäfer, S. (2021). The impact of internet and social media use on well-being: A longitudinal analysis of adolescents across nine years. Journal of Computer-Mediated Communication, 26(1), 1–21. https://doi.org/10.1093/jcmc/zmaa014 ----

BFM :: The Breakfast Grille
Jeffrey Sachs On UN SDGs: Herculean, Not Sisyphean

BFM :: The Breakfast Grille

Play Episode Listen Later Jul 11, 2024 24:50


The United Nations Sustainable Development Goals (SDGs) were conceived in 2015 as a pathway to galvanise the whole international community towards the common aims of economic development, social inclusion, and environmental sustainability. With 6 years to go until the 2030 deadline, how far have we progressed in achieving the 17 distinct targets - and why has investment in education consistently been overlooked? We discuss these themes with Prof. Jeffrey Sachs, President of the UN Sustainable Development Solutions Network.

The Remnant with Jonah Goldberg

Jonah abdicates his duties and conscripts Dispatch national correspondent Kevin D. Williamson to pick up the slack. Kevin is joined by Kent Lassman, the president and CEO of the Competitive Enterprise Institute, to discuss Adam Smith, free trade, and the recent SCOTUS decision in Moore v. United States. Kevin and Lassman shoulder the Sisyphean burden of decoding tax law, discuss the miracle of American innovation, and debate the hot question of punditlandia: Who should make law? Show Notes: -Follow Kevin's work at The Dispatch -Kent's CEI Page The Remnant is a production of The Dispatch, a digital media company covering politics, policy, and culture from a non-partisan, conservative perspective. To access all of The Dispatch's offerings—including Jonah's G-File newsletter, weekly livestreams, and other members-only content—click here. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Rouge White & Blue CFL podcast
RWB CFL podcast #257: The season predictions bonanza!

The Rouge White & Blue CFL podcast

Play Episode Listen Later Jun 6, 2024 71:16


Happy Opening Week to Canadian Football League enthusiasts! The Rouge, White & Blue gets our game on in this episode to (admittedly futilely) forecast the upcoming season. First off, RWB co-hosts Os Davis and Joe Pritchard go long(-term) in picking division standings as well as win-loss records for each of the teams. Unlike previous seasons, there is much dissension between the two, who can really only agree on a vision for two of the CFL's nine teams. Not only that, everyone's favourite Sisyphean task during the regular season, i.e. making weekly selections in the league's official “CFL Pick ‘Em” contest, has returned. Joe and Os get in their picks for week 1 and again mostly disagree on winners and losers. RWB's got a passing look at the point spreads for opening week, particularly that insulting touchdown-plus's worth of points the (*defending champion*) Montreal Alouettes are getting in Winnipeg, and the inexplicable positioning of the Saskatchewan Roughriders as an underdog at Edmonton. CFL kickoff: It's the most wonderful time of the year… The Rouge, White and Blue is now part of the Shotgun Sports Network! Watch this episode -- and those of other Shotgun shows on YouTube -- or listen from wherever you get your podcasts!

Your Planet, Your Health
The Lawn Con: Manufactured Conformity

Your Planet, Your Health

Play Episode Listen Later Jun 3, 2024 77:19


In this episode, Ralph and Luc unpack how Americans got so obsessed with maintaining square green carpets on their front lawns. We dive into the history to trace back the origins and dissemination of this artificial aesthetic. We also look into solutions, ranging from bans on leaf blowers to cash schemes to encourage people to quit their lawn. We read a poem about the lunacy of leaf blowers, and highlight ways in which manicured suburban imported lawn grass is a synecdoche for colonialism. You can also watch this episode on YouTube at: https://www.youtube.com/watch?v=t-l1JO3FbzE
Chapters:
00:00 Introduction: Local bans on gas-powered lawn equipment
01:48 Poem about leaf blowers by Touch Moonflower
03:59 Commenting on the poem
06:51 How did lawns become so common in the USA?
07:56 Versailles' green carpet and Italian Renaissance landscapes inspired the British lawn
18:59 How 18th Century aristocratic English turf grass took root on the new continent
21:53 Thorstein Veblen on why American elites found lawns so respectable
24:10 Founding fathers disseminate the pastoral ideal
27:05 Planning communities of continuous lawn: Andrew Downing and Frederick Law Olmsted
32:03 Frank J. Scott tells suburbanites that homogenous manicured grass is neighbourly
34:48 How the lawn got cemented into the American imaginary in the aftermath of World War II
37:16 Post WWII suburban developments empowered Home Owners Associations (HOAs)
41:01 Quantifying the environmental impacts of modern US lawns
45:47 Why imported turf grass is a synecdoche for colonialism
50:40 Carpets of grass are fuel that spreads wildfires
51:38 Gas powered leaf blowers are huge polluters
55:00 How loud are leaf blowers?
55:51 Lawn care is a Sisyphean task of sterilisation
57:53 Norms around lawns are socially enforced
59:59 What solutions have helped people quit their lawn?
1:09:50 Conclusion and wrap up: the zeitgeist is shifting!
1:11:50 Luc's cover of "Big Yellow Taxi" by Joni Mitchell
Sources:
• Ann Leighton, American Gardens in the Eighteenth Century, 1986.
• Michael Pollan, “Why Mow? The Case Against Lawns”, The New York Times Magazine, May 1989.
• Georges Teyssot, The American Lawn: Surface of Everyday Life, 1999.
• Monique Mosser, The saga of grass: From the heavenly carpet to fallow fields, 1999.
• Cristina Milesi, “More Lawns than Irrigated Corn”, NASA Earth Observatory, November 2005.
• Paul Robbins, Lawn People: How Grasses, Weeds, and Chemicals Make Us Who We Are, 2007.
• Ted Steinberg, American Green: The Obsessive Quest for the Perfect Lawn, 2007.
• Elizabeth Kolbert, “Turf War”, The New Yorker, July 2008.
• Joseph Manca, "British landscape gardening and Italian renaissance painting", Artibus et Historiae (297-322), 2015.
• Jamie Banks and Robert McConnell, National Emissions from Lawn and Garden Equipment, Environmental Protection Agency, April 2015.
• Christopher Ingraham, “Lawns are a soul-crushing timesuck and most of us would be better off without them”, The Washington Post, August 2015.

Brave New Work
11. The Ones Who Care The Most Will Leave You First

Brave New Work

Play Episode Listen Later May 27, 2024 47:31


In the nearly five years since launching this podcast, our inbox has received one type of question more than any other: “If I'm trying to change a system that just doesn't want to change, how do I keep going? When should I admit defeat and leave?” As people who function as “professional resistance” in organizations all over the world, this question always hits us hard—because change itself is hard and often can lead to burnout. So we're finally having this conversation out in the open to tackle why the people who care the most are the ones who leave. Rodney and Sam dig into why burnout is so common among change agents, how to identify signs of meaningful progress, and when individuals and leaders should see the writing on the wall and throw in the towel. Oh, and we're on Instagram now! Check us out there for fun behind the scenes stuff and extra things you won't find anywhere else. To see the video version of this episode, head on over to YouTube. Mentioned references: "orthogonal" "wasta" "emotional labor of change": AWWTR Ep. 6 "Sisyphean" "the maze and the mouse" "see through The Matrix" Mission-Based Team: FoHR Ep. 1 "the yips" Rick Rubin EMDR Therapy Basecamp scandal: BNW Ep. 71 Want future of work insights and experiments you can try? Sign up for our newsletter. We're on LinkedIn! Follow Rodney, Sam and The Ready for more org design nerdery and join the conversation around episodes after they air. We want to hear from you. Send your thoughts and feedback to podcast@theready.com. Read the book that started it all at bravenewwork.com.

Unsportsmanlike Conduct
Big Chiefs Schedule Overreaction - 10

Unsportsmanlike Conduct

Play Episode Listen Later May 16, 2024 7:59


We go over the very tough and Sisyphean schedule of YOUR Kansas City Chiefs.

Podcast - The Undebeatables
The Undebeatables - Episode 705: Sisyphean Task

Podcast - The Undebeatables

Play Episode Listen Later May 7, 2024 32:03


This show we cover the first game of the Eastern Conference Semifinals versus the Knicks. Go Pacers!
Links
Pacers at Knicks Game 1 Box Score
Patreon

My Spouse Died Too
Episode 94: Ride Or Die: Loving Through Tragedy, A Husband's Memoir (1 of 3)

My Spouse Died Too

Play Episode Listen Later Apr 3, 2024 63:31


If you marry, and mark your day with ceremony, you might include these wedding vows: To have and to hold from this day forward, for better, for worse, for richer, for poorer, in sickness and in health, to love, cherish, and to obey, till death do us part. Half-easy to recite, but fulfill—a Sisyphean effort. Widowed guest co-host and author Jarie Bolander joins us.  Jarie's book is titled: Ride Or Die: Loving Through Tragedy, A Husband's Memoir.   Jarie's memoir, a poignant tribute to his late spouse Jane, is a testament to the power of love and commitment those exact wedding vows embody.  Here's the set-up… Friday, the day after Christmas 2015. Married less than two years, Jarie and Jane are San Francisco's young attractive power couple. Jarie is 45, a Silicon Valley engineer, entrepreneur, seven-book author, podcaster, blogger, and working on another start-up. Jarie is a highly functional introvert. Jarie's spouse Jane, an outright extrovert, runs the public relations firm she founded. A quenchless zest for life fills Jane, a 35-year-old fireball. Jarie and Jane work on making a baby. But after two miscarriages…diagnostic blood tests become routine. Now, the day after Christmas—after spending a few hectically fun-filled days at Jane's parent's house, it's time to drive the thirty-five-plus minutes home to San Francisco. Jarie looks forward to getting home midday and relaxing a bit before their restaurant dinner date. But Jane insists on having her next routine blood draw today. Jarie protests why Jane can't wait until the next week because it's barely the day after Christmas AND it's a Friday. The walk-in-no-appointment-necessary laboratory is on the way home. It's quick. Blood drawn.   35 minutes later, Jane and Jarie arrive home, unpack, and put their luggage away. Jane's cell phone rings. An unknown caller. Jarie says ignore it. Jane answers because restaurants often call to confirm reservations. The restaurant is not the caller--the medical facility calls. 
Jane's blood test results signal concern. The caller wants Jane to test more NOW. Please come into the hospital via the Emergency Room entrance. Jarie and Jane enter the ER entrance. And straight away,  escorted into a curtained section. Not even 6 minutes pass, two doctors enter. After introductions, one doctor asks Jane do you know why you're here? Jane says, because I was told over the phone my blood test was abnormal. The doctor agrees.  The doctors also ask about the small patches of red dots on Jane's tummy. The red dots appeared after the last miscarriage—severe cramping often bursts tiny surface blood vessels. Jane asks why, what about the red dots—and the doctors say they need an opinion from the on-call oncologist. Oncologist? Why an oncologist? One doctor says, well, we're not exactly sure, but it looks like you might have…leukemia. Jarie's book is the first I've read written from a widowed Man's viewpoint. Jarie's memoir NAILS it. So much echoes my own once-upon-a-time story. Jarie hands you his heart, his fears, his perceived failings. Weaknesses. Strengths. Obsessions. Addictions. Things you only tell your therapist.  Jarie's experience might parallel yours. For example, as men, we were raised to be protectors, not caregivers. An old-fashioned male archetype? In our DNA? Jarie painstakingly details his caregiving odyssey.  Losing himself in Jane's sickness, he copes by numbing. Alcohol. Pot, Caffeine. His therapist doesn't know to what extent. Jane's health declines. Jarie can't protect Jane. His self-perceived failure persecutes him. And from diagnosis to death, not even 18 months pass. Kindly observe what happens after Jane's death. Because Jarie continues his lionhearted pilgrimage— through grief and anger— to find himself, and love again.  Link to Jarie's website JarieBolander.com where you can purchase his book and learn about everything Jarie. Thanks for listening. Join us for part 2 of 3. Yes, and... 
Because you shouldn't have to journey alone, join me in the My Spouse Died Too community email list for members-only benefits: Behind-the-scenes commentary gives you deeper insight--helps you heal. Episode alerts so you'll know when a new episode is ready. Updates on past podcast guests because their journeys continue too. Plus more thoughts, resources, and random widowed journey stuff I discover. And it's the best way to contact me. Because you shouldn't have to journey alone. Sign-up takes less than thirty-two seconds. Here's the link: https://www.myspousediedtoo.com. Hope. Heal. Find love again. Give Grief The Middle Finger. ~ Emeric My Spouse Died Too podcast, images, logos, artwork copyright © 2019-2024 by Emeric McCleary. Music and lyrics © 2019-2024 by Emeric McCleary and Elena McCleary.

Time Pop
S5 Ep6: Happy Death Day 2U (2019)

Time Pop

Play Episode Listen Later Mar 28, 2024 50:30


Prepare to loop back into the time-twisting world of collegiate chaos with Time Pop's latest episode, where we dive headfirst into the mind-bending sequel "Happy Death Day 2U"!

Sorry, Honey, I Have to Take This
Operation SISYPHEAN TITILLATION Part 3

Sorry, Honey, I Have to Take This

Play Episode Listen Later Mar 13, 2024 75:53


The second of two bonus episodes we are putting together as submitted and voted on by unruly Discordites.The Agents find themselves low on resources and friends, and turn to The Program for help.Support The Work at: https://ko-fi.com/sorryhoneyVisit Us At: https://sorryhoney.captivate.fm/Join our Discord to tell us all the things we did wrong: https://discord.gg/XpUbfhCXVVFollow us on Twitter for additional content: https://twitter.com/SorryHoneyCastLikewise, Instagram: https://www.instagram.com/sorryhoneypodcast/Published by arrangement with the Delta Green Partnership. The intellectual property known as Delta Green is a trademark and copyright owned by the Delta Green Partnership, who has licensed its use here. Illustrations by Dennis Detwiller are reproduced by permission. The contents of this podcast are © GiggleDome Productions, LLC, excepting those elements that are components of Delta Green intellectual property.

Sorry, Honey, I Have to Take This
Operation SISYPHEAN TITILLATION Part 4

Sorry, Honey, I Have to Take This

Play Episode Listen Later Mar 13, 2024 75:31


The second of two bonus episodes we are putting together as submitted and voted on by unruly Discordites.Armed with a formidable clue, the Agents close in on an abandoned site in the Colorado wilderness to put a stop to a serial murderer.Selected Club Lemuria background jams by Boreal Us.Support The Work at: https://ko-fi.com/sorryhoneyVisit Us At: https://sorryhoney.captivate.fm/Join our Discord to tell us all the things we did wrong: https://discord.gg/XpUbfhCXVVFollow us on Twitter for additional content: https://twitter.com/SorryHoneyCastLikewise, Instagram: https://www.instagram.com/sorryhoneypodcast/Published by arrangement with the Delta Green Partnership. The intellectual property known as Delta Green is a trademark and copyright owned by the Delta Green Partnership, who has licensed its use here. Illustrations by Dennis Detwiller are reproduced by permission. The contents of this podcast are © GiggleDome Productions, LLC, excepting those elements that are components of Delta Green intellectual property.

Sorry, Honey, I Have to Take This
Operation SISYPHEAN TITILLATION Part 1

Sorry, Honey, I Have to Take This

Play Episode Listen Later Mar 6, 2024 71:42


The second of two bonus episodes we are putting together as submitted and voted on by unruly Discordites.Operation SISYPHEAN TITILLATION: In which an eclectic group of Agents is assembled to investigate the disturbing nature of a gruesome artistic endeavor.Support The Work at: https://ko-fi.com/sorryhoneyVisit Us At: https://sorryhoney.captivate.fm/Join our Discord to tell us all the things we did wrong: https://discord.gg/XpUbfhCXVVFollow us on Twitter for additional content: https://twitter.com/SorryHoneyCastLikewise, Instagram: https://www.instagram.com/sorryhoneypodcast/Published by arrangement with the Delta Green Partnership. The intellectual property known as Delta Green is a trademark and copyright owned by the Delta Green Partnership, who has licensed its use here. Illustrations by Dennis Detwiller are reproduced by permission. The contents of this podcast are © GiggleDome Productions, LLC, excepting those elements that are components of Delta Green intellectual property.

Sorry, Honey, I Have to Take This
Operation SISYPHEAN TITILLATION Part 2

Sorry, Honey, I Have to Take This

Play Episode Listen Later Mar 6, 2024 76:17


The second of two bonus episodes we are putting together as submitted and voted on by unruly Discordites.The Agents have a strong lead on the mysterious artist and their larger-than-life benefactor.Selected Club Lemuria background jams by Boreal Us.Support The Work at: https://ko-fi.com/sorryhoneyVisit Us At: https://sorryhoney.captivate.fm/Join our Discord to tell us all the things we did wrong: https://discord.gg/XpUbfhCXVVFollow us on Twitter for additional content: https://twitter.com/SorryHoneyCastLikewise, Instagram: https://www.instagram.com/sorryhoneypodcast/Published by arrangement with the Delta Green Partnership. The intellectual property known as Delta Green is a trademark and copyright owned by the Delta Green Partnership, who has licensed its use here. Illustrations by Dennis Detwiller are reproduced by permission. The contents of this podcast are © GiggleDome Productions, LLC, excepting those elements that are components of Delta Green intellectual property.

Academic Medicine Podcast
A Familiar Question

Academic Medicine Podcast

Play Episode Listen Later Feb 5, 2024 4:41


I started this letter with a question, but I pray not for an answer. I cannot accept one. Instead, please give me the strength to replace the wet mask soaked in my tears. Give me the power to continue the Sisyphean task of treating your ill and moving on to the next patient, especially on days like today. Norman R. Greenberg writes a letter to God asking why patients must suffer and how those who treat them can continue on amidst their grief. The essay read in this episode was published in the Teaching and Learning Moments column in the February 2024 issue of Academic Medicine. Read the essay at academicmedicine.org.

Web3 CMO Stories
"Warren Buffett in a Web3 World", with Matthew Snider | S3 E38

Web3 CMO Stories

Play Episode Listen Later Jan 30, 2024 30:05 Transcription Available


Unlock the synergy between Warren Buffett's investment sagacity and the dynamic world of Web3 with Matthew Snider of Block3 Strategy Group. In our enlightening chat, we peel back the layers of traditional investment philosophy to reveal how it weaves seamlessly with the fabric of blockchain and cryptocurrency. Matthew's tale, spanning from management consulting to being at the forefront of blockchain innovation, coupled with my own investment club roots, paves the way for a rich discourse on the promise of Web3 for business growth and the seismic shifts introduced by NFTs in the market.How can AI and blockchain serve as the compass and map in the terra incognita of Web3 decisions? We pore over the role of data as the lifeblood of strategy, from fueling software innovation to unveiling new dimensions of transparency and security. The volatility of the cryptocurrency landscape poses a Sisyphean challenge to traditional market analysis, yet through our exchange, we share the beacon of knowing one's circle of competence, guiding listeners to chart a prudent course through the tumultuous waters of investment choices.This episode is not just a conversation; it's a masterclass for any intrepid investor aiming to demystify the enigmatic world of Web3. I impart timeless truths from the investment legends, tailored for the digital frontier, as we navigate through the intricacies of blockchain investments. With Matthew Snider's expertise, we bridge the chasm between established economic precepts and the burgeoning digital economy, equipping you with the acumen needed for a foray into Web3's uncharted territory.This episode was recorded through a Podcastle call on November 29, 2023. Read the blog article and show notes here: https://webdrie.net/warren-buffett-in-a-web3-world-with-matthew-sniderReady to upgrade your Web3 marketing strategy? Don't miss Consensus 2024 on May 29-31 in Austin, Texas. It is the largest and longest-running event on crypto, blockchain and Web3. 
Use code CMOSTORIES to get 15% off your pass at www.consensus2024.coindesk.com

Philosophy? WTF??
Episode 206: Existentialism Part 7

Philosophy? WTF??

Play Episode Listen Later Jan 17, 2024 18:28


It's a new year and everyone is feeling optimistic about the year ahead! Everyone except for Dr. Mike and Danny, who return with more of their chat on the dark side of existentialism. Come join them as they talk Sisyphean ordeals, Star Trek and being 'Beach Ready'.

The Tampa Morgue
The Tampa Morgue- Episode #27-Guitarist/vocalist Dovydas Auglys (Crypts Of Despair, Luctus) visits the Tampa Morgue. (Interview)

The Tampa Morgue

Play Episode Listen Later Jan 12, 2024 167:47


   Guitarist and vocalist Dovydas Auglys (Crypts Of Despair, Luctus, Cold Embrace, Nahash, x-Sisyphean) stops by the Tampa Morgue. He takes us inside his musical journeys and also sheds some light on the Lithuanian underground Metal scene past and present.
Songs:
Crypts of Despair: Anguished Exhale
Crypts of Despair: Path to Vengeance
Sisyphean: Shattered Glass
Luctus: Kas Tu Esi
Crypts of Despair: Choked By The Void
Original air date: 1/12/2024
Contact: TheTampaMorgue@gmail.com
The Tampa Morgue Podcast can be found on Spotify, Amazon Music, Apple Music, Apple Podcasts, YouTube and most places you listen to your podcasts.

Press B To Cancel
Press B 189: Sisyphean Games 2024

Press B To Cancel

Play Episode Listen Later Jan 8, 2024 79:04


Sisyphean - adjective "Denoting or relating to a task that can never be completed. See: Battletoads" Brand new year means brand new gaming goals here at Press B. This week we wrap up last year's game selections, then we pick new goals for 2024. Will we go for challenges? Or just nostalgia? Everyone has games that they should have beaten in their backlog; tune in as we discuss ours. Press B To Cancel now on YouTube! For updates and more episodes please visit our website www.pressbtocancel.com, or find us on Twitter @pressbtocancel. Special thanks to The Last Ancient on SoundCloud for our podcast theme. Find out more at http://pressbtocancel.com

TechCrunch Startups – Spoken Edition
AI-powered search engine Perplexity AI, now valued at $520M, raises $70M

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Jan 5, 2024 8:19


As search engine incumbents — namely Google — amp up their platforms with gen AI tech, startups are looking to reinvent AI-powered search from the ground up. It might seem like a Sisyphean task, going up against competitors with billions upon billions of users. Learn more about your ad choices. Visit megaphone.fm/adchoices

Hanks Bank
Polar Express (Director's Commentary) - Christmas Special

Hanks Bank

Play Episode Listen Later Dec 22, 2023 102:47


In an episode recorded live in person at 10.50pm in Toronto on October 28th, we perhaps make our worst episode yet. Listen at your own discretion. While continuing their Sisyphean task of watching the Polar Express once again, the boys discuss what bad decisions they made to reach this moment, whether Jamie should be a godfather, and whether the simple idea is funny enough to outweigh the absolute hatred we feel for ourselves inside.

Six Miles To Supper
Update on My Hiatus, a Modified Media Fast, and a Bit About 2024

Six Miles To Supper

Play Episode Listen Later Dec 6, 2023 26:35


In today's episode I'm giving an update on my hiatus from this podcast and YouTube channel, my modified media fast, the projects I completed since we last spoke, and a look at what I'm thinking about for 2024. (:  Links to the various and sundry things mentioned in this episode: My Newsletter Julia Cameron's The Artist's Way on Amazon The Laid Back Guide to Weight Loss Maintenance Overcoming Weight Loss Obstacles The Swamps of Dorscha (Book II in the Forgotten Portal series) Intermittent Fasting Workbook Slow and Steady Success Academy Become an Insider on YouTube Discord Server Subscription Video: Intermittent Fasting During the Holidays Interview on Justin Dorff's Channel   AI Generated Transcript of this Podcast Episode Welcome to the Six Miles to Supper podcast. I'm your host, Kayla Cox, and I've lost over 80 lbs with intermittent fasting six days a week, eating whatever I wanted at my meals, taking a cheat day every Sunday and walking six miles a day. And I'm here to help you on your weight loss journey. On this episode, we're going to talk about all the things I've been up to since I've been on hiatus from this podcast. First of all, I would like to say I'm sorry: I forgot to update you and let you know that I was on hiatus from this podcast. I realized today as I was going through the last podcast that I recorded, I really listened to it because I thought, surely I said, you know, I'm going on hiatus, but I didn't do that. So if you are not subscribed to my newsletter and you don't watch my YouTube, you might have been wondering where I was. And by the way, if you'd like to subscribe to my newsletter or if you're interested in any of the things that I'm going to be talking about, you can find the links in the show notes for this episode. The newsletter is really the best way to keep up with what I'm doing, because there are so many different places where I'm active that it's easy for me to forget to update one.
So in mid-September, I started looking around at my life and looking at, you know, how things went this past year and kind of thinking about what kind of direction I wanted to head in 2024. And I realized I had all these projects that were sitting there, unfinished things I had been working on but had never really, you know, got to the finish line. And so I thought, you know, I really want to finish these things by the end of the year. I knew that I couldn't continue doing the same schedule I had been doing and still try to finish these things. I had already tried that and it had not worked. And that's the definition of insanity: doing the same thing over and over again and expecting a different result. So I decided to go on a modified media fast. If you've never read the excellent book by Julia Cameron called The Artist's Way, you may not know what a media fast is, but a media fast is basically taking a break from consuming other people's creativity. Now she recommends a seven-day strict media fast, meaning for seven days straight, you don't read anything written by anyone else. You don't listen to music. You don't consume anybody else's creativity. And it's a great exercise. I highly recommend it to anybody. But I knew that that kind of strict fasting wasn't going to be sustainable for the amount of time that I guessed this was going to take. Now, I was a little bit wrong in how long I thought this media fast would go on. I thought really I would be done maybe, you know, after a month, and then after I kind of got into it, I thought, well, maybe it'll be more like Halloween. As it turned out, it was mid-November before I was finished. And quite frankly, I'm still trying to get caught up and really restarted on those things that I wanted to do. For example, this is the first podcast episode I've created since the media fast was over. So what was I doing on the media fast? How did I do it and why did I do it?
So the way I decided to modify the fast is very much like how I modify regular fasting for weight loss purposes. For me, I was thinking long term: okay, I think this is going to happen for at least a month, maybe longer. So what can I really stick with? Because sticking with it was a lot more important to me than, you know, having super strict rules and trying to be a perfectionist. So I decided for myself that Monday through Saturday, I was not going to consume other people's creativity in general. That meant no reading except for the Bible and the Phillip Clear. I didn't allow myself to listen to any podcast or watch any YouTube videos or watch any kind of TV or anything like that. The only exceptions to that were on date night and on Sleep on the Couch Night, which is the family movie night we have on Saturday nights. At that point, I would, you know, watch a movie with them. And then on Sunday, I wouldn't seek out creativity from other people, but if it happened to be on, I would watch it. So, for example, if my husband flipped on the TV on Sunday, I would sit there and watch it with him, because it was important to me to spend time with him and not to just leave the room because he wanted to watch TV. And I also went on hiatus from creating new topical videos for YouTube and from creating episodes for this podcast. I did continue to do my weekly lives for my YouTube members on Wednesdays at noon, and I do one also just for the general YouTube public on Fridays at noon. And so I did those. I continued those and I continued to do the vlog for members. But I didn't create other types of topical videos because, you know, the vlogs are unedited and so are the lives. With a live, I just sit down, do the thing and then I'm done. Same with the vlog: just record it, don't edit, and then, you know, pop it on YouTube.
But when I do a topical video, that's a lot more work: you know, I have to rehearse it, I have to outline it, I have to go through it a lot, and then I record it and then I edit it, and then, you know, it's just this whole big long process. So I knew that I could continue to do those things, but that if I tried to do the topical videos and things like that, I would just not get done with these other projects. So those were my rules. I did this because I knew that if I did not create this kind of set of rules for myself, I would just never finish these projects. And these projects were important to me for various reasons. So let's talk about the projects I did and what that looked like. The first project was finishing The Laid Back Guide to Weight Loss Maintenance. This is a book that I actually started in May of 2019, so four and a half years ago. I had just written The Laid Back Guide to Intermittent Fasting and a very kind reader reached out; they had really enjoyed the book and they said, you know, the next book you should write is a book on maintenance, because nobody writes books about maintenance, and so I think that's what you should write next. And I thought, yeah, that's a really great idea. But I was totally intimidated by the idea of writing a book about maintenance. And also I felt like I needed more experience with maintenance. I felt very confident about weight loss itself with intermittent fasting, but I felt like, you know, I needed to experience more of the maintenance, because maintenance is where I had always failed. Now, at that point I had maintained my initial 65 lb loss. I had done that for a year before I lost more weight.
But then, you know, there was that year where I was losing more weight, and I kind of felt like that didn't quite count as maintenance, because when you're actively trying to lose weight, that's a different process than just trying to maintain your weight loss. So I thought, you know, I really want to maintain for a while longer before I really work on this book. So instead, I wrote another book called Overcoming Weight Loss Obstacles, and I thought, I'll just put the maintenance book on the back burner. And so then I went back to it. After a while, I finally thought, you know, I feel like I could start writing this book again. I just kept working on it and working on it, and then I got it earlier this year into kind of a draft format; you know, I felt like the format was basically where I wanted it to be. The way I write a book is basically: sometimes I outline, but then I kind of write the first draft, and then I'll let it sit for a while, and then I'll reread it, and then I'll do a second draft. And then more and more and more drafts until I finally get it to where I want it to be. At that point, I will read it out loud over and over and over again until I feel like it's exactly what I want to say, or at least it's said in the best way I know how to say it. Then I'll do all the other parts of publishing it, which for me, because it's self-published, means I then need to do the cover art and put the thing out on KDP, which is Amazon's publishing platform. It is currently available on Amazon, and later on I'm going to have it on other platforms as well. After I finished the maintenance book, I went on to my next project, which was to redo the Overcoming Weight Loss Obstacles book cover. When I first put it out, it was just a text-based cover and it was just what I could do at the time. You know, I really wanted to get the book out there.
And so I sent it out there with a text-based cover. But over time, I had this idea of something that I wanted to put on the cover. It was kind of intimidating, because I knew I wanted to draw it and I wanted it to relay the message. I had the idea that I would make it look like Sisyphus, you know, pushing the boulder up the hill. So I started sketching it out. I had been working on this in my head for a long time, but I never could quite make myself sit down and actually do the work. So that was the next task: I sat down and I worked on that, and I had no excuse, because it wasn't like, well, I need to make another video or I need to make another podcast. I just needed to focus on this. So I did, and I'm happy with the final result. I updated the cover, and the updated cover is now on Amazon, so if you're interested in looking at it, you can see it there. Sisyphus is a character in Greek mythology who had this punishment of pushing a boulder up a hill, and just when he would get it to the top of the hill, it would roll back down. This has come to describe basically any kind of task where it feels like you accomplish it and then it just kind of falls apart and you have to do it over again, which is what weight loss felt like to me until I found intermittent fasting. So I thought, oh, these obstacles sometimes feel like a Sisyphean task, and that's why I did the cover the way I did it. Except the weight loss journey, unlike Sisyphus's story, can have a happy ending: you can get that boulder to the top of the hill, and it doesn't have to roll back down. The next project was to finish book two in my Forgotten Portal young adult series. I had written a book called Escape from All Shakes Castle several years ago, and I actually started writing book two right away, even before book one was out.
But I just worked on it and worked on it, and then several things happened in my life that interrupted my progress on it. I finally got back to it, but again, it was one of those things that was just sitting there in a state where it needed more attention; I needed to put all of my attention on that one thing. So that's what I did: I worked on it and worked on it and worked on it. I used the same kind of process as I use for my nonfiction books, you know, reading it over and over and over again, and having other people read it and tell me what they think, especially my kids. Once I got the manuscript finished, then it was a process of doing the cover and putting it out there for the world. When I was finished with that, I went on to work on the intermittent fasting workbook. Now, this is one of those projects I thought, oh, this is going to take like a day. It taught me again that I'm really terrible at estimating the amount of work that needs to go into something. I had this workbook as a companion to the intermittent fasting for weight loss course that I have on Teachable. The workbook, as it was on the course, was basically just a compilation of all the worksheets that went along with the course. So, you know, you watch a video and then I would tell you, okay, now fill out this worksheet, and then you would fill out the worksheet. My thought was, well, that general process would be really helpful for people. Not everybody wants to buy the course; some people maybe just want the workbook. So my thought was, oh, I can just take that workbook, create a little bit of explanation, and then just put the workbook out there. But then as I got into the project, I realized, well, no, you really need to give context, and it needs to kind of stand on its own.
And so after a while of working on it, I realized this was just a really big project. The further I got into the project, the more I realized that it was a really good thing, because it helped me to see that I needed to tweak the course a little bit. Now, when I look at the course as it is right now, I really like it. I think it's helpful. I know that it's helpful because people have told me, you know, that they've gone through it and that it has helped them. But through the process of making this workbook and tweaking it, I broke down the process of weight loss even further, into five distinct phases. I did this because I wanted to base it on what I have learned in my own journey and also on what I see other people getting stuck on. Before, I had three phases: basically, the first phase was writing your plan and starting to test things out, phase two was going through the process of losing weight, and phase three was maintenance. I ended up breaking it down a little bit further. The first phase is preparation. It's about getting your mindset in the right place, doing a little bit of work on the front end, and getting your motivations in place, so that when you do struggle in the later parts of the journey, you won't quit. The second phase is about learning how to fast. The third phase is figuring out your plan, experimenting with things, and getting that plan in place. The fourth phase is the actual process of losing weight, and the fifth phase is maintenance. So the workbook takes you through that process, and if you'd like to buy the workbook, you can do so via the link in the show notes. But if you have ever purchased a course from me in the past, or you are a past coaching client, then you can get the workbook for free. The way to get it is to log into the course that you purchased. It doesn't matter which one; it can be any of the courses that I've made.
Just log into the course and you should be able to download the workbook from the Intermittent Fasting Workbook module. If you are a past coaching client, please just email me and I will email you the PDF. Now, there is a challenge that comes up with something like a workbook. For some people, what happens is, you know, you hear about a workbook and think, yes, that's going to be a great thing to do, and you buy the workbook, and maybe you even read the entire workbook, but then you never actually do the work in the workbook. That happens to so many of us. There are so many books that I've picked up where, in the past, I wouldn't do the exercises. You know, I'd read a self-help book and I'd read the questions, but I wouldn't actually journal them out like the book said to do. But what I learned on the weight loss journey was that if I actually did the exercises, that's when the change started to happen. It wasn't just from reading the book; it was from doing these journal exercises, writing things out, and really thinking these things through. So with that in mind, I'm going to try to solve that problem using the Discord server. This leads me to another announcement for all current, future, or past students and coaching clients: I'm giving you permanent access to the Discord server that I've set up. I originally set this up because the insiders on my YouTube channel, those are people who are paying a monthly subscription, were asking for a place to get together and hold each other accountable, and that's how the Discord server was born. As I've started to use the Discord server, I've really liked the different features it has and what I've seen in there. The Discord server is going to figure heavily into what I'm going to do in 2024. Getting access to the Discord server is easy.
All you need to do is log into the course, and then you'll see the instructions on how to get access. Just be aware that it is a private server, so I need to manually approve everything. Follow the instructions that I've given you in there, and then you should be able to get access. For those of you who don't know what Discord is, Discord is basically just a place where you can chat with other people. If you can use Facebook, then you can use Discord; it doesn't require a bunch of skills or anything. It's going to look a little different than Facebook, of course, but the basics are the same, so don't be intimidated by that. On the Discord server, right now we have an accountability channel. We also have a place for people to share what they're doing during the fasting window to keep themselves busy; it's called Filling the Void. The Discord server is also where I hold office hours now. So if you have questions on the weight loss journey about anything that you've dealt with in the course, or if you're tracking and would like a second opinion on what's going on, or if you're just having trouble, you can drop into the office hours that I hold and we can chat about it. You can participate just in the chat, so you don't have to be on camera or on a microphone; or you can share your screen with me, or have your microphone on, or have your camera on. It's really up to you, and I'll post my availability within the Discord server. Also on the Discord server, we are going to be doing a book club. The first book that we're doing is Stephen Covey's The Seven Habits of Highly Effective People, and we're starting that now here in December. So if you're interested in participating in that, go ahead and join the Discord server.
Now, like I mentioned, you can get access to the Discord server either by being an insider on YouTube, by being a student inside the Slow and Steady Success Academy, or by purchasing a server subscription directly on Discord. The Discord server is also a place where I'm going to be experimenting with some different types of offerings. For example, with this workbook in mind, I know one of the things that's really hard to do is to carve out time for the weight loss journey, meaning it's hard to get yourself to sit down and, you know, do a worksheet, but it's very helpful to do so. So I'm planning on doing what I call "work of weight loss" blocks. The idea is based off of a thing I've seen other people doing called sprints. People do this with writing sprints or reading sprints: they'll get together, and some YouTubers do this where they'll actually just sit there with their microphones off and write. So the idea is we start out with maybe the first 10 minutes or so where I would be hosting, and I would take questions, or maybe we would discuss a topic for about 10 minutes. Then for the next, say, 20 to 30 minutes, depending on what works best for the group, we would do a "sprint," quote unquote. But really all it would mean is doing the work of weight loss. For some people that might be doing a worksheet; for other people that might be sitting down with their weight tracking spreadsheet, looking at the trends, and updating it with notes and things like that; or it might be sitting down with your plan and looking at whether it's working and whether tweaks should be made; or it might be something else. But the idea is to carve out that time. So I'm thinking about offering these on a weekly basis. It's going to be experimental, because I don't really know yet.
It may not be something that people are interested in, or it may need to be tweaked, or something else might work better. But I have plenty of administrative tasks that I also need to make myself sit down and do, so my plan is to be doing that kind of work while other people are doing theirs. Then after the sprint is over, after that 30 minutes is up, we'll have about another 10 minutes to talk about anything that came up for us during the sprint, and then it's done. So that's my idea. Just be on the lookout for that as well. Once I finished the workbook, it was time for my last project, which was a painting for my daughter. Apparently I had promised my daughter a painting, and it's really important to me to keep promises. It was a big lesson I learned on the weight loss journey: it is important to keep promises to yourself, and it is important to keep promises to other people. On the weight loss journey, I realized, you know, I'm pretty good about keeping promises to other people, but I'm really not good at keeping promises to myself: promises that I'm going to lose weight, promises that I'm going to stick to a plan, that kind of thing. And I've really learned the importance of doing that, of following through when you say you're going to do something. And if you start to do that, you also start to become more careful about what you promise to do. So my daughter brought this up to me, that I had promised her a painting. One day she just kind of randomly came up to me and said, when are you going to do that painting that you promised? And I thought, what painting? I had completely forgotten that I had promised. So she showed me the picture of this church that I had promised to paint for her. And so I got myself to sit down and actually do the work and do the painting.
And it took a while to do this, and I mean it took many, many hours, because I was not familiar at all with the medium that I was using. I was using acrylic paint. I had done a couple of oil paintings before, just small little things, and this was a bigger canvas, and it was very intimidating. But I learned a lot, and I loved the process. I highly recommend doing art, you know, just creating things for fun, just to do it. It's very relaxing, and it's far superior to watching a movie or anything like that. So I finished the project, and with that project finished, I was done with my media fast, and then I tried to get back into all those things that I had gotten behind on. I've been slowly working my way back in. As you can see, today is December 6th, and I am finally putting out a podcast episode. So, doing a little quick math, that means it basically took me three weeks to get back into the groove of publishing things. I have been doing things: I put up a topical video on YouTube about intermittent fasting during the holidays, and I've been continuing to do the vlog for insiders. I've been on the Discord server, and I've also just been trying to get caught up on various things. I was interviewed by Justin Dorff for his YouTube channel. So I've been busy creating things; it's just that this podcast has been kind of the last thing for me to get back into. Going forward, my plan is to try to create consistently for the YouTube channel and for this podcast, but also, I really enjoy helping people on a more personal basis, you know, getting to know the people. On the Discord server, it's been really neat to be able to actually interact with people. It's been a difficult thing to do on YouTube itself, because of the public comment section; I just can't do it.
I can't go into it and keep my mental health in the right place. But this Discord server has been good so far, I think because it is private and it's not just open to anyone who wants to come in; it is just for people who really want to be there. So thank you guys for listening to this very long update. I hope you enjoyed it, and I will see you in the next one. Do you want to lose the weight without getting rid of the foods you love, the foods you know you'll go back to eating again anyway? My book, The Laid Back Guide to Intermittent Fasting, teaches you how to practice intermittent fasting so that you lose the weight sustainably and keep it off for good. You can get the audiobook, read by me, for free when you sign up for your 30-day trial of Audible; the link is in the show notes. And if you've gotten value from this podcast and you'd like to let other people know about it, it would be great if you could leave a review on iTunes or wherever you get your podcasts. Thanks.

Kat and Moose Podcast
Medicine Women and a Sisyphean Task

Kat and Moose Podcast

Play Episode Listen Later Nov 17, 2023 47:38 Transcription Available


Ever wondered what it feels like to be a Medicine Person, embodying the energy and power that comes with the role? Join us as we explore this fascinating concept after a personal challenge I presented to Kat on her birthday. We imagine what a Medicine Person party would look like and reflect on the significance of dressing up for occasions. Along this journey, we delve into the healing properties of ketamine, and the physical and mental transformations that come with advancing age.

Imagine your body as a stalwart protector, tirelessly working to shield you from trauma, and the magic that lies in learning to listen to its whispers. We share personal stories of panic attacks and terrifying encounters, and how our bodies react in these situations. What if our bodies had different Enneagram numbers? We pose this thought-provoking question and discuss what it could signify. A little levity is sprinkled in as we reminisce about a hilarious incident from my twenties involving a unique way of spelling my name and a hip thrust!

Reflecting on four years of consistently delivering the Kat and Moose podcast, we can't help but draw parallels with Sisyphus pushing his boulder. Just like him, we've faced our own challenges with unflinching determination. We express our gratitude towards our producer Sarah Wee and discuss how important it is to be aware of our Sisyphean ways. Also, we delve into our thoughts on body judgments and the importance of having open conversations about them. Pull up a chair and join us in this roller coaster chat full of intriguing topics.

Support the show

Visit us on the Interwebs! Follow us on Instagram and Facebook! Support the show!

simple prospering
EP 13: Building Block #2: Strengths and Needs

simple prospering

Play Episode Listen Later Nov 10, 2023 17:51


In building block #2 I'm talking about how to build a business around BOTH your strengths and your clients' true needs. Oftentimes things are lopsided in that equation. A symptom that things have gotten out of balance is that everything starts to feel Sisyphean: you keep rolling that boulder up the hill and it keeps rolling right back down. In micro businesses we wear all the hats! So we have a lot less running room for a business that isn't well structured. I troubleshoot how to spot an unbalanced business, and how to weave your strengths and your clients' needs into an offering that is satisfying and sustainable for both of you. This is the audiobook version of chapter 2 from my free ebook: You Need a Holistic Business: Learn the Six Essential Building Blocks. You can download that at simpleprospering.com/freetrainings.

That's Pretty Dark
Episode 60 | CTCD S1E5 — Night of the Weremole / Mother's Day

That's Pretty Dark

Play Episode Listen Later Sep 23, 2023 98:30


Ancient werewolf lore, Sisyphean games of Whack-A-Mole, and the unfortunate reality of generational curses… Oh lawd, it's about to get pretty dark! Continuing their Courage binge, these '90s kids learn the origin of the ol' "hair of the dog" cure, brush up on their understanding of lycanthropy, and tip-toe toward the emotional horrors that made Eustace one of the darkest villains of our childhood.

Have a frightfully nostalgic memory to share or a pretty dark topic you'd like us to cover? Email us at thatsprettydarkpodcast@gmail.com

Give to our Patreon for extra content: patreon.com/tpdpodcast

Follow us on Instagram and Facebook @thatsprettydarkpodcast

SheVentures
From K–12 to Trade School, Find Out How 529 Plans Can Cover a Range of Education Expenses!

SheVentures

Play Episode Listen Later Aug 15, 2023 53:04


The devil's in the details, and it's easy to get lost in them where money is concerned. Post–high school education is no exception, and it's rarely made easy. From saving to taking out loans — and paying off said loans — the cycle seems downright Sisyphean.

Patricia Roberts, founder and COO of Gift of College and author of Route 529: A Parent's Guide for Saving for College, gets it. And, with her expertise in education savings plans, she's here to help you make the details and the devil within work for you. Her company makes it easy for anyone to open an education savings plan online in a matter of minutes. Plus, friends and family can contribute $25 to $200, and the funds don't expire!

As a first-generation college student herself, Roberts knows the struggles many face when thinking about post–high school education. For Roberts (who's also an attorney), specializing in the intricacies of 529 education savings plans was personal — she and her husband wanted to provide their son with choices not readily available to them.

Some myth-busting about 529 education savings plans:
Did you know they pay for a myriad of education-related expenses, such as K–12, trade schools, and college?
Many 529 plans have tax benefits — check your state.
The owner of the 529 plan can change the beneficiary to another family member — or use it for themselves.

Roberts also discusses diversity in the workplace and how far it's come along since the 90s — but acknowledges it still has a ways to go, especially in traditionally male-dominated industries like financial services.

Save time sifting through different 529 resources and find the information you need in one fell swoop with Roberts' insight. Learn about Roberts' recent pursuits on LinkedIn and Instagram, and check out the hashtag #radicalgenerosity on Twitter for inspiration! Find out who benefits from a 529, what can be done with the money, and more on this episode of SheVentures!
Roberts discusses the lessons learned from her late mother and how crucial self-care is to long-term health.
As a first-generation college student herself, Roberts reflects on the social and financial roadblocks she faced — and how she's striving to ensure her son avoids the same pitfalls.
From working in financial services to attending law school at night, Roberts' early career and personal motivations led her to working with 529 savings plans (tax-advantaged education savings plans).
How diversity in the workplace has evolved since the 90s, according to Roberts
Corporate ladder climbing tips: Understand your motivation and your gifts, cultivate a community that supports you, and highlight your accomplishments when appropriate.
How 529 plans are underutilized — plus Roberts' tips for ensuring you reap the most benefit
Through Gift of College (a college-savings platform started by Roberts), setting up a 529 plan can be as easy as buying a gift card.
The differences and similarities between the 529 and the ABLE plan — and how each can best serve you
Roberts debunks the biggest misconceptions surrounding 529 plans.
Three skills that eased Roberts' transition from the corporate world to entrepreneurship
Roberts is active on LinkedIn and can be found on Instagram and Facebook at @route529mom. More information about Gift of College can be found at giftofcollege.com.

Pride Fitness And Movement
58: The Truth About Gymcels

Pride Fitness And Movement

Play Episode Listen Later Aug 10, 2023 23:52


In the dynamic landscape of online subcultures, the emergence of the term "gymcel" has ignited debates and introspection about self-improvement, male identity, and personal challenges. The gymcel, a fusion of "gym" and "incel" (involuntary celibate), encompasses individuals deeply invested in their fitness journeys. They appear as either individuals trying to compensate for perceived personality deficits or as men navigating a world where the gym offers reliable solace.

The gymcel path embodies the Sisyphean struggle, relentlessly pursuing a perfection that mirrors Sisyphus rolling a boulder uphill. This unattainable pursuit revolves around reshaping the body, driven by an unrelenting urge to obliterate insecurities. However, the irony lies in focusing on the body while potentially ignoring the internal transformation that could provide true redemption.

The term "gymcel" gained traction in the mid-2010s, reaching its zenith around 2018, sparking inquiries about its meaning and the prevalent hostility directed towards it. Beneath the surface, a deeper narrative unravels – a quest for identity in a society redefining norms. Figures like Jordan Peterson, David Goggins, and Joe Rogan have unintentionally become role models for gymcels, offering alternative paths to personal growth and challenging societal norms.

Peterson, with his emphasis on responsibility, Goggins' relentless mindset, and Rogan's exploration of self, resonate with gymcels, who often lack culturally approved role models for masculinity. Aziz Sergeyevich Shavershian, known as Zyzz, also played a pivotal role in sparking the aesthetics renaissance by transforming himself into an admired aesthetic icon, appealing to the universal desire for self-improvement.

The gymcel identity is not black and white; it comes with advantages and drawbacks. While dedication to the gym fosters discipline and growth, an excessive focus can lead to imbalanced lives.
Surprisingly, beneath the surface, gymcels may seek genuine male companionship, challenging norms and expanding perspectives.

Escaping the gymcel narrative requires introspection and diversification of pursuits. Transitioning from a sole focus on the gym involves small, meaningful steps towards connecting with old friends, engaging in new hobbies, and exploring romantic avenues.

In conclusion, the gymcel phenomenon is a multifaceted exploration of identity, connection, and self-betterment. Gymcels navigate a society with limited paths to success, carving unique journeys that prompt reflection on aspirations, connections, and personal metamorphosis. Fulfillment, as witnessed through the gymcel's ascent, is a mosaic that rarely adheres to a linear trajectory.

Support the show

@andrewPFM @PrideFitnessandMovement

Press B To Cancel
Press B 167: Sisyphean Challenge Check-in

Press B To Cancel

Play Episode Listen Later Jul 31, 2023 92:21


At the start of the year, we each selected a game we knew deep down we just had to beat: Doom 3, Battletoads, Final Fantasy 6, Chrono Trigger. Only 6 months remain; where are we at? Pushing that retro boulder like Sisyphus, this week on Press B To Cancel. Press B To Cancel is now on YouTube! For updates and more episodes please visit our website www.pressbtocancel.com, or find us on Twitter @pressbtocancel. Special thanks to The Last Ancient on SoundCloud for our podcast theme.

It's a Beautiful Day In The Gulch
Ep. 137 - The Two Most Regular Guys on Earth

It's a Beautiful Day In The Gulch

Play Episode Listen Later Jun 30, 2023 34:25


Having a normal conversation about Sisyphean tasks

Mogul Motivation
The Sisyphean Feeling

Mogul Motivation

Play Episode Listen Later May 31, 2023 12:14


Progress is real and inevitable when you're consistent with your goals. But fear, negativity, and perfection will manipulate you into thinking that it can never be done. #motivation #entrepreneurship #consistency #perseverance #dreamchasing

OffScrip with Matthew Zachary
Paul Simms Dares Pharma to Be Daring

OffScrip with Matthew Zachary

Play Episode Listen Later Mar 7, 2023 39:21


Self-proclaimed "noisy introvert" Paul Simms is the Chief Executive at Impatient Health, a platform on a Sisyphean mission to help the pharmaceutical industry be more ambitious and creative. Not only does he have one of the most outstanding British accents ever to grace this podcast; Paul is also a true provocateur and rabble-rouser. We'd like to think that everyone would want the habitually conservative healthcare sector to be less risk-averse and recalcitrant, and to consider the possibility of making daring, experimental, creative, and ambitious changes. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Inside the Hive with Nick Bilton
Kevin McCarthy and the GOP's Cult of No Personality

Inside the Hive with Nick Bilton

Play Episode Listen Later Jan 4, 2023 41:22


Joe Hagan and Molly Jong-Fast analyze the war inside the Republican Party as California congressman Kevin McCarthy is thrice denied his bid for Speaker of the House. In disarray and denial, the GOP appears intent on kicking off the New Year with the same old extremist politics that have depressed its fortunes through the past three election cycles.    Then, Jong-Fast discusses her exclusive interview with Vice President Kamala Harris, who has the Sisyphean task of tackling immigration—an issue the GOP will surely exploit between now and 2024. Learn more about your ad choices. Visit podcastchoices.com/adchoices