Podcasts about pytest

  • 19 PODCASTS
  • 92 EPISODES
  • 36m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 2, 2025 LATEST

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about pytest

Latest podcast episodes about pytest

Test & Code - Python Testing & Development
pytest-check - allow multiple failures per test

May 2, 2025 · 9:56


pytest-check is a pytest plugin that allows multiple failures per test.

Normally, a test function will fail and stop running with the first failed assert. That's totally fine for tons of kinds of software tests. However, there are times where you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails. pytest-check allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.

Links:
  • pytest-check
  • Top pytest plugins

Sponsored by: Porkbun -- named the #1 domain registrar by USA Today from 2023 to 2025! Get a .app or .dev domain name for only $5.99 first year.

Learn pytest: The Complete pytest course is now a bundle, with each part available separately.
  • pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
  • Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI.
  • Then pytest Booster Rockets can help with advanced parametrization and building plugins.
Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.

★ Support this podcast on Patreon ★
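To make the episode's topic concrete, here is a minimal sketch of pytest-check usage, based on the plugin's documented `check` helper (the test contents are invented for illustration):

```python
# A minimal sketch of pytest-check; assumes the plugin is installed
# (pip install pytest-check) and exposes the documented `check` helper.
from pytest_check import check


def test_user_record():
    user = {"name": "Brian", "role": "admin", "active": False}

    # Each `with check:` block records a failure but lets the test continue,
    # so the report shows every failed check, not just the first one.
    with check:
        assert user["name"] == "Brian"
    with check:
        assert user["role"] == "user"      # fails, but the test keeps going
    with check:
        assert user["active"] is True      # this failure is also reported
```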

Python Podcast
Live from DjangoCon Europe 2025 in Dublin - Day 1

Apr 23, 2025 · 36:14 · Transcription Available


Live from DjangoCon Europe 2025 in Dublin - Day 1. April 23, 2025, Jochen. In this special episode, we report live from DjangoCon Europe in Dublin!

Test & Code - Python Testing & Development
pytest-repeat - works fine on Python 3.14

Apr 10, 2025 · 8:04


pytest-repeat is a pytest plugin that makes it easy to repeat a single test, or multiple tests, a specific number of times:

  • works fine on Python 3.14
  • is tested on Python 3.9-3.14
  • probably works fine still on 3.7 & 3.8

This episode also discusses the attempted April Fools episode.

Links:
  • pytest-repeat
  • The April Fools episode: Python 3.14 won't repeat with pytest-repeat

Sponsored by: The Complete pytest course is now a bundle, with each part available separately.
  • pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
  • Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI.
  • Then pytest Booster Rockets can help with advanced parametrization and building plugins.
Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.

★ Support this podcast on Patreon ★
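A minimal sketch of the two documented ways to drive pytest-repeat, the `--count` option and the `repeat` marker (the test itself is invented for illustration):

```python
# A minimal sketch of pytest-repeat; assumes the plugin is installed
# (pip install pytest-repeat). Repeat every collected test with:
#   pytest --count=10
import pytest


@pytest.mark.repeat(3)  # repeat just this one test 3 times
def test_repeat_candidate():
    assert sum([1, 2, 3]) == 6
```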

Test & Code - Python Testing & Development
Python 3.14 won't repeat with pytest-repeat

Apr 1, 2025 · 4:38


pytest-repeat is a pytest plugin that makes it easy to repeat a single test, or multiple tests, a specific number of times. Unfortunately, it doesn't seem to work with Python 3.14, even though there is no rational reason why it shouldn't work.

Links:
  • pytest-repeat
  • Guido van Rossum returns as Python's BDFL

Sponsored by: The Complete pytest course is now a bundle, with each part available separately.
  • pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
  • Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI.
  • Then pytest Booster Rockets can help with advanced parametrization and building plugins.
Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.

★ Support this podcast on Patreon ★

Test & Code - Python Testing & Development
pytest-html - a plugin that generates HTML reports for test results

Mar 27, 2025 · 6:35


pytest-html has got to be one of my all-time favorite plugins. pytest-html is a plugin for pytest that generates an HTML report for test results. This episode digs into some of the super coolness of pytest-html.

Links:
  • pytest-html
  • repo readme with screenshot
  • enhancing reports
  • pytest-metadata

Sponsored by: The Complete pytest course is now a bundle, with each part available separately.
  • pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
  • Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI.
  • Then pytest Booster Rockets can help with advanced parametrization and building plugins.
Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.

★ Support this podcast on Patreon ★
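A minimal sketch of generating an HTML report with pytest-html, using the plugin's documented flags (the tests are invented for illustration):

```python
# A minimal sketch of pytest-html; assumes the plugin is installed
# (pip install pytest-html). Invocation:
#   pytest --html=report.html --self-contained-html
def test_addition():
    assert 1 + 1 == 2


def test_subtraction():
    assert 5 - 3 == 2

# After the run, report.html contains a sortable table of results,
# environment metadata (via pytest-metadata), and any captured output.
```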

Test & Code - Python Testing & Development
pytest-md and pytest-md-report: Markdown reports for pytest

Mar 1, 2025 · 9:43


Markdown reports as either text or markdown tables. Two fun plugins discussed.

Links:
  • pytest-md-report
  • pytest-md
  • Top pytest Plugins

Learn pytest: pytest is the number one test framework for Python.
  • Learn the basics super fast with Hello, pytest!
  • Then later you can become a pytest expert with The Complete pytest Course
Both courses are at courses.pythontest.com

★ Support this podcast on Patreon ★
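A sketch of driving the two plugins discussed; the flag names below are recalled from each plugin's README, not confirmed by the episode, so verify them before relying on them:

```python
# Sketch only; both plugins installed via pip (pytest-md, pytest-md-report).
# Assumed invocations, from each README as I recall them:
#   pytest --md report.md                            # pytest-md: markdown summary
#   pytest --md-report --md-report-output report.md  # pytest-md-report: markdown table
def test_example():
    assert "pytest".upper() == "PYTEST"
```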

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: "Pydantic is all you need" and "Pydantic is STILL all you need".

Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps.

If you're interested in the technical details, Logfire migrated away from ClickHouse to DataFusion for their backend. We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in at ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.

They define an "Agent" as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

"We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood."

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or "waiting days" between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and Logfire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird, because obviously I didn't create Pydantic originally for use in AI; it predates LLMs. But we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around it, exactly when you convert and when you don't, are complicated, but because of that, it's more than just validation. Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library.
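A minimal sketch of the coercion and strict-mode behavior Samuel describes here, using standard Pydantic v2 (the model names are invented for illustration):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class Order(BaseModel):
    quantity: int  # type hints define the schema


# Lax mode (the default) coerces the string "123" to the int 123.
print(Order(quantity="123").quantity)  # -> 123


class StrictOrder(BaseModel):
    model_config = ConfigDict(strict=True)  # strict mode disables coercion
    quantity: int


try:
    StrictOrder(quantity="123")
except ValidationError as e:
    print(e)  # strict mode rejects the string instead of coercing it
```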
It uses type hints for the most part, and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output council in open source that people were talking about, or was it just random?

Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along. Like, the first I ever heard of him was over a weekend when I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it can kind of be one source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land. Every now and then there is a new sort of in-vogue validation library that takes over for quite a few years, and then maybe something else comes along. Is Pydantic done, like, the core Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2, as in v2 was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move to basically storing the data in Rust types after validation. Not completely; we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing, like, validation and then serialization, you would never have to go via a Python type. We reckon that can give us another three to five times speed up. That's probably the biggest thing. Also, changing how easy it is to basically extend Pydantic and define how particular types, like for example NumPy arrays, are validated and serialized. But there's also stuff going on, for example, in Jiter, the JSON library in Rust that does the JSON parsing. It has a SIMD implementation at the moment only for AMD64, so we can add to that; we need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization, for when you just want to put the data into a database and probably load it again from Pydantic. So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising?
And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is: Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And Pydantic is not just used in the SDKs of all of the AI libraries; I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of model performance, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests, was actually Pydantic, which shows how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies. In answer to your question about how we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building good general purpose observability inside Logfire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Not that we want to get away from it, but the appetite, both in Pydantic and in Logfire, to go and build with AI is enormous, because it kind of makes sense, right? If you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say, globally; obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. And so everyone needs that stuff. And there's so much yet to be figured out, so much space to do things better in the ecosystem, in a way that, like, to go and implement a database that's better than Postgres is a Sisyphean task, whereas building tools that are better for GenAI than some of the stuff that's about now is not very difficult, putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, you released Pydantic AI recently, which is, you know, an agent framework. And early on, I would say everybody, you know, Langchain and the like, gave Pydantic kind of first class support; a lot of these frameworks were trying to use you to be better. What was the decision behind "we should do our own framework"? Were there any design decisions that you disagreed with, any workloads that you think people didn't support well?

Samuel [00:08:05]: Well, it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah.
I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would basically skip some stuff.

Samuel [00:08:42]: I'm shocked by the quality of some of the agent frameworks that have come out recently from well-respected names, which just seems to be opportunism, and I have little time for that. But the early ones, I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think the gap we saw, and the thing we were frustrated by, was the production readiness. And that means things like type checking, even if type checking makes it hard. Pydantic AI, I will put my hand up now and say it has a lot of generics, and it's probably easier to use it if you've written a bit of Rust and you really understand generics. We're not claiming that that makes it the easiest thing to use in all cases; we think it makes it good for production applications in big systems, where type checking is a no-brainer in Python. But there are also a bunch of things we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run, in Python, as part of tests, and every single print output within an example is checked during tests. So it will always be up to date. And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but are, surprisingly, not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera. I think these are no-brainers, but weirdly they're not followed by some of the other libraries.

Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks. What does Pydantic AI do?

Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks. But I will tell you: when I built Pydantic, when I built Logfire, and when I built Pydantic AI, my methodology was not to go and research and review all of the other things. I kind of work out what I want and I go and build it, and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite, not that we would want to go and rename them to agent-lite, but the point is you probably build them together to build something most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt, and some tools, and a structured return type if you want it. That covers the vast majority of cases. There are situations where you want to go further, the most complex workflows, where you want graphs, and I resisted graphs for quite a while.
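A sketch of the agent building block Samuel just described: a model, a system prompt, tools, and a structured result type. The names follow the Pydantic AI docs from around the time of this episode (Agent, result_type, tool_plain, run_sync) and may since have changed; the output model and tool are invented for illustration:

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class Answer(BaseModel):
    city: str
    country: str


agent = Agent(
    "openai:gpt-4o",                      # swappable model identifier
    system_prompt="Extract the city being discussed.",
    result_type=Answer,                   # structured, validated output
)


@agent.tool_plain  # a tool the model may call during the run
def country_of(city: str) -> str:
    return {"London": "UK", "Dublin": "Ireland"}.get(city, "unknown")


# Requires an OPENAI_API_KEY at runtime; this is a sketch, not a guarantee
# of the current API surface.
result = agent.run_sync("The 2012 Olympics were held in London.")
print(result.data)  # -> Answer(city='London', country='UK')
```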
I was sort of of the opinion that you didn't need them and you could use standard Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to: yeah, I can totally see why graphs are useful. But then we have the problem that by default, they're not type safe. Because if you have a like add-edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some... not all the graph libraries are AI specific. So there's a graph library, but it does like a basic runtime type checking, ironically using Pydantic, to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using, normally, dataclasses to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there are masses of use cases for them, both in gen AI and other development. But also, software's all going to have to interact with gen AI, right? It's going to be like web: there's no longer a web department in a company; it's just that all the developers are building for web, building with databases. The same is going to be true for gen AI.

Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are the structures in your mind?

Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning. That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows, where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my "yeah, but you could do that in standard flow control in Python" became a less and less compelling argument to me, because I've maintained those systems that end up with spaghetti code. And I could see the appeal of this structured way of defining the workflow of my code. And it's really neat that just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it.
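A sketch of the idea just described, modeled on the pydantic-graph examples: nodes are dataclasses, and the return annotation of each node's run method is what the library introspects to build the edges. Exact class and method names here are recalled from the docs at the time and may differ:

```python
from __future__ import annotations

from dataclasses import dataclass

from pydantic_graph import BaseNode, End, Graph, GraphRunContext


@dataclass
class CheckDivisible(BaseNode[None, None, int]):
    value: int

    # The return annotation (Increment | End[int]) is introspected to
    # derive the graph's edges, so a type checker can verify the flow.
    async def run(self, ctx: GraphRunContext) -> Increment | End[int]:
        if self.value % 5 == 0:
            return End(self.value)  # reaching End finishes the run
        return Increment(self.value)


@dataclass
class Increment(BaseNode):
    value: int

    async def run(self, ctx: GraphRunContext) -> CheckDivisible:
        return CheckDivisible(self.value + 1)


graph = Graph(nodes=(CheckDivisible, Increment))
result = graph.run_sync(CheckDivisible(4))  # call a node, get a node back, until End
print(result.output)  # -> 5
```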
I think the question always is... I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is a little bit of a nice compromise: it looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean, and that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at lots of the graph examples that, like, Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of the workflow for, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calling another function. And some of those lines are "wait six days for the customer to print their piece of paper and put it in the post". And if you're writing your demo project or your proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now, how do we manage that concept, to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph, and it continues to run. So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but right now, can Pydantic AI actually resume, you know, six days later, like you said? Or is this just a theoretical thing we can get to someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now, and obviously Instructor, but now you're going into sort of the more orchestrated things like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomil would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that just yet, at least.
We're, you know, we're just building a Python library. And what's crazy about our graph implementation is, sure, there's a bit of magic in introspecting the return type, you know, extracting things from unions, stuff like that. But the actual calls, as I say, are literally call a function, get back a thing, and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find out, not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.

Swyx [00:17:52]: Bogomil was my board member when I was at Temporal. And I think, just generally, also having been a workflow engine investor and participant in this space, it's a big space. Everyone needs different functions. The one thing that I would say is that yours, you know, as a library, doesn't have that much control over the infrastructure. I do like the idea that each new agent, or whatever unit of work you call that, should spin up in these sort of isolated boundaries. Whereas in yours, I think, everything runs in the same process. But you ideally want to spin out its own little container of things.

Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in, in theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring it right now, because I'm super excited about that as a compute level for some of this stuff, where, exactly as you're saying, you can run everything as an individual worker function and distribute it. And it's resilient to failure, et cetera, et cetera.

Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line.

Samuel [00:19:19]: Especially as I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will get there soon.

Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported in full? I actually wasn't fully aware of what the status of that thing is.

Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser, is supported now by Cloudflare. They're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want to have a difference; you basically want to be able to have shared memory for all the different Pydantic installations, effectively.
That's what they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, working out how to get Python running on Cloudflare's network.

Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... you have just a different build of Pydantic, and that ships with whatever your distro for Cloudflare Workers is.

Samuel [00:20:36]: Yes, that's exactly it. So Pyodide has builds for Pydantic Core and for things like NumPy, and basically all of the popular binary libraries. And you're doing exactly that, right? You're using Rust to compile to WebAssembly, and then you're calling that shared library from Python. And it's unbelievably complicated, but it works. Okay.

Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents. There's single agents, there's agent delegation, there's programmatic agent handoff, which seems to be what OpenAI Swarm would be like, and then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?

Samuel [00:21:21]: Yeah, roughly.

Swyx [00:21:22]: Okay. You had some expression around OpenAI Swarm.

Samuel [00:21:25]: Well, and indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah, which makes sense. And in fact, it was specifically asking how we can give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my "just call the next agent with Python code" was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's that that led us to get to graphs.

Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is: I think Anthropic did a very good public service, and also a kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at.

Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.

Swyx [00:22:26]: Tell me if you're not. Yeah, I mean, that was the first, I think, authoritative view of what kinds of graphs exist in agents, and "let's give each of them a name so that everyone is on the same page". So I'm just kind of curious if you have community names or top five patterns of graphs.

Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But it's only been a couple of weeks. And of course, the point is that because they're relatively unopinionated about what you can go and do with them, you can go and do lots of things with them, but they don't have the structure to go and have specific names, as much as perhaps some other systems do.
I think what our agents are, which have a name, and I can't remember what it is, but this basic system of "decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit", is one form of graph. As I say, our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just "yep, I built a graph", or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.

Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that, I need this. And they scale back a little bit.

Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. But yeah.

Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of or care about that. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.

Samuel [00:24:24]: I haven't. I probably should. I should probably do it, because I should probably get better at selling to enterprises. But no, not right now.

Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control, and maybe more observability, if you break everything out into composed little models and chain them together. And obviously, then you need an orchestration framework to do that.

Samuel [00:24:47]: Yeah. And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But even if you have the observability through Logfire, so that you can see what was going on, if you don't have a nice hook point to say "hang on, this has all gone wrong", you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. What you need to be able to do is effectively iterate through these runs, so that you can have your own control flow where you're like, okay, we've gone too far. And that's where one of the neat things about our graph implementation comes in: you can basically call next in a loop rather than just running the full graph, and therefore you have this opportunity to break out of it. But yeah, basically, it's the same point, which is: if you have too big a unit of work, to some extent whether or not it involves gen AI, though obviously it's particularly problematic in gen AI, you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.

Swyx [00:25:39]: I'll drop one thing on this. We're not going to resolve it here, but I'll drop it and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run.
So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs, and you could manually orchestrate a nice little graph that does: reflect, think about whether you need more inference-time compute (you know, that's the hot term now), and then think again, and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.

Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it isn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents, and we don't really need any of these abstraction layers. We can just give our model access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. You know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right? And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes that they take.

Swyx [00:27:42]: Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is: oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.

Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.

Swyx [00:28:28]: I agree with that.

Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: on every commit, at least every commit to main, we run tests against the live models. Not lots of tests, but a handful of them. Oh, okay. And we had a point last week where, yeah, with GLA, one test was failing every single run. I think we might even have commented that one out at the moment. So all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, just much more reliable.

Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and then, you know, we can agree to disagree, that this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing, and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck, because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?

Samuel [00:29:49]: I think if a company who were well known, who were respected by everyone, had come along at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is, I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI: OpenAI's API is the one to do. So DeepSeek support that. Groq support that. Ollama also does it. I mean, if there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type checked. It uses Pydantic, so I'm biased. But I mean, I think it's pretty well respected anyway.

Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we could discuss all this all day. There's a lot of APIs. I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of, you know, routing models and picking models is evals. How do you actually figure out which one you should be using? First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. You know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip.
I think you also have this test model, where, just through Python, you try and figure out what the model might respond with without actually calling the model. And then you have the function model, where people can kind of customize outputs. Any other fun stories from there, maybe? Or is it just what you see is what you get, so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get. On the evals, I think watch this space. I think it's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in Logfire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have Logfire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And yeah, as people think about evals, even, what are the right things to measure? What is the right number of samples that you need to actually start making decisions?

Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact "how many examples do you need", for example, that's a much harder question to answer, because it's deep within how models operate. In terms of Logfire, one of the reasons we built Logfire the way we have, where we allow you to write SQL directly against your data, and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of Logfire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of testing of what's possible by basically writing SQL directly against Logfire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project.
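To make the semantic-attributes point concrete, here is a sketch of tagging an LLM span with gen_ai.* attributes via the OpenTelemetry Python API; the attribute names follow the draft GenAI conventions under discussion and are still in flux, so treat them as assumptions:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-genai-app")

# Wrap an LLM call in a span and attach draft gen_ai.* attributes, so any
# OTel-compatible backend (Logfire, Datadog, ...) can interpret it the same way.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    # ... make the actual model call here ...
    span.set_attribute("gen_ai.usage.input_tokens", 42)
    span.set_attribute("gen_ai.usage.output_tokens", 180)
```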
A lot of it's still being added at the moment. But basically the idea is that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And so having that unification allows us to go and basically compare different libraries, compare different models, much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability. With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and, you know, they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTEL ties into the AI workflows? There's kind of like the question of: is a trace, and a span, like an LLM call? Is it the agent? Is it kind of like the broader thing you're tracking? How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, that I think may have now been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's by any means the common use case. But I suppose it's fine for it to be there. The majority of the stuff in OTEL is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet implemented, is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTEL is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space, and in exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability, traditionally, sure, everyone would say "our observability data is very important, we must keep it safe", but actually companies worked very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would. It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of that data is going to be sent.
And I think that's why companies like Langsmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like a self-hosting option for the platform, basically?

Samuel [00:38:23]: Yeah. So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see "password" as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance depends on a third party. You know, if you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that. Here you're going to have spans that maybe take a long time to perform because the GLA API is not working, or because OpenAI is overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build their own? Why does everybody want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen, and it's going to live inside OTEL, and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLLMetry: like, interesting project.
But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. I suppose what happens to the agent frameworks, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But I mean, I was on the OpenTelemetry call last week talking about GenAI, so I guess this is kind of semi-public. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not natively implemented. And obviously they're having quite a tough time. And I was realizing, I hadn't really realized this before, how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out for instrumentation of. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in or sort of reifying things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel: you can always go and send another attribute, and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like, any learnings building LogFire?
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But we've got to the right one in the end. I think we could have realized that Timescale wasn't right. ClickHouse and Timescale both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is like an extension on top of Postgres. Not super meant for like high volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on, because apparently it does now. So they've obviously gone and built their proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything had to be a map, and maps are a pain when you're trying to do lookups on JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make them top level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse. Also, ClickHouse had some really ugly edge cases. Like, by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they changed it to raise an error and just say you have to have the same unit. Then I complained a bit more. And I think, as I understand it now, they convert between units. But stuff like that, when a lot of what you're doing is comparing the durations of spans, was really painful. Also things like you can't subtract two datetimes to get an interval; you have to use the date_sub function. But the fundamental thing is, because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your developers are going to write the SQL, and once it's written and it's working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with ClickHouse and, in fact, Timescale is the ultimate architecture, the Snowflake architecture of binary data in object store queried with some kind of cache nearby. They both have it, but it's closed source and you only get it if you go and use their hosted versions.
And so even if we had got through all the problems with Timescale or ClickHouse, we would end up, you know, with them wanting to take their 80% margin, and that would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it. So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string contains. And it's just Rust code, and I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that would be much harder if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But as a team of people who are not C++ experts, that's much scarier than DataFusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know, so but I think you obviously have an open source first mindset. So that makes a lot of sense.Samuel [00:48:05]: I think if we were a better startup, faster moving and just headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff. But, you know, my last question on LogFire is really, you know, at some point you run out of community goodwill, just because like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you going to really spike here? What's the differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud first. The same is going to happen to GenAI. And so whether or not you're trying to compete with Datadog or with Arize and LangSmith, you've got to do first class. You've got to do general purpose observability with first class support for AI.
And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general purpose observability platform with first class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done, if you search Datadog logging Python, and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well. But there's enormous space in most of observability to do DX better.Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where you can only use the open source, like, one-year-old version of it. Was that a hard decision?Samuel [00:50:41]: So to be clear, LogFire is closed source. So Pydantic and PydanticAI are MIT licensed and properly open source. And then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's open source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company, we're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That, and now PydanticAI, are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not sort of trying to walk a line of "it's open source, but really we make it hard to deploy so you probably want to pay us." We're trying to be straight that it's paid for. We could change that at some point in the future, but it's not an immediate plan.Alessio [00:51:48]: All right. So I saw this new, I don't know if it's a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah. What's the pydantic.run story?Samuel [00:52:09]: So pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also PydanticAI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run PydanticAI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or like what the spend is. The other thing we wanted to b

Test & Code - Python Testing & Development
pytest-mock : Mocking in pytest

Test & Code - Python Testing & Development

Play Episode Listen Later Jan 31, 2025 10:42


pytest-mock is currently the #3 pytest plugin. pytest-mock is a wrapper around unittest.mock.

In this episode:
- Why the pytest-mock plugin is awesome
- What is mocking, patching, and monkey patching
- What, if any, is the difference between mock, fake, spy, stub, and why we might need these in testing
- Some history of mock in Python and how mock became unittest.mock
- From unittest.mock: patch.object, patch.object with autospec, using these as context managers
- pytest-mock: the mocker fixture, cleanup in teardown, using mocker.patch, mocker.spy, and mocker.stub
- Why it's awesome and why you might want to use it over straight unittest.mock

Links:
- top pytest plugins list
- pytest-mock documentation
- unittest.mock
- Podcast episode discussing unittest.mock with Michael Foord
- monkeypatch

Learn pytest
pytest is the number one test framework for Python.
Learn the basics super fast with Hello, pytest!
Then later you can become a pytest expert with The Complete pytest Course
Both courses are at courses.pythontest.com
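A rough sketch of the mocker fixture in action; the module and function names ("app.payments", "charge") are hypothetical, invented here for illustration:

    # test_payments.py
    import app.payments  # hypothetical module under test

    def test_charge_is_called(mocker):
        # mocker.patch replaces the real function for the duration of the
        # test; pytest-mock undoes the patch automatically at teardown.
        fake = mocker.patch("app.payments.charge", return_value="ok")

        result = app.payments.charge(42)

        assert result == "ok"
        fake.assert_called_once_with(42)

mocker.spy and mocker.stub work the same way: the fixture keeps track of everything it patched and restores it after the test, which is the main ergonomic win over hand-rolled unittest.mock.patch calls.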

Test & Code - Python Testing & Development
pytest-cov : The pytest plugin for measuring coverage

Test & Code - Python Testing & Development

Play Episode Listen Later Jan 23, 2025 12:02


pytest-cov is a pytest plugin that helps produce coverage reports using Coverage.py.

In this episode, we'll discuss:
- what Coverage.py is
- why you should measure code coverage on both your source and test code
- what pytest-cov is
- extra features pytest-cov gives you over and above coverage.py
- and generally why using both is awesome

Links:
- coverage.py
- pytest-cov
- how to set up context reports
- Top pytest Plugins

Errata:
I mentioned that Coverage has had the ability to show context (which line is covered by which test) for the past year or so. However, that feature was released in Oct 2018, in the coverage 5.0 alpha. That's over 6 years. Oops. Sorry, Ned.

Learn pytest
pytest is the number one test framework for Python.
Learn the basics super fast with Hello, pytest!
Then later you can become a pytest expert with The Complete pytest Course
Both courses are at courses.pythontest.com
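A minimal sketch of how pytest-cov is typically driven ("mypkg" is a hypothetical package name; the flags are pytest-cov's documented options):

    # Programmatic equivalent of:
    #   pytest --cov=mypkg --cov-report=term-missing
    # which runs the suite, measures coverage on mypkg, and lists
    # uncovered line numbers in the terminal report.
    import pytest

    pytest.main(["--cov=mypkg", "--cov-report=term-missing"])

    # The context feature mentioned in the errata is enabled with
    #   pytest --cov=mypkg --cov-context=test
    # and inspected afterwards with: coverage html --show-contexts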

Test & Code - Python Testing & Development
pytest plugins - a full season

Test & Code - Python Testing & Development

Play Episode Listen Later Jan 10, 2025 11:52


This episode kicks off a season of pytest plugins.

In this episode:
- Introduction to pytest plugins
- The pytest.org pytest plugin list
- Finding pytest related packages on PyPI
- The Top pytest plugins list on pythontest.com
- Exploring popular plugins
- Learning from plugin examples

Links:
- Top pytest plugins list
- pytest.org plugin list
- Top PyPI Packages

And links to plugins mentioned in the show can be found at pythontest.com/top-pytest-plugins

Learn pytest
pytest is the number one test framework for Python.
Learn the basics super fast with Hello, pytest!
Then later you can become a pytest expert with The Complete pytest Course
Both courses are at courses.pythontest.com

Python Bytes
#404 The Lost Episode

Python Bytes

Play Episode Listen Later Oct 7, 2024 31:15 Transcription Available


Topics covered in this episode:
- Python 3.13.0 released Oct 7
- PEP 759 – External Wheel Hosting
- pytest-freethreaded
- pytest-edit
- Extras
- Joke

Watch on YouTube

About the show
Sponsored by ScoutAPM: pythonbytes.fm/scout
Connect with the hosts: Michael: @mkennedy@fosstodon.org, Brian: @brianokken@fosstodon.org, Show: @pythonbytes@fosstodon.org
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Brian #1: Python 3.13.0 released Oct 7
That's today! What's New In Python 3.13
Interpreter (REPL) improvements:
- exit works (really, this is worth the release right here)
- Multiline editing with history preservation; history sticks around between sessions
- Direct support for REPL-specific commands like help, exit, and quit, without the need to call them as functions
- Prompts and tracebacks with color enabled by default
- Interactive help browsing using F1 with a separate command history
- History browsing using F2 that skips output as well as the >>> and … prompts
- "Paste mode" with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt)
- exit now works without parens
Improved error messages:
- Colorful tracebacks
- Better messages for naming a script/module the same name as a stdlib module, naming a script/module the same name as an installed third party module, or misspelling a keyword argument
Free threaded CPython:
- Included in official installers on Windows and macOS
- Read these links to figure out how; it's not turned on by default
Lots more; see the What's New page.

Michael #2: PEP 759 – External Wheel Hosting
- pypi.org ships over 66 petabytes / month backed by Fastly
- There are hard project size limits for publishers to PyPI
- We can host the essence of a .whl as a .rim file, then allow an external download URL
- Security: several factors as described in this proposal should mitigate security concerns with externally hosted wheels, such as:
  - Wheel file checksums MUST be included in .rim files, and once uploaded cannot be changed. Since the checksum stored on PyPI is immutable and required, it is not possible to spoof an external wheel file, even if the owning organization lost control of their hosting domain.
  - Externally hosted wheels MUST be served over HTTPS.
  - In order to serve externally hosted wheels, organizations MUST be approved by the PyPI admins.

Brian #3: pytest-freethreaded
- PyCon JP 2024 Team: This extension was created at PyCon JP sprints with Anthony Shaw and 7 other folks listed in credits.
- "A pytest plugin for helping verify that your tests and libraries are thread-safe with the Python 3.13 experimental freethreaded mode."
- Testing your project for compatibility with freethreaded Python. Testing in a single thread doesn't test that. Neither does testing with pytest-xdist, because it uses multiprocessing to parallelize tests.
- So, Ant and others "made this plugin to help you run your tests in a thread-pool with the GIL disabled, to help you identify if your tests are thread-safe."
- "And the first library we tested it on (which was marked as compatible) caused a segmentation fault in CPython! So you should give this a go if you're a package maintainer."

Michael #4: pytest-edit
A simple pytest plugin for opening an editor on the failed tests.
Type pytest --edit to open the failing test code. Be sure to set your favorite editor in the ENV variables.

Extras
Michael:
- New way to explore Talk Python courses via topics. This has been in our mobile apps since their rewrite but finally comes to the web.
- Let's go easy on PyPI, OK? essay
- Hynek's video: uv IS the Future of Python Packaging
- djade-pre-commit
- Polyfill.io, BootCDN, Bootcss, Staticfile attack traced to 1 operator
- PurgeCSS CLI
- Python 3.12.7 released
- Incremental GC and pushing back the 3.13.0 release
- uv making the rounds
- LLM fatigue, is it real?
- Take the Python Developers Survey 2024

Joke: Funny 404 pages. We have something at least interesting at pythonbytes.fm

Test & Code - Python Testing & Development
221: How to get pytest to import your code under test

Test & Code - Python Testing & Development

Play Episode Listen Later Jun 3, 2024 7:41


We've got some code we want to test, and some tests. The tests need to be able to import the code under test, or at least the API to it, in order to run tests against it. How do we do that? How do we set things up so that our tests can import our code?

In this episode, we discuss two options (sketched below):
- Installing the code under test as a pip installable package with `pip install -e /path/to/local/package`.
- Using the pythonpath pytest setting.

Sponsored by Mailtrap.io
An Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, an email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IO

Sponsored by The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
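A rough sketch of both options; the paths and the package name "mypkg" are hypothetical:

    # Option 1: install the code under test as an editable package,
    # so tests can import it from anywhere:
    #
    #   pip install -e /path/to/local/package
    #
    # Option 2: let pytest put the source directory on sys.path via the
    # pythonpath ini setting (available since pytest 7.0), e.g. pytest.ini:
    #
    #   [pytest]
    #   pythonpath = src
    #
    # Either way, the test module can then simply do:
    import mypkg  # hypothetical package under test

    def test_import_works():
        assert mypkg is not None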

Python Bytes
#382 A Simple Game

Python Bytes

Play Episode Listen Later May 7, 2024 28:10


Topics covered in this episode:
- act: Run your GitHub Actions locally!
- portr
- Annotating args and kwargs in Python
- github badges
- Extras
- Joke

Watch on YouTube

About the show
Sponsored by ScoutAPM: pythonbytes.fm/scout
Connect with the hosts: Michael: @mkennedy@fosstodon.org, Brian: @brianokken@fosstodon.org, Show: @pythonbytes@fosstodon.org
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Brian #1: act: Run your GitHub Actions locally!
Why?
- "Fast Feedback - Rather than having to commit/push every time you want to test out the changes you are making to your .github/workflows/ files (or for any changes to embedded GitHub actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides."
- "Local Task Runner - I love make. However, I also hate repeating myself. With act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!"
- Docs: nektosact.com
- Uses Docker to run containers for each action.

Michael #2: portr
- Open source ngrok alternative designed for teams
- Expose local http, tcp or websocket connections to the public internet
- Warning: Portr is currently in beta. Expect bugs and anticipate breaking changes.
- Server setup (docker basically).

Brian #3: Annotating args and kwargs in Python
Redowan Delowar. I don't think I've ever tried, but this is a fun rabbit hole. Leveraging bits of PEP 589, PEP 646, PEP 655, and PEP 692. Punchline:

    from typing import TypedDict, Unpack  # Python 3.12+
    # from typing_extensions import TypedDict, Unpack  # < Python 3.12

    class Kw(TypedDict):
        key1: int
        key2: bool

    def foo(*args: Unpack[tuple[int, str]], **kwargs: Unpack[Kw]) -> None:
        ...

A recent pic from Redowan's blog: TypeIs does what I thought TypeGuard would do in Python

Michael #4: github badges
A curated list of GitHub badges for your next project

Extras
Brian:
- Fake job interviews target developers with new Python backdoor
- Later this week, course.pythontest.com will shift from Teachable to Podia. Same great content, just a different backend. To celebrate, get 25% off at pythontest.podia.com now through this Sunday using coupon code PYTEST.
- Getting the most out of PyCon, including juggling - Rob Ludwick. Latest PythonTest episode, also cross posted to pythonpeople.fm
- 3D visualization of dom

Michael:
- Djangonauts Space Session 2 Applications Open! More background at Djangonauts, Ready for Blast-Off on Talk Python.
- Self-Hosted Open Source - Michael Kennedy on Django Chat

Joke: silly games
Closing song: Permission Granted

Test & Code - Python Testing & Development
220: Getting the most out of PyCon, including juggling - Rob Ludwick

Test & Code - Python Testing & Development

Play Episode Listen Later May 4, 2024 41:27 Transcription Available


PyCon US is just around the corner. I've asked Rob Ludwick to come on the show to discuss how to get the most out of your PyCon experience. There's a lot to do. A lot of activities to juggle, including actual juggling, which is where we start the conversation.

Even if you never get a chance to go to PyCon, I hope this interview helps you get a feel for the welcoming aspect of the Python community.

I recorded this interview as an episode for one of my other podcasts, Python People. But I think it's got some great pre-conference advice, so I'm sharing it here on Python Test as well.

We talk about:
- Juggling at PyCon
- How to get the most out of PyCon:
  - Watching talks
  - Hallway track
  - Open spaces
  - Lightning talks
  - Expo hall / vendor space
  - Poster sessions
  - Job fair
  - A welcoming community
  - Tutorials
  - Sprints
- But mostly about the people of Python and PyCon.

"Python enables smart people to work faster" - Rob Ludwick

Sponsored by Mailtrap.io
An Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, an email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IO

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
Now with Full Line Code Completion
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Test & Code - Python Testing & Development
219: Building Django Apps & SaaS Pegasus - Cory Zue

Test & Code - Python Testing & Development

Play Episode Listen Later Apr 24, 2024 49:00


I'm starting a SaaS project using Django, and there are tons of decisions right out of the gate. To help me navigate these decisions, I've brought on Cory Zue. Cory is the creator of SaaS Pegasus, and has tons of experience with Django.

Some of the topics discussed:
- Building Django applications
- SaaS Pegasus
- placecard.me
- What boilerplate projects are
- Django cookiecutter
- Cookiecutter
- Which database to use, probably PostgreSQL
- Authentication choices, probably Allauth
- Docker, Docker for development, Docker for deployment
- Deployment targets / hosting services: Render, Heroku, Fly.io for PaaS options
- Front end frameworks: Bootstrap, Tailwind, DaisyUI, TailwindUI
- HTMX vs React vs straight Django templates
- Rockets
- Font Awesome
- and of course, SaaS Pegasus

Sponsored by Mailtrap.io
An Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, an email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IO

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
Now with Full Line Code Completion
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Test & Code - Python Testing & Development
218: Balancing test coverage with test costs - Nicole Tietz-Sokolskaya

Test & Code - Python Testing & Development

Play Episode Listen Later Apr 18, 2024 28:48 Transcription Available


Nicole is a software engineer and writer, and recently wrote about the trade-offs we make when deciding which tests to write and how much testing is enough.

We talk about:
- Balancing schedule vs testing
- How much testing is the right amount of testing
- Should code coverage be measured and tracked
- Good refactoring can reduce code coverage
- Is it worth testing error conditions?
- Are rare error codes ok to just monitor?
- API drift and autospec (see the sketch below)
- Mitigating risk
- Deciding what to test and what not to test
- Focus testing on key money-making features. If there's a bug in this part of the code, how much business impact is there?
- Performance testing needs to approximately match real world workloads
- Cost of a service breaking vs the cost of creating, maintaining, and running tests
- Keeping test suites quick to minimize getting distracted

Links:
- Too much of a good thing: the trade-off we make with tests
- Load testing is hard, and the tools are... not great. But why?
- Yet Another Rust Resource (YARR!)
- Goodhart's law - "When a measure becomes a target, it ceases to be a good measure"

Sponsored by Mailtrap.io
An Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, an email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IO

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
Now with Full Line Code Completion
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
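On the API-drift point, a rough sketch of how autospec catches signature changes; the module and function names ("app.client", "fetch") are hypothetical:

    from unittest import mock

    import app.client  # hypothetical module with fetch(url, timeout)

    def test_fetch_called_correctly():
        # autospec=True makes the mock enforce the real function's
        # signature, so if app.client.fetch() drifts (e.g. drops the
        # timeout parameter), this test fails loudly instead of the mock
        # silently accepting any arguments.
        with mock.patch("app.client.fetch", autospec=True) as fake:
            app.client.fetch("https://example.com", timeout=5)
            fake.assert_called_once_with("https://example.com", timeout=5)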

Test & Code - Python Testing & Development
217: Podcasting / SaaS / Work Life Balance - Justin Jackson

Test & Code - Python Testing & Development

Play Episode Listen Later Apr 11, 2024 57:13 Transcription Available


If you've ever thought about starting a podcast or a SaaS project, you'll want to listen to this episode. Justin is one of the people who motivated me to get started podcasting. He's also running a successful SaaS company, transistor.fm, which hosts this podcast.

Topics:
- Podcasting
- Building new SaaS (software as a service) products
- Balancing work, side hustle, and family
- Great places to snowboard in British Columbia

BTW, this episode was recorded last summer, before I switched to transistor.fm. I've been on Transistor for most of a year now, and I love it.

Links from the show:
- Transistor.fm - excellent podcast hosting, Justin is a co-founder
- How to start a podcast in 2024
- Podcasts from Justin:
  - Build your SaaS - current
  - Build & Launch - an older one, but great
  - MegaMaker - from 2021 / 2022

Sponsored by Mailtrap.io
An Email Delivery Platform that developers love. An email-sending solution with industry-best analytics, SMTP, an email API, SDKs for major programming languages, and 24/7 human support. Try for Free at MAILTRAP.IO

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
Now with Full Line Code Completion
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Hacker Public Radio
HPR4091: Test Driven Development Demo

Hacker Public Radio

Play Episode Listen Later Apr 8, 2024


Test Driven Development Demo with PyTest

TDD
- Discussed in hpr4075
- Write a new test and run it. It should fail.
- Write the minimal code that will pass the test.
- Optionally - refactor the code while ensuring the tests continue to pass.

PyTest
- Framework for writing software tests with python
- Normally used to test python projects, but could test any software that python can launch
- If you can write python, you can write tests in PyTest.
- python assert - check that something is true

Test Discovery
- Files named test*
- Functions named test*

Demo Project
- Trivial app as a demo: print a summary of the latest HPR episode (Title, Host, Date, Audio File)
- How do we get the latest show data: RSS feed, Feed parser, Feed URL

The pytest setup
The python script we want to test will be named hpr_info.py. The test will be in a file named test_hpr_info.py.

test_hpr_info.py:

    import hpr_info

Run pytest:

    ModuleNotFoundError: No module named 'hpr_info'

We have written our first failing test. The minimum code to get pytest to pass is to create an empty file:

    touch hpr_info.py

Run pytest again:

    pytest
    ============================= test session starts ==============================
    platform linux -- Python 3.11.8, pytest-7.4.4, pluggy-1.4.0
    rootdir: /tmp/Demo
    collected 0 items

What just happened
- We created a file named test_hpr_info.py with a single line to import hpr_info
- We ran pytest and it failed because hpr_info.py did not exist
- We created hpr_info.py and pytest ran without an error. This means we confirmed:
  - Pytest found the file named test_hpr_info.py and tried to execute its tests
  - The import line is looking for a file named hpr_info.py

Python Assert
In python, assert tests if a statement is true. For example:

    assert 1 == 1

In pytest, we can use assert to check a function returns a specific value:

    assert module.function() == "Desired Output"

Without doing a comparison, we can also use assert to check if something exists without specifying a specific value:

    assert dictionary.key

Adding a Test
import hpr_info will allow us to test functions inside hpr_info.py. We can reference functions inside hpr_info.py by prepending the name with hpr_info., for example hpr_info.HPR_FEED.
The first step in finding the latest HPR episode is fetching a copy of the feed. Let's add a test to make sure the HPR feed is defined:

    import hpr_info

    def test_hpr_feed_url():
        assert hpr_info.HPR_FEED == "https://hackerpublicradio.org/hpr_ogg_rss.php"

pytest again
Let's run pytest again, and we get the error:

    AttributeError: module 'hpr_info' has no attribute 'HPR_FEED'

So let's add just enough code to hpr_info.py to get the test to pass:

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

Run pytest again and we get 1 passed, indicating that pytest found 1 test, which passed. Hooray, we are doing TDD.

Next Test - Parsing the feed
Let's plan a function that pulls the HPR feed and returns the feed data. We can test that the result of fetching the feed is an HTTP 200:

    def test_get_show_data():
        show_data = hpr_info.get_show_data()
        assert show_data.status == 200

Now when we run pytest we get 1 failed, 1 passed, and we can see the error:

    AttributeError: module 'hpr_info' has no attribute 'get_show_data'

Let's write the code to get the new test to pass. We will use the feedparser python module to make it easier to parse the rss feed. After we add the import and the new function, hpr_info.py looks like this:

    import feedparser

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

    def get_show_data():
        showdata = feedparser.parse(HPR_FEED)
        return showdata

Let's run pytest again. When I have more than one test, I like to add the -v flag so I can see each test as it runs:

    test_hpr_info.py::test_hpr_feed_url PASSED     [ 50%]
    test_hpr_info.py::test_get_show_data PASSED    [100%]

Next Test - Get the most recent episode from the feed
Now that we have the feed, let's test getting the first episode. feedparser entries are dictionaries, so let's test what the function returns to make sure it looks like an rss feed entry:

    def test_get_latest_entry():
        latest_entry = hpr_info.get_latest_entry()
        assert latest_entry["title"]
        assert latest_entry["published"]

After we verify the test fails, we can write the code to return the newest entry data in hpr_info.py, and pytest -v will show 3 passing tests:

    def get_latest_entry():
        showdata = get_show_data()
        return showdata["entries"][0]

Final Test
Let's test a function to see if it returns the values we want to print. We don't test for specific values, just that the data exists:

    def test_get_entry_data():
        entry_data = hpr_info.get_entry_data(hpr_info.get_latest_entry())
        assert entry_data["title"]
        assert entry_data["host"]
        assert entry_data["published"]
        assert entry_data["file"]

And then code to get the test to pass:

    def get_entry_data(entry):
        for link in entry["links"]:
            if link.get("rel") == "enclosure":
                enclosure = link.get("href")
        return {
            "title": entry["title"],
            "host": entry["authors"][0]["name"],
            "published": entry["published"],
            "file": enclosure,
        }

Finish the HPR info script
Now that we have tested that we can get all the info we want from the most recent episode, let's add the last bit of code to hpr_info.py to print the episode info:

    if __name__ == "__main__":
        most_recent_show = get_entry_data(get_latest_entry())
        print()
        print("Most Recent HPR Episode")
        for x in most_recent_show:
            print(f"{x}: {most_recent_show.get(x)}")

if __name__ == "__main__": ensures code inside this block will only run when the script is called directly, and not when imported by test_hpr_info.py.

Summary
- TDD is a programming method where you write tests prior to writing code.
- TDD forces me to write smaller functions and more modular code.
- Link to HPR info script and tests - TODO
- Additional tests to add:
  - Check the date is the most recent weekday
  - Check that the host is listed on the correspondents page
  - Check others.

Project Files - https://gitlab.com/norrist/hpr-pytest-demo

Test & Code - Python Testing & Development
216: ruff, uv, and Astral: Python tooling, much faster, with Rust

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 11, 2024 48:44 Transcription Available


Charlie Marsh and team are using Rust to make Python tooling faster. Ruff can take the place of Flake8, isort, and Black, and so much more. uv can take the place of pip, pip-tools, and virtualenv.

Astral is Charlie's venture-backed company, and what they have with `ruff` and `uv` is just the start. Since uv is the newest tool, there's quite a bit of the discussion diving into uv.

Links:
- ruff
- Astral
- uv

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Test & Code - Python Testing & Development
215: Staying Technical as a Manager

Test & Code - Python Testing & Development

Play Episode Listen Later Feb 25, 2024 40:49 Transcription Available


Software engineers who move into leadership roles have to balance learning leadership skills, maintaining technical skills, and learning new leadership and technical skills. Matt Makai went from individual contributor to developer relations to leadership in devrel. We discuss how to stay technical, as well as dive into some results of his studies of how companies use developer relations channels.

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Test & Code - Python Testing & Development

If a test fails in a test suite, I'm going to want to re-run the test. I may even want to re-run a test, or a subset of the suite, a bunch of times.

There are a few pytest plugins that help with this:
- pytest-repeat
- pytest-rerunfailures
- pytest-flakefinder
- pytest-instafail

We talk about each of these in this episode; a small marker-based sketch follows these notes.

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
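A rough sketch of how two of these plugins are driven from test code, using their documented markers (the plugins must be installed; CLI equivalents exist too, such as --reruns for pytest-rerunfailures and --count for pytest-repeat):

    import pytest

    # pytest-rerunfailures: retry a known-flaky test up to 3 times
    # before reporting it as failed.
    @pytest.mark.flaky(reruns=3)
    def test_sometimes_flaky():
        ...

    # pytest-repeat: run one test many times in a row, handy for
    # shaking out intermittent failures.
    @pytest.mark.repeat(10)
    def test_repeated():
        ...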

Test & Code - Python Testing & Development
211: Stamp out test dependencies with pytest plugins

Test & Code - Python Testing & Development

Play Episode Listen Later Dec 15, 2023 20:36


We want to be able to run tests in a suite, and debug them in isolation, and have the behavior be the same. If the behavior is different in isolation vs in a suite, it's a nightmare to debug.

In this episode, we'll talk about:
- Causes of dependence
- Testing for dependencies using plugins
- Debugging test dependencies

Plugins discussed (a sketch of an order-dependent test pair follows these notes):
- pytest-randomly
- pytest-reverse
- pytest-random-order

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
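A minimal sketch of the kind of hidden dependency these plugins surface; the shared module-level list is a deliberately bad pattern, invented for illustration:

    # With pytest-randomly installed, test order is shuffled on each run,
    # so this pair passes or fails depending on which test runs first.
    _cache = []  # module-level state shared across tests: the culprit

    def test_writes_to_cache():
        _cache.append("user")
        assert len(_cache) == 1

    def test_assumes_empty_cache():
        assert _cache == []  # only true if this test runs first

    # Reproduce a particular failing order with:
    #   pytest --randomly-seed=<seed>
    # (pytest-randomly prints the seed in the test-run header).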

Test & Code - Python Testing & Development
210: TDD - Refactor while green

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 30, 2023 18:19


Test Driven Development. Red, Green, Refactor. Do we have to do the refactor part? Does the refactor at the end include tests? Or can I refactor the tests at any time? Why is refactor at the end? This episode talks through these questions with an example.

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.

Test & Code - Python Testing & Development
208: Tests with no assert statements

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 30, 2023 14:49


Why on earth would you want to write a test with no assert statements? After all, aren't assert statements how you decide whether a test passes or fails? In this episode, we walk through a handful of useful examples of test code without asserts. We also talk about how these types of tests are a great way to dip your toe into testing.

Sponsored by PyCharm Pro
Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
First 10 to sign up this month get a free month of AI Assistant
See how easy it is to run pytest from PyCharm at pythontest.com/pycharm

The Complete pytest Course
For the fastest way to learn pytest, go to courses.pythontest.com
Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
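A rough sketch of one assert-free style: pytest fails a test on any uncaught exception, so simply exercising the code is a valid check. The load_config function and its module are hypothetical, invented for illustration:

    from myapp.config import load_config  # hypothetical module under test

    def test_load_config_smoke(tmp_path):
        # No asserts needed: if load_config raises, pytest marks the test
        # failed; if it returns normally, the test passes.
        cfg = tmp_path / "app.ini"
        cfg.write_text("[app]\ndebug = true\n")
        load_config(cfg)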

Datacenter Technical Deep Dives
Pytest: Using Tests as Happy Little Experiments with Brian Okken

Datacenter Technical Deep Dives

Play Episode Listen Later Sep 29, 2023 60:10


Brian Okken is author of 'Python Testing with pytest', and host/co-host of Python Bytes, Python People, and Test & Code (whew!!). In this episode we get into pytest basics, using tests as happy little experiments, why you shouldn't be afraid of starting small, and excellent hair care tips (unrelated to pytest)! Resources: https://fosstodon.org/@brianokken https://pythontest.com https://pythonpeople.fm https://pythonbytes.fm/ #Pytest #usingtests #experiments #testcode #coding #python Intro music attribution: Artist - MaxKoMusic

Test & Code - Python Testing & Development
207: Welcome to "Python Test", pytest course, pytest-repeat and pytest-flakefinder

Test & Code - Python Testing & Development

Play Episode Listen Later Sep 26, 2023 14:14


Podcast name: "Test & Code" -> "Python Test"
- Python Bytes Podcast
- Python People Podcast
- Python Test Podcast

Ethereum Cat Herders Podcast
PEEPanEIP #116: Dencun Testing with Parithosh, Mario, Barnabas

Ethereum Cat Herders Podcast

Play Episode Listen Later Sep 7, 2023 67:25


Resources:
- Devnet 8 specs - https://notes.ethereum.org/@ethpandao...
- How to join - https://dencun-devnet-8.ethpandaops.io/
- Slides - https://docs.google.com/presentation/...
- PEEPanEIP playlist
- Dencun playlist
- Check out upcoming EIPs in the Peep an EIP series at https://github.com/ethereum-cat-herde...

Follow on Twitter:
- Parithosh Jayanthi @parithosh_j
- Mario Vega @elbuenmayini
- Barnabas Busa @BarnabasBusa
- Pooja Ranjan @poojaranjan19

Topics covered:
- 0:57 - Skip intro
- 1:27 - Meet Barnabas Busa
- 2:05 - Meet Mario Vega
- 2:46 - Meet Parithosh Jayanthi
- 4:06 - Introduction to Dencun Testing
- 6:19 - Testing flow
- 14:40 - What kind of tests will we run?
- 18:53 - Execution Specs Tests (Python EVM Tests)
- 20:31 - The Pytest generation flow
- 24:24 - What do these tests contain?
- 27:41 - PyTest for Cancun - EVM changes
- 31:08 - EIP-4844 tests
- 34:40 - Current state of Functional Testing for Cancun
- 35:55 - What is Kurtosis? Why do we care?
- 37:42 - Example of Kurtosis configuration
- 38:57 - Background - the Kurtosis engine and how we interact with it
- 39:50 - Interop issues identified with Kurtosis
- 40:35 - Dencun - Devnet 8
- 41:58 - Holesky coming soon
- 42:10 - Inviting name ideas for public testnet
- 42:30 - End of presentation
- 43:28 - Q&A
- 43:50 - Testing sequence
- 45:01 - Testing EIP-7044 & EIP-7045
- 47:17 - When public testnet? Will it be Holesky?
- 50:00 - If the testnet is broken with 3/6, what is it going to be?
- 51:21 - Where are we with Execution Specs? Are we ready for Dencun?
- 54:04 - What can be done to move forks faster?
- 59:32 - Testnet participation - how can solo validators participate?
- 1:01:00 - How far do we see Goerli in the future?
- 1:02:34 - Success story & road blockers
- 1:06:12 - When Dencun?
- 1:07:20 - Message for the community

Test & Code - Python Testing & Development
205: pytest autouse fixtures

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 1, 2023 28:55


On a recent episode of Python Bytes, I suggested it's hard to come up with good examples for pytest autouse fixtures, as there aren't very many good reasons to use them. James Falcon was kind enough to reach out and correct me.

In this episode, we describe (a short sketch follows these notes):
- what fixtures are
- what autouse fixtures are
- great reasons to use them
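A minimal sketch of an autouse fixture: it runs for every test in its scope without being requested by name. The environment-variable name is hypothetical:

    import pytest

    @pytest.fixture(autouse=True)
    def isolate_env(monkeypatch, tmp_path):
        # Applied to every test automatically: each test gets a private
        # working directory and a known environment, without having to
        # list this fixture as a test parameter.
        monkeypatch.chdir(tmp_path)
        monkeypatch.setenv("APP_ENV", "test")  # hypothetical variable

    def test_runs_in_clean_dir(tmp_path):
        # isolate_env already ran, even though it isn't mentioned here.
        pass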

Talk Python To Me - Python conversations for passionate developers
#407: pytest tips and tricks for better testing

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Mar 18, 2023 56:22


If you're like most people, the simplicity and ease of getting started is a big part of pytest's appeal. But beneath that simplicity, there is a lot of power and depth. We have Brian Okken on this episode to dive into his latest pytest tips and tricks for beginners and power users.

Links from the show:
- pytest tips and tricks article: pythontest.com
- Getting started with pytest Course: training.talkpython.fm
- pytest book: pythontest.com
- Python Bytes podcast: pythonbytes.fm
- Brian on Mastodon: @brianokken@fosstodon.org
- Hypothesis: readthedocs.io
- Hypothesis: Reproducibility: readthedocs.io
- Get More Done with the DRY Principle: zapier.com
- "The Key" Keyboard: stackoverflow.blog
- pytest plugins: docs.pytest.org
- Watch this episode on YouTube: youtube.com
- Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy

Sponsors
Microsoft Founders Hub 2023
Brilliant 2023
Talk Python Training

Test & Code - Python Testing & Development
195: What would you change about pytest?

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 8, 2023 57:14


Anthony Sottile and Brian discuss changes that would be cool for pytest, even unrealistic changes. These are changes we'd make to pytest if we didn't have to care about backwards compatibility.

Anthony's list:
- The import system
- Multi-process support out of the box
- Async support
- Changes to the fixture system
- Extend the assert rewriting to make it modular
- Add matchers to the assert mechanism
- Ban test class inheritance

Brian's list:
- Extend assert rewriting for custom rewriting, like check
- pytester matchers available for all tests
- Throw out nose and unittest compatibility plugins
- Throw out setup_module, teardown_module and other xunit style functions
- Remove a bunch of the hook functions
- Documentation improvement of the remaining hook functions, with examples of how to use them
- Start running tests before collection is done
- Split collection and running into two processes
- Have the fixtures be able to know the result of the test during teardown

Special Guest: Anthony Sottile.

Talk Python To Me - Python conversations for passionate developers
#405: Testing in Radio Astronomy with Python and pytest

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Mar 3, 2023 59:21


So you know about dependencies and testing, right? If you're talking to a DB in your app, you have to decide how to approach that with your tests. There are lots of solid options you might pick, and they vary by goals. Do you mock out the DB layer for isolation, or do you use a test DB to make it as real as possible? Do you just punt and use the real DB for expediency? What if your dependency was a huge array of radio telescopes and a rack of hundreds of bespoke servers? That's the challenge on deck today, where we discuss testing radio astronomy with pytest with our guest James Smith. He's a Digital Signal Processing engineer at the South African Radio Astronomy Observatory and has some great stories and tips to share.

Links from the show:
- GPU-based correlator for MeerKAT: github.com
- MeerKAT: sarao.ac.za
- SARAO: sarao.ac.za
- Skarab server: peralex.com
- pycuda: documen.tician.de
- Commercial Telescopes: telescope.com
- PyLaTeX: github.com
- Linearity Test Code: talkpython.fm
- Correlator Context: talkpython.fm
- Watch this episode on YouTube: youtube.com
- Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy

Sponsors
Taipy
Sentry Error Monitoring, Code TALKPYTHON
Talk Python Training

Code for Thought
ByteSized: Testing your Python code

Code for Thought

Play Episode Listen Later Dec 15, 2022 21:07


This last episode of ByteSized RSE before the end of 2022 is about testing your Python code. Testing is an essential part of software development, and a lot of what we cover in this episode applies to any programming and scripting language. For Python, the two big frameworks being used are unittest and PyTest. Unittest is built into Python, whereas PyTest is a module you would need to install separately.

Links:
- https://docs.python.org/3/library/unittest.html - the built-in unit testing framework of Python
- https://docs.python.org/3/library/unittest.mock.html - mock testing in the unittest framework
- https://docs.python.org/3/library/unittest.html#class-and-module-fixtures - fixtures for classes and modules
- https://docs.pytest.org/en/7.2.x/ - the popular PyTest framework
- Mocking can be done with monkeypatch in PyTest: https://docs.pytest.org/en/7.1.x/how-to/monkeypatch.html
- Fixtures in PyTest: https://docs.pytest.org/en/7.2.x/reference/fixtures.html

Books mentioned:
- Working Effectively with Legacy Code, Michael Feathers, ISBN: 9780131177055, Pearson, 2004
- Refactoring: Improving the Design of Existing Code, Martin Fowler, ISBN: 9780134757681, 2nd edition, Addison-Wesley Professional

Byte-sized RSE is presented in collaboration with the UNIVERSE-HPC project.
https://www.imperial.ac.uk/computational-methods/rse/events/byte-sized-rse/ - Byte-sized RSE at Imperial College

Support the Show.
Thank you for listening and your ongoing support. It means the world to us!
Support the show on Patreon: https://www.patreon.com/codeforthought
Get in touch:
- Email: code4thought@proton.me
- UK RSE Slack (ukrse.slack.com): @code4thought or @piddie
- US RSE Slack (usrse.slack.com): @Peter Schmidt
- Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org
- LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile)
- LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile)

This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
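A rough sketch contrasting the two mocking styles linked above, pytest's monkeypatch fixture and unittest.mock; the module "app.weather" and its functions are hypothetical, assuming get_temp() calls fetch_from_api() internally:

    import app.weather  # hypothetical module under test

    # pytest style: the monkeypatch fixture undoes the patch automatically.
    def test_get_temp(monkeypatch):
        monkeypatch.setattr(app.weather, "fetch_from_api", lambda city: 21.5)
        assert app.weather.get_temp("Berlin") == 21.5

    # unittest.mock equivalent, usable in plain unittest as well:
    from unittest import mock

    def test_get_temp_with_mock():
        with mock.patch.object(app.weather, "fetch_from_api",
                               return_value=21.5):
            assert app.weather.get_temp("Berlin") == 21.5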

Python Bytes
#311 Catching Memory Leaks with ... pytest?

Python Bytes

Play Episode Listen Later Nov 24, 2022 49:50


Watch on YouTube

About the show
Python Bytes 311. Sponsored by Microsoft for Startups Founders Hub.
Connect with the hosts: Michael: @mkennedy@fosstodon.org, Brian: @brianokken@fosstodon.org. Special guest: Murilo Cunha.

Michael #1: Latexify
We are used to turning beautiful math into programming symbols. For example: amitness.com/2019/08/math-for-programmers/#sigma
Take this code:

    import math

    def do_math(a, b, c):
        return (-b + math.sqrt(b ** 2 - 4 * a * c)) / (2 * a)

Add the @latexify.function decorator and display do_math in a notebook to get this LaTeX:

    \mathrm{do\_math}(a, b, c) = \frac{-b + \sqrt{b^{2} - 4 a c}}{2 a}

which renders as the usual quadratic formula. I could only get it to install with:

    pip install git+https://github.com/google/latexify_py

Brian #2: prefixed
From Avram Lubkin. "Prefixed provides an alternative implementation of the built-in float which supports formatted output with SI (decimal) and IEC (binary) prefixes."

    >>> from prefixed import Float
    >>> f'{Float(3250):.2h}'
    '3.25k'
    >>> '{:.2h}s'.format(Float(.00001534))
    '15.34μs'
    >>> '{:.2k}B'.format(Float(42467328))
    '40.50MiB'
    >>> f'{Float(2048):.2m}B'
    '2.00KB'

Because prefixed.Float inherits from the built-in float, it behaves exactly the same in most cases. When a math operation is performed with another real number type (float, int), the result will be a prefixed.Float instance.
Also interesting: First new SI prefixes for over 30 years; the new prefixes also show up here.

Murilo #3: dbt
- Open source tool
- CLI tool
- Built with Python

Software at Scale
Software at Scale 52 - Building Build Systems with Benjy Weinberger

Software at Scale

Play Episode Listen Later Nov 17, 2022 62:57


Benjy Weinberger is the co-founder of Toolchain, a build tool platform. He is one of the creators of the original Pants, an in-house Twitter build system focused on Scala, and was the VP of Infrastructure at Foursquare. Toolchain now focuses on Pants 2, a revamped build system.Apple Podcasts | Spotify | Google PodcastsIn this episode, we go back to the basics, and discuss the technical details of scalable build systems, like Pants, Bazel and Buck. A common challenge with these build systems is that it is extremely hard to migrate to them, and have them interoperate with open source tools that are built differently. Benjy's team redesigned Pants with an initial hyper-focus on Python to fix these shortcomings, in an attempt to create a third generation of build tools - one that easily interoperates with differently built packages, but still fast and scalable.Machine-generated Transcript[0:00] Hey, welcome to another episode of the Software at Scale podcast. Joining me here today is Benji Weinberger, previously a software engineer at Google and Twitter, VP of Infrastructure at Foursquare, and now the founder and CEO of Toolchain.Thank you for joining us.Thanks for having me. It's great to be here. Yes. Right from the beginning, I saw that you worked at Google in 2002, which is forever ago, like 20 years ago at this point.What was that experience like? What kind of change did you see as you worked there for a few years?[0:37] As you can imagine, it was absolutely fascinating. And I should mention that while I was at Google from 2002, but that was not my first job.I have been a software engineer for over 25 years. And so there were five years before that where I worked at a couple of companies.One was, and I was living in Israel at the time. So my first job out of college was at Check Point, which was a big successful network security company. And then I worked for a small startup.And then I moved to California and started working at Google. And so I had the experience that I think many people had in those days, and many people still do, of the work you're doing is fascinating, but the tools you're given to do it with as a software engineer are not great.This, I'd had five years of experience of sort of struggling with builds being slow, builds being flaky with everything requiring a lot of effort. There was almost a hazing,ritual quality to it. Like, this is what makes you a great software engineer is struggling through the mud and through the quicksand with this like awful substandard tooling. And,We are not users, we are not people for whom products are meant, right?We make products for other people. Then I got to Google.[2:03] And Google, when I joined, it was actually struggling with a very massive, very slow make file that took forever to parse, let alone run.But the difference was that I had not seen anywhere else was that Google paid a lot of attention to this problem and Google devoted a lot of resources to solving it.And Google was the first place I'd worked and I still I think in many ways the gold standard of developers are first class participants in the business and deserve the best products and the best tools and we will if there's nothing out there for them to use, we will build it in house and we will put a lot of energy into that.And so it was for me, specifically as an engineer.[2:53] A big part of watching that growth from in the sort of early to late 2000s was. 
the growth of engineering process and best practices, and the tools to enforce them. The thing I personally am passionate about is building CI, but I'm also talking about code review tools, all the tooling around source code management and revision control, and everything to do with engineering process. It really was an object lesson - very fascinating - and it really inspired a big chunk of the rest of my career.

I've heard all sorts of things, like Python scripts that had to generate makefiles, and finally they moved the Python to your first version of Blaze. It's a fascinating history.

[3:48] Maybe can you tell us one example of something that was paradigm-changing that you saw - something that created an order-of-magnitude difference in your experience there - and maybe your first aha moment on how good developer tools can be?

[4:09] Sure. I had been used to using make basically up till that point. And Google, as you mentioned, was using make and really squeezing everything it was possible to squeeze out of that lemon, and then some.

[4:25] But in the very early versions of what became Blaze - the big internal build system that inspired Bazel, the open source variant of it today - one thing that really struck me was the integration with the revision control system, which was, and I think still is, Perforce. I imagine many listeners are very familiar with Git; Perforce is very different. I can only partly remember all of its intricacies, because it's been so long since I've used it. But one interesting aspect was that you could do partial checkouts - it really was designed for giant codebases. There was this concept of partial checkouts where you could check out just the bits of the code that you needed. But of course, then the question is, how do you know what those bits are? The build system knows, because the build system knows about dependencies. And so there was this integration, this back and forth, between the

[5:32] Perforce client and the build system, that was very creative and very effective. It allowed you to have locally on your machine only the code you actually needed to work on the piece of the codebase you were working on - basically the files you cared about and all of their transitive dependencies. That to me was a very creative solution to a problem, one that involved some lateral thinking about how seemingly completely unrelated parts of the toolchain could interact. That made me realize: oh, there's a lot of creative thought at work here, and I love it.

[6:17] Yeah, no, I think that makes sense. I interned there way back in 2016, and I was just fascinated. I remember by mistake I ran a grep across the codebase and it just took forever. That's when I realized none of this stuff is local: half the source code is not even checked out to my machine, and my poor grep command is trying to check that out. But also, how seamlessly it would work most of the time behind the scenes. Did you have any experience or did you start working on developer tools then? Or is that just what inspired you towards thinking about developer tools?

I did not work on the developer tools at Google. I
worked on ads and search and sort of Google products, but I was a big user of the developer tools - with one exception, which was that I made some contributions to the

[7:21] protocol buffer compiler, which I think many people may be familiar with. That is a very deep part of the toolchain there, very integrated into everything, and so that gave me some experience of what it's like to hack on a tool that every engineer is using and that is a very deep part of their workflow. But it wasn't until after Google, when I went to Twitter,

[7:56] that I noticed that in my time at Google, the rest of the industry had not kept up. Suddenly I was transported ten years into the past and was back to using very slow, very clunky, flaky tools that were not designed for the tasks we were trying to use them for. And that made me realize: wait a minute, I spent eight years using these great tools, and they don't exist outside of these giant companies. I assumed that Microsoft and Amazon and some other giants probably had similar internal tools, but there was nothing out there for everyone else. So that's when I started hacking on that problem more directly - at Twitter, together with John, who is now my co-founder at Toolchain, and who was actually ahead of me and ahead of the game at Twitter and had already begun working on some solutions. I joined him in that.

Could you maybe describe some of the problems you ran into? Were the builds just taking forever, or was there something else?

[9:09] So there were...

[9:13] A big part of the problem was that the codebase John and I were interested in at Twitter was using Scala. Scala is a fascinating, very rich language,

[9:30] but its compiler is very slow. We were in a situation where you'd make some small change to a file and then builds would take 10 minutes, 20 minutes, 40 minutes. The iteration time on your desktop was incredibly slow. And CI times, where there was CI in place, were also incredibly slow, because of this huge amount of repetitive or near-repetitive work - the build tools were pretty naive about understanding what work actually needs to be done given a set of changes.

[10:22] There's been a ton of work specifically on sbt since then - it has incremental compilation and things like that - but nonetheless, that still doesn't really scale well to the large corporate codebases that people often refer to as monorepos. If you don't want to fragment your codebase, with all of the immense problems that brings, you end up needing tooling that can handle that situation. Some of the biggest challenges are: how do I do less than recompile the entire codebase every time? How can tooling help me be smart about the correct minimal amount of work to do

[11:05] to make compiling and testing as fast as they can be?

[11:12] And I should mention that I dabbled in this problem at Twitter with John. It was when I went to Foursquare that I really got into it, because Foursquare similarly had a big Scala codebase with a very similar problem of incredibly slow builds.

[11:29] The interim solution there was to just upgrade everybody's laptops with more RAM and try to brute-force the problem.
It was very obvious to everyone there - Foursquare had, and still has, lots of very, very smart engineers - that this was not a permanent solution, and we were casting around for

[11:54] what could be smart about Scala builds. I remembered this thing I had hacked on at Twitter, so I reached out to Twitter and asked them to open source it so we could use it and collaborate on it - it wasn't obviously some secret sauce - and that is how the very first version of the Pants open source build system came to be. It was very much designed around Scala, though it did eventually support other languages. And we hacked on it a lot at Foursquare

[12:32] to get the codebase into a state where we could build it sensibly. So the one big challenge is build speed, build performance. The other big one is managing dependencies, keeping your codebase sane as it scales: everything to do with how I audit internal dependencies. It is very, very easy to accidentally create all sorts of dependency tangles and cycles, and end up with a codebase whose dependency structure is unintelligible, really hard to work with, and one that actually impacts performance negatively: if you have a big tangle of dependencies, you're more likely to invalidate a large chunk of your codebase with a small change. So tooling that allows you to reason about the dependencies in your codebase and

[13:24] make them more tractable was the other big problem we were trying to solve.

Mm-hmm. No, I think that makes sense. I'm guessing you already have a good understanding of other build systems like Bazel and Buck. Maybe could you walk us through the differences for Pants V1 - what are the major design differences? And even before that, how was Pants designed? Is it something similar to creating a dependency graph, where you need to explicitly include your dependencies? Is there something else going on?

[14:07] Maybe just a primer.

Yeah, absolutely. So I should mention - you were careful to say Pants V1. The version of Pants that we use today and base our entire technology stack around is what we very unimaginatively call Pants V2, which we launched two years ago almost to the day. That is radically different from Pants V1, from Buck, from Bazel - quite a departure, in ways we can talk about later. One thing that I would say Pants V1 and Buck and Bazel have in common is that they were designed around the use cases of a single organization. Bazel is an

[14:56] open source variant of, or inspired by, Blaze; its design was very much inspired by "here's how Google does engineering." Buck, similarly, for Facebook; and Pants V1, frankly, very similarly for

[15:11] Twitter - and because Foursquare also contributed a lot to it, we sort of nudged it in that direction quite a bit.
But it's still very much: if you did engineering in this one company's specific image, then this might be a good tool for you - but you had to be very much in that lane. What these systems all look like, and the way they are different from much earlier systems, is that

[15:46] they're designed to work in large, scalable codebases that have many moving parts, share a lot of code, and build a lot of different deployables - say, binaries or Docker images or AWS Lambdas or cloud functions or whatever it is you're deploying: Python distributions, JAR files, whatever you're building, you typically have many of them in this codebase. Could be lots of microservices, could be just lots of different things that you're deploying. And they live in the same repo because you want that unity: you want to be able to share code easily, and you don't want to introduce dependency hell problems in your own code. It's bad enough that we have dependency hell problems with third-party code.

[16:34] And so these systems, if you squint at them from thirty thousand feet, are today all very similar, in that they make the problem of managing and building and testing and packaging in a codebase like that much more tractable, and the way they do this is by applying information about the dependencies in your codebase. The important ingredient is that these systems understand and find the relatively fine-grained dependencies in your codebase, and they can use that information to reason about the work that needs to happen. Say you ask to run all the tests in the repo, or in this part of the repo: a naive build system would literally just do that, and first it would compile all the code.

[17:23] But a scalable build system like these would say: well, you've asked me to run these tests, but some of them have already been cached and these others haven't, so I need to look at the ones I actually need to run. Let me see what needs to be done before I can run them. These source files need to be compiled; some of those are already in cache, and these other ones I need to compile. But I can apply concurrency, because there are multiple cores on this machine, and through dependency analysis I can know which compile jobs can run concurrently and which cannot. Then, when it actually comes time to run the tests, I can apply that same concurrency logic.

[18:03] So what these systems have in common is that they use dependency information to make your building, testing, and packaging more tractable in a large codebase. They allow you to avoid the thing that, unfortunately, many organizations find themselves doing, which is fragmenting the codebase into lots of different bits and saying every little team or sub-team works in its own codebase, and they consume each other's code through third-party dependencies - in which case you are introducing a dependency-versioning hell problem.

Yeah. And I think that's also what I've seen that makes the migration to a tool like this hard, because if you have an existing codebase that doesn't lay out dependencies explicitly,

[18:56] that migration becomes challenging. If you already have an import cycle, for example,

[19:01] Bazel is not going to work with you: you need to clean that up, or you need to create one large target, at which point the benefits of using a tool like Bazel just go away. And I think that's a key bit, which is so fascinating, because it's been the same thing over several years.
And I'm hoping that - it sounds like newer tools like Go, at least, force you to not have circular dependencies, and force you to keep your codebase clean, so that it's easy to migrate to a scalable build system.

[19:33] Yes, exactly. It's funny - that is the exact observation that led us to Pants V2. As I said, Pants V1, like Blaze, like Buck, was very much inspired by and developed for the needs of a single company, with other companies using it a little bit - but it also suffered from many of the problems you just mentioned. With Pants V2 - by this time I had left Foursquare and started Toolchain - the exact mission was that every company, every team of any size, should have this kind of tooling, this revolutionary ability to make the codebase fast and tractable at any scale. And that made me realize: we have to design not for what a single company's codebase looks like, but to support thousands of codebases, of all sorts of different challenges and sizes and shapes and languages and frameworks. We actually had to sit down and figure out what it means to make a tool, a system like this, adoptable over and over again, thousands of times. You mentioned,

[20:48] correctly, that it is very hard to adopt one of those earlier tools, because you have to first make your codebase conform to whatever that tool expects, and then you have to write huge amounts of manual metadata to describe the structure and dependencies of your codebase in these so-called build files. If anyone ever sees this written down, it's usually BUILD in all capital letters, like it's yelling at you, and those files are typically huge and contain a huge amount of information:
So it looks at your, and it does this at runtime.So you, you know, almost all your dependencies, 99% of the time, the dependencies are obvious from import statements.[23:05] And there are occasional and you can obviously customize this because sometimes there are runtime dependencies that have to be inferred from like a string. So from a json file or whatever is so there are various ways to customize this and of course you can always override it manually.If you have to be generally speaking ninety.Seven percent of the boilerplate that used to going to build files in those old systems including pans v1 no. You know not claiming we did not make the same choice but we goes away with pans v2 for exactly the reason that you mentioned these tools,because they were designed to be adopted once by a captive audience that has no choice in the matter.And it was designed for how that code base that adopting code base already is. is these tools are very hard to adopt.They are massive, sometimes multi-year projects outside of that organization. And we wanted to build something that you could adopt in days to weeks and would be very easy,to customize to your code base and would not require these massive wholesale changes or huge amounts of metadata.And I think we've achieved that. Yeah, I've always wondered like, why couldn't constructing the build file be a part of the build. In many ways, I know it's expensive to do that every time. So just like.[24:28] Parts of the build that are expensive, you cache it and then you redo it when things change.And it sounds like you've done exactly that with BANs V2.[24:37] We have done exactly that. The results are cached on a profile basis. So the very first time you run something, then dependency inference can take some time. And we are looking at ways to to speed that up.I mean, like no software system has ever done, right? Like it's extremely rare to declare something finished. So we are obviously always looking at ways to speed things up.But yeah, we have done exactly what you mentioned. We don't, I should mention, we don't generate the dependencies into build for, we don't edit build files and then you check them in.We do that a little bit. So I mentioned you do still with PANSTL V2, you need these little tiny build files that just say, here is some code.They typically can literally be one line sometimes, almost like a marker file just to say, here is some code for you to pay attention to.We're even working on getting rid of those.We do have a little script that generates those one time just to help you onboard.But...[25:41] The dependencies really are just generated a runtime as on demand as needed and used a runtime so we don't have this problem of. Trying to automatically add or edit a otherwise human authored file that is then checked in like this generating and checking in files is.Problematic in many ways, especially when those files also have to take human written edits.So we just do away with all of that and the dependency inference is at runtime, on demand, as needed, sort of lazily done, and the information is cached. So both cached in memory in the surpassed V2 has this daemon that runs and caches a huge amount of state in memory.And the results of running dependency inference are also cached on disk. So they survive a daemon restart, etc.I think that makes sense to me. My next question is going to be around why would I want to use panthv2 for a smaller code base, right? 
Like, usually with a smaller codebase, I'm not running into a ton of problems around the build.

[26:55] I guess, do you notice inflection points that people run into - like, okay, my current build setup is not enough? What's the smallest codebase you've seen that you think could benefit? Or is it any codebase in the world, and I should start with a better build system rather than just Python setup.py or whatever?

I think the dividing line is: will this codebase ever be used for more than one thing?

[27:24] Let's take the Python example. If literally all this codebase will ever do is build this one distribution, and a top-level setup.py is all I need - you sometimes see this with open source projects - and the codebase is going to remain relatively small, say only ever a few thousand lines, and even running the tests from scratch every single time takes under five minutes, then you're probably fine. But I think there are two things to look at. One: am I going to be building multiple things in this codebase in the future, or am I doing so already? That is much more common with corporate codebases. You have to ask yourself: my team is growing, more and more people are cooperating on this codebase; I want to be able to deploy multiple microservices, multiple cloud functions, multiple distributions or third-party artifacts,

[28:41] multiple data science jobs - whatever it is you're building. If you ever think you might have more than one, now is the time to think about how to structure the codebase and what tooling allows you to do that effectively. The other thing to look at is build times. If you're using compiled languages, then obviously compilation; and in all cases, testing. If you can already see that tests are taking five, ten, fifteen, twenty minutes, surely you want some technology that speeds that up through caching, through concurrency, through fine-grained invalidation - namely, not even attempting work that isn't necessary for the result that was asked for. Then it's probably time to start thinking about tools like this, because the earlier you adopt them, the easier it is to adopt them. If you wait until you've got a tangle of multiple setup.py files in the repo, it becomes unclear how you manage them and how you keep their dependencies synchronized so there aren't version conflicts across the different projects. Specifically with Python, this is an interesting problem. With other languages there is more pressure: because of the compilation step in JVM languages or Go, you

[30:10] encounter the need for a build system - a build system of some kind - much, much earlier, and then you ask yourself what kind. With Python, you can get by for a while just running your linter and pytest directly, with everything all together in a single virtualenv. But the Python tooling, as mighty as it is, mostly is not designed for larger codebases that deploy multiple things and have multiple different sets of

[30:52] internal and external dependencies. The tooling generally implicitly assumes sort of one top-level setup.py, one top-level
pyproject.toml - one way of configuring things. So especially if you're using Python, say, for Django or Flask apps or for data science, and your codebase is growing and you've hired a bunch of data scientists and there's more and more code going in there - with Python, you need to start thinking about what tooling allows you to scale this codebase.

No, I mostly resonate with that. The first question that comes to my mind: let's talk specifically about the deployment problem. If you're deploying to multiple AWS Lambdas or cloud functions or whatever, the first thought that comes to mind is that I can use separate Docker images that let me easily produce a container image I can ship independently. Would you say that's not enough? I totally get that for the build-time problem a Docker image is not going to solve anything - but how about the deployment step?

[32:02] So again, with deployments, there are two ways a tool like this can really speed things up. One is to only build the things that actually need to be redeployed. Because the tool understands dependencies and can do change analysis, it can figure that out. One of the things Pants V2 does is integrate with Git, so it natively understands how to figure out Git diffs. You can say something like: show me all the Lambdas, say, that are affected by changes between these two branches,

[32:46] and it understands: these files changed, I understand the transitive dependencies of those files, so I can see what actually needs to be deployed. In many cases, many things will not need to be redeployed, because they haven't changed. The other thing is a lot of performance and process improvements around building those images. For Python specifically, we have an executable format called PEX - which stands for Python executable - a single file that embeds all of the Python code needed for your deployable, and all of its transitive external requirements, bundled up into a single self-executing file. This allows you to do things like: if you have to deploy 50 of these, you can basically have a single base Docker image,

[33:52] and then on top of that you add one layer for each of the fifty, where the only difference in that layer is the presence of the PEX file. Whereas without all this, typically you'd have fifty Docker images, in each of which you have to build a virtualenv, which means running

[34:15] pip as part of building the image - and that gets slow and repetitive, and you have to do it 50 times. So even if you are deploying 50 different Docker images, we have ways of speeding that up quite dramatically - again, because of things like dependency analysis, the PEX format, and the ability to build incrementally.

Yeah, I remember that at Dropbox we came up with our own par format to bundle up a Python binary - I think par stood for Python archive; I'm not entirely sure - and it did something remarkably similar, to solve exactly this problem. It just takes so long, especially if you have a large Python codebase. I think that makes sense to me. The other thing one might ask is: with Python, you don't really have too long of a build time, is what you would guess, because there's nothing to build.
Maybe mypy takes some time to do some static analysis, and of course your tests can take forever and you don't want to rerun them. But there isn't that much of a build time you have to think about. Would you say you agree with this, or are there issues that end up happening on real-world codebases?

[35:37] Well, that's a good question. The word "build" means different things to different people, and we've recently taken to using the term CI more, because I think it's clearer what that means. But when I say build, or CI, I mean it in the extended sense: everything you do to go from human-written source code to a verified, tested, deployable artifact. It's true that for Python there's no compilation step, although arguably running mypy is really important - now that I'm really in the habit of using mypy, I will probably never not use it on Python code ever again. So there are

[36:28] build-ish steps for Python, such as type checking, or running code generators like Thrift or Protobuf. And obviously a big, big one is resolving third-party dependencies, such as running pip or Poetry or whatever it is you're using. Those are all build steps. But with Python, really the big, big thing is testing and packaging - primarily testing. With Python you have to be even more rigorous about unit testing than with other languages, because you don't have a compiler catching whole classes of bugs - and again, mypy and type checking really help with that. So "build", to me, in the large sense, includes running tests, includes packaging, includes all the quality control you run, typically in CI or on your desktop, in order to say: I've made some edits, and here's the proof that these edits are good and I can merge or deploy them.

[37:35] I think that makes sense to me. And I certainly saw it: with the limited amount of type checking you can do with Python - mypy is definitely improving on this - you just need to unit test a lot to get the same amount of confidence in your own code, and unit tests are not cheap. The biggest question that comes to my mind is: is Pants V2 focused on Python? Because I have a TypeScript codebase at my workplace, and I would love to replace the TypeScript compiler with something slightly smarter that could tell me: you know what, you don't need to run every unit test on every change.

[38:16] Great question. When we launched Pants V2, which was two years ago, we focused on Python as the initial language, because you had to start somewhere - and in the ten years between the very Scala-centric work we were doing on Pants V1 and the launch of Pants V2, something really major happened in the industry: Python skyrocketed in popularity. Python went from being mostly a little scripting language around the edges of your quote-unquote real code - "I can use Python like fancy Bash" - to people building massive, multi-billion-dollar businesses entirely on Python codebases. A few things drove this. The biggest one, probably, was that Python became the language of choice for data science, and we have strong support for those use cases.
Another was that Django and Flask became very popular for writing web apps. There were also more intricate DevOps use cases, and Python is very popular for DevOps, for various good reasons. So

[39:28] Python became super popular, and that was the first thing we supported in Pants V2. But we've since added support for Go, Java, Scala, Kotlin, and Shell. What we definitely don't have yet is JavaScript and TypeScript. We are looking at that very closely right now, because it is the obvious next thing we want to add. Actually, if any listeners have strong opinions about what that should look like, we would love to hear from them - or from you - on our Slack channels or in our GitHub discussions, where we are having some lively discussions about exactly this. The JavaScript

[40:09] and TypeScript ecosystem is already very rich with tools, and we want to provide only value-add. We don't want to say: here's another paradigm you have to adopt; you've just finished replacing NPM with Yarn, and now you have to do this other thing. We don't want to be another flavor of the month. We only want to do work that leverages the existing ecosystem but adds value. This is what we do with Python, and it's one of the reasons our Python support is very, very strong - much stronger than any comparable tool out there:

[40:49] a lot of leaning in on the existing Python tool ecosystem, but orchestrating those tools in a way that brings rigor and speed to your builds.

I have used the word "we" a lot, and I want to clarify who "we" is here. There is Toolchain, the company, where we're working on SaaS and commercial solutions around Pants, which we can talk about in a bit. But there is also a very robust open source community around Pants that is not tightly held by Toolchain, the company, in the way some other companies' open source projects are. We have a lot of contributors and maintainers on Pants V2 who are not working at Toolchain but are using Pants in their own companies and organizations. So we have a very wide range of use cases and opinions that are brought to bear, and this is very important because, as I mentioned earlier,

[42:05] we are not trying to design a system for one company's or one team's use case. We are working on a system we want adopted over and over and over again, at a wide variety of companies, so it's very important for us to have the contributions and input from a wide variety of teams and companies and people - and it's very fortunate that we now do.

On that note, the thing that comes to my mind is another benefit of a scalable build system like Pants or Bazel or Buck: you don't have to learn various different commands when you are spelunking through the codebase, whether it's a Go codebase or a Java codebase or a TypeScript codebase. You just run pants build X, Y, Z, and it constructs the appropriate artifacts for you - at least that was my experience with Bazel. Is that something you're interested in? Does Pants V2 act as this meta-layer for various other build systems, or is it much more specific and knowledgeable about the languages itself?

[43:09] I think your intuition is correct.
The idea is that we want you to be able to do something like pants test - give it a path to a directory - and it understands what that means: oh, this directory contains Python code, therefore I should run pytest in this way; oh, it also contains some JavaScript code, so I should run the JavaScript tests in this way. It basically provides a conceptual layer above all the individual tools that gives you uniformity across frameworks, across languages. One way to think about this is:

[43:52] the tools are all very imperative. You have to run each one with a whole set of flags and inputs, and you have to know how to use each one separately - like having just the blades of a Swiss Army knife with no actual Swiss Army knife. A tool like Pants encapsulates all of that complexity behind a much simpler command-line interface. So you can do, like I said, pants test, or pants lint, or pants format, and it understands: you asked me to format your code; I see that you have Black and isort configured as formatters, so I will run them - and I happen to know that, because formatting can change the source files, I have to run them sequentially. But when you ask for lint, nothing changes the source files, so I know I can run multiple linters concurrently. That sort of logic. Different tools have different ways of being configured and of telling you what they want to do, but

[44:58] Pants V2 encapsulates all of that away from you, so you get this uniform, simple command-line interface that abstracts away the specifics of those tools and lets you run simple commands. The reason this is important is that this extra layer of indirection is partly what allows Pants to apply things like caching

[45:25] and invalidation and concurrency. The way to think about it is not "I am telling Pants to run tests"; it is "I am telling Pants that I want the results of tests" - a subtle difference. Pants then has the ability to say: well, I don't actually need to run pytest on all these tests, because I have cached results for some of them already, so I will return those from cache. That layer of indirection not only simplifies the UI, it provides the point where you can apply things like caching and concurrency.

Yeah, I think every programmer wants to work with declarative tools. SQL is one of those things where you don't have to know how the database works; if SQL were somewhat easier, that dream would be fulfilled, but I think we're all getting there. I guess my next question is: what benefit do I get by using the Toolchain SaaS product versus Pants V2? When I think about build systems, I think about local development, and I think about CI.

[46:29] Why would I want to use the SaaS product?

That's a great question. Pants does a huge amount of heavy lifting, but in the end it is restricted to the resources on the machine on which it's running. When I talk about cache, I'm talking about the local cache on that machine; when I talk about concurrency, I'm talking about using the cores on your machine. So maybe your CI machine has four cores and your laptop has eight cores.
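As a concrete sketch of the uniform interface just described - goal names as in Pants V2's documentation, paths invented, and depending on setup the launcher may be ./pants rather than pants:

    pants fmt ::                   # format everything; '::' means all targets in the repo
    pants lint src/python/myapp::  # linters don't mutate sources, so they can run concurrently
    pants test src/python/myapp::  # only runs tests whose inputs changed or missed the cache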
So that's the amount of concurrency you get, which is not nothing at all - which is great.

[47:04] But as I mentioned, I worked at Google for many years, and then at other companies where distributed systems were central - I come from a distributed-systems background - and here is a problem of a piece of work taking a long time because of single-machine resource constraints. The obvious answer is: distribute the work, use a distributed system. And that's what Toolchain offers, essentially.

[47:30] You configure Pants to point to the Toolchain system, which is currently SaaS - and we will have some news soon about some on-prem solutions. Now the cache I mentioned is not just "did this test run with these exact inputs before, on my machine, by me, while I was iterating" but "has anyone in my organization, or any CI run, run this test before with these exact inputs?" Imagine a very common situation: you come in in the morning and you pull all the changes that have happened since you last pulled. Those changes presumably passed CI, right? And CI populated the cache. So now when I run tests, I can get cache hits from the CI machine.

[48:29] Pretty much, yeah. And then with concurrency - say that, post-cache, there are still 200 tests that need to be run. I could run them eight at a time on my machine, or the CI machine could run them, say, four at a time on four cores - or I could run 50 or 100 at a time on a cluster of machines. That's where, again, as your codebase gets bigger and bigger, some massive, massive speedups come in. I should mention that the remote execution I just described is something we're about to launch - it is not available today; the remote caching is. The other aspects are things like observability. When you run builds on your laptop or in CI, they're ephemeral: the output gets lost in the scrollback, a wall of text that disappears with it.

[49:39] With Toolchain, all of that information is captured and stored in structured form, so you have the ability to see past builds, see build behavior over time, search builds, and drill down into individual builds: how often does this test fail? When did this get slow? All that kind of information. So you get a more enterprise-level observability into a very core piece of developer productivity, which is iteration time. The time it takes to run tests, build deployables, and pass all the quality-control checks so that you can merge and deploy code directly relates to time-to-release; it directly relates to some of the core metrics of developer productivity - how long is it going to take to get this thing out the door? So having the ability both to speed that up dramatically by distributing the work and to have observability into what work is going on - that is what Toolchain provides,

[51:01] on top of the already - if I may say - pretty robust open source offering.

[51:07] Pants on its own gives you a lot of advantages, but it runs standalone. Plugging it into a larger distributed system really unleashes the full power of Pants as a client to that system.

[51:21] No, I think what I'm seeing is this interesting convergence. There are several companies trying to do this for Bazel, like BuildBuddy and Edgeflow.
So it really sounds like, in the build system of the future, say 10 years from now,

[51:36] no one will really be developing on their local machines anymore. There's GitHub Codespaces on one side, where you're doing all your development remotely.

[51:46] I've always found it somewhat odd that development happens locally, and whatever scripts you need to run to provision your CI machine to run the same set of tests are sometimes so different that you can never tell why something's passing locally and failing in CI, or vice versa. There really should just be one execution layer that can say: I'm going to build, or run, at a certain commit - and that's shared between the local user and the CI user, and your CI script is something as simple as pants build ::, and it builds the whole codebase for you. So yeah, I certainly feel like the industry is moving in that direction. I'm curious whether you think the same - do you have an even stronger vision of how folks will be developing 10 years from now? What do you think it's going to look like?

Oh no, I think you're absolutely right. I think, if anything, you're underselling it. I think this is how all development should be, and will be, in the future, for multiple reasons. One is performance.

[52:51] Two is the problem of different platforms. A big thorny problem today is: I'm developing on my MacBook, so when I run tests locally - when I run anything locally - it's running on my MacBook, but that's not our deployable, right? Typically your deploy platform is some flavor of Linux.

[53:17] With the distributed-systems approach, you can run the work in containers that exactly match your production environments. You don't even have to care about "will my tests pass on macOS, do I need CI that runs on macOS just to make sure developers can pass tests on macOS, and is that somehow correlated with success on the production environment?" You can cut away a whole suite of those problems. Today, frankly - as I mentioned earlier, you can get cache hits on your desktop from CI populating the cache - that is hampered by differences in platform, and by other differences in local setup, that we are working to mitigate.
But imagine a world in which build logic is not actually running on your MacBook - or if it is, it's running in a container that exactly matches the container you're targeting. It cuts away a whole suite of problems around platform differences and allows you to focus on just the platform you're actually going to deploy to.

[54:42] And then there's just the speed and the performance of being able to work and deploy, and the visibility it gives you into the productivity and the operational work of your development team. I really think this absolutely is the future. There is something very strange about how, in the last 15 years or so, so many business functions have had the distributed-systems treatment applied to them. There are these massive, valuable companies providing systems that support sales, systems that support marketing, systems that support HR, operations, product management - systems that support every business function - and there need to be more of these that support engineering as a business function.

[55:48] So I absolutely think the idea that I need a really powerful laptop, so that running tests can take thirty minutes instead of forty, when in reality it should take three - that's not the future. The future, as it has been for so many other systems, is the web. The laptop that I can take anywhere - particularly in these work-from-home, work-from-anywhere times - is just a portal into the system that is doing the actual work.

[56:27] Yeah. And there are all these improvements across the stack, right? I see companies like Vercel saying: if you use Next.js, we provide the best developer platform for that, and we want to provide caching. Then there are the lower-level systems - the build systems, of course, like Pants and Bazel - and at each layer we're trying to abstract the problem out. So to me it still feels like there is a lot of innovation to be done. And I'm really curious to know whether there are going to be a few winners of this space, or whether it's going to be pretty broken up, with everyone using different tools. It's going to be fascinating either way.

Yeah, that's really hard to know. One thing you mentioned that I think is really important: you said your CI should be as simple as just pants build colon-colon - or, in our syntax, pants test lint, or whatever. I think that's really important.

[57:30] Today, one of the big problems with CI - which is still a growing market, as more and more teams realize the value and importance of very aggressive automated quality control - is that configuring CI is really, really complicated.
Every CI provider has their own configuration language, and you have to reason about caching, and you have to manually construct cache keys, to the extent that caching is even possible or useful. There's just a lot of figuring out how to configure and set up CI - and even then it's just doing the naive thing.

[58:18] There are a couple of interesting companies, Dagger and Earthly, with interesting technologies around simplifying that. They are providing a better, more uniform config language that allows you, for example, to run build steps in containers, and that's not nothing at all.

[58:43] But you are still manually creating a lot of configuration to run these very coarse-grained, large-scale, long-running build steps. I think the future is something like: my entire CI config, post cloning the repo, is basically pants build colon-colon, because the system does the configuration for you.

[59:09] It figures out what that means in a very fast, very fine-grained way, and does not require you to manually decide on workflows and steps and jobs and how they all fit together. And if I want to speed the thing up today, I have to manually partition the work somehow and write extra config to implement that partitioning. So rather than having the CI layer, say - which would be the CI provider's proprietary config, or Dagger - and underneath that the build tool, which would be Bazel or Pants V2 or whatever you're using - could still be Make for many companies today, or Maven or Gradle - I really think the future is the integration of those two layers. In the same way that, as I referenced much, much earlier in our conversation, one thing that stood out to me at Google was that they had the insight to integrate the version control layer and the build tool to provide really effective functionality there, I think the build tool - being the thing that knows about your dependencies -

[1:00:29] can take over many of the jobs of the CI configuration layer in a really smart, really fast way. The future is one where more and more of "how do I set up and configure and run CI" is delegated to the thing that knows about your dependencies, knows about caching, knows about concurrency, and is able to make smarter decisions than you can in a YAML config file.

[1:01:02] Yeah, I'm excited for the time when I, as a platform engineer, have to spend less than 5% of my time thinking about CI and CD, and can focus on other things, like improving our data models, rather than mucking with the YAML and Terraform configs.

Well, yeah. Today we're still a little bit in that state, because we are engineers, and because the tools we use are themselves made out of software, there's a strong impulse to tinker: I want to solve this problem myself, I want to hack on it, I should be able to hack on it. And you should be able to hack on it, for sure. But we do deserve more tooling that requires less hacking - more things and paradigms that have been tested and have survived a lot of tire-kicking.

[1:02:00] Will we always need to hack on them a little bit? Yes, absolutely, because of the nature of what we do. I think there are a lot of interesting things still to happen in this space.

Yeah, I think we should end on that happy note as we go back to our day jobs mucking with YAML. Well, thanks so much for being a guest.
I think this was a great conversation, and I hope to have you on the show again sometime.

Would love that. Thanks for having me. It was fascinating.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev

Talk Python To Me - Python conversations for passionate developers
#389: 18 awesome asyncio packages in Python

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 9, 2022 57:28


If you're a fan of Python's async and await keywords and the powers they unlock, then this episode is for you. We have Timo Furrer here to share a whole bunch of asyncio-related Python packages. Timo runs the awesome-asyncio list, and he and I picked out some of our favorites to share with you.

Links from the show:
Timo on Twitter: @tuxtimo
awesome-asyncio list: github.com

Some of the highlighted packages:
FastAPI: github.com
starlette: github.com
sanic: github.com
uvicorn - The lightning-fast ASGI server: github.com
Tech Empower Python Framework benchmarks: techempower.com
aioamqp - AMQP implementation using asyncio: github.com
pyzmq - Python bindings for ZeroMQ: github.com
Scaling Python and Jupyter with ZeroMQ, Talk Python episode: talkpython.fm/306
asyncpg - Fast PostgreSQL Database Client: github.com
Piccolo - An ORM / query builder: github.com
aiosqlite: github.com
motor - The async Python driver for MongoDB: github.com
AsyncSSH: github.com
HTTPX: github.com
pytest-asyncio - Pytest support for asyncio: github.com
uvloop - Ultra fast implementation of asyncio event loop: github.com
aiocache - Cache manager for different backends: github.com
aiofiles - File support for asyncio: github.com
aiopath - Asynchronous pathlib for asyncio: github.com
Video: Demystifying Python's Async and Await Keywords - JetBrains TV 2020 (Michael Kennedy): youtube.com
tenacity: readthedocs.io
Michael's full 5 hour async course: talkpython.fm/async
Watch this episode on YouTube: youtube.com

--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy

Sponsors: Microsoft, Sentry Error Monitoring (Code TALKPYTHON), AssemblyAI, Talk Python Training
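For a taste of the async/await style these packages share, here is a minimal sketch using HTTPX from the list above (the URL is just a placeholder):

    import asyncio
    import httpx

    async def main() -> None:
        # One async client, reused across requests and closed automatically.
        async with httpx.AsyncClient() as client:
            response = await client.get("https://example.com")  # placeholder URL
            print(response.status_code)

    asyncio.run(main())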

Python Bytes
#291 Wait, you have how many licenses?!?

Python Bytes

Play Episode Listen Later Jul 6, 2022 32:27


Watch the live stream: Watch on YouTube

About the show: Sponsored by us! Support our work through our courses at Talk Python Training, the Test & Code Podcast, and our Patreon supporters.

Michael #1: Python License tracker, by Tom Nijhof
Every package depends on other packages, sometimes with different licenses. Tom made a tool to find out which licenses you need for a project: pytest alone needs 4 different licenses for itself and its dependencies. TensorFlow is even worse.

Brian #2: undataclass, by Trey Hunner
As a teaching aid, and to show how much dataclasses do for you, this is a module and an application that converts dataclasses to normal classes, filling in all of the dunder methods you need. Example in the app:

    from dataclasses import dataclass

    @dataclass()
    class Point:
        x: float
        y: float
        z: float

converts to:

    class Point:
        __match_args__ = ('x', 'y', 'z')

        def __init__(self, x: float, y: float, z: float) -> None:
            self.x = x
            self.y = y
            self.z = z

        def __repr__(self):
            cls = type(self).__name__
            return f'{cls}(x={self.x!r}, y={self.y!r}, z={self.z!r})'

        def __eq__(self, other):
            if not isinstance(other, Point):
                return NotImplemented
            return (self.x, self.y, self.z) == (other.x, other.y, other.z)

Note on NotImplemented: it just means "I don't know how to compare this", and Python will try __eq__ on the other object. If that also returns NotImplemented, a False is returned.

The app's default is the above, and with @dataclass(frozen=True, slots=True) it adds more methods. frozen=True gives you implementations of __hash__, __setattr__, __delattr__, __getstate__, and __setstate__ - essentially raising an exception if you try to change the contents, and making your objects hashable. slots=True adds the line __slots__ = ('x', 'y', 'z'), which disallows adding new attributes to objects at runtime. See the Python docs.

Trey wrote two posts about it: "Appreciating Python's match-case by parsing Python code" and "How I made a dataclass remover". It turns out this is a cool example for the ast module and structural pattern matching. Notes from the "how I made..." article: "I used some tricks I don't usually get to use in Python. I used: many very hairy match-case blocks, which replaced even hairier if-elif blocks; a sentinel object to keep track of a location that needed replacing; Python's textwrap.dedent utility, which I feel should be more widely known & used; slice assignment to inject one list into another; and the ast module's unparse function to convert an abstract syntax tree into Python code."

Michael #3: Qutebrowser (via Martin Borus)
"Qutebrowser is a keyboard-focused browser with a minimal GUI." It's Python-powered. What's more, it doesn't force you to use its Vim-based shortcuts - the mouse still works. But you usually don't need it: on any page, pressing the "f" key shows you every clickable thing with a letter combination to enter to click it.

Brian #4: asyncio and web applications - a collection of articles
"Quart is now a Pallets project", by P G Jones, maintainer of Quart and Hypercorn: "Quart, an ASGI re-implementation of the Flask API, has joined the Pallets organization. This means that future development will be under the Pallets governance by the Pallets maintainers. Our long term aim is to merge Quart and Flask to bring ASGI support directly to Flask."

"When to use Quart?" "Quart is an ASGI framework utilising async IO throughout, whereas Flask is a WSGI framework utilising sync IO. It is therefore best to use Quart if you intend to use async IO (i.e. async/await libraries) and Flask if not.
Don't worry if you choose the 'wrong' framework though, as Quart supports sync IO and Flask supports async IO, although less efficiently."

"Using async and await", from the Flask docs: Flask has had some support for async/await since Flask 2.0, but it's still a WSGI application. "Deciding whether you should use Flask, Quart, or something else is ultimately up to understanding the specific needs of your project."

"Should You Use AsyncIO for Your Next Python Web Application?", by Steven Pate: a cool "brief history of Python web server interfaces" and a discussion of the Python servers and frameworks for both WSGI and ASGI. Recommendation: do you need async? "... most people don't. WSGI servers and frameworks are usually performant enough."

Extras - Michael: Python Web Conf talk "HTMX + Flask: Modern Python Web Apps, Hold the JavaScript"; browserosaurus. Joke: Understanding JavaScript. Joke: Where do you see yourself in 5 years?
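For a sense of what the Flask 2.0 async support mentioned above looks like, here is a minimal sketch (it assumes Flask installed with the async extra, i.e. pip install "flask[async]"; the route is illustrative):

    from flask import Flask

    app = Flask(__name__)

    # An async view: Flask 2.x awaits it per request, but the app
    # as a whole is still a WSGI application under the hood.
    @app.get("/hello")
    async def hello():
        return {"message": "hello from an async view"}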

Software Engineering Radio - The Podcast for Professional Software Developers
Episode 516: Brian Okken on Testing in Python with pytest

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Jun 16, 2022 50:39


In this episode, we explore the popular pytest Python testing tool with Brian Okken, author of Python Testing with pytest. We start by discussing why pytest is so popular in the Python community: its focus on simplicity, readability, and developer ease-of-use. We then cover what makes pytest unique; the setup and teardown of tests using fixtures; parameterization; the plugin ecosystem; mocking; why we should design for testing, and how to reduce the need for mocking; how to set up a project for testability; and test-driven development, including designing your tests so that they support refactoring. Finally, we consider some complementary tools that can improve the Python testing experience.
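As a small taste of the fixtures and parameterization discussed in the episode, a sketch like this (the tested behavior is invented for illustration):

    import pytest

    @pytest.fixture
    def inventory():
        # Fixtures handle setup; pytest injects the return value by argument name.
        return {"apples": 3, "pears": 0}

    @pytest.mark.parametrize(
        "item, expected",
        [("apples", True), ("pears", False)],
    )
    def test_in_stock(inventory, item, expected):
        assert (inventory[item] > 0) == expected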

testing python mocking ieee computer society pytest brian okken python testing
Network Automation Nerds Podcast
#016: Network Automation Code Testing with Adam Byczkowski

Network Automation Nerds Podcast

Play Episode Play 45 sec Highlight Listen Later Apr 6, 2022 72:53


Today on the show, we will be talking to Adam Byczkowski from Network to Code about code testing in network automation. This is an important topic that is often overlooked in network automation, especially if we are just getting our feet wet in the space. I was very happy to see Adam wrote about code testing in the network automation space via a 3-part blog series. I invited Adam on the show to talk about his journey, code testing, and testing in the network automation world. I know we will learn a lot from Adam. Let's dive right in!

--- Show Notes Links ---
Connect with Adam on LinkedIn: https://www.linkedin.com/in/adam-byczkowski-525568105/
Pytest in the Networking World (Parts 1 - 3):
https://blog.networktocode.com/post/pytest-in-the-networking-world/
https://blog.networktocode.com/post/pytest-in-the-netwoking-world-part-2/
https://blog.networktocode.com/post/pytest-in-the-netwoking-world-part-3/
NTC NetUtils (easy to understand tests): https://blog.networktocode.com/post/introducing-netutils/
Read Network to Code blogs: https://blog.networktocode.com/
Closing the Loop on Testing Network Changes: https://elegantnetwork.github.io/posts/closing-the-loop-testing/
Python Testing with pytest by Brian Okken: https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/
Coverage.py: https://coverage.readthedocs.io/en/6.1.2/
pytest plugins: https://docs.pytest.org/en/latest/how-to/plugins.html
Other Resources: Pythontesting.net, Test and Code Podcast: Testandcode.com

--- Stay in Touch with Us ---
Subscribe on YouTube: https://www.youtube.com/c/EricChouNetworkAutomationNerds
Follow Eric on Twitter: https://twitter.com/ericchou

Python Podcast
FastAPI

Python Podcast

Play Episode Listen Later Feb 14, 2022 87:43


Dominik and Jochen talk about FastAPI. FastAPI is a still very young, yet already fairly widespread, web framework for Python, designed to make better use of Python's more modern language features, such as type annotations and async capability, than more traditional web frameworks like Django or Flask.

Shownotes
Our email for questions, suggestions & comments: hallo@python-podcast.de

News from the scene:
PEP 665 -- A file format to list Python dependencies for reproducibility of an application | Brett Cannon
CPython on WASM
At long last, Black is no longer a beta product! | Stability Policy
Django is now also formatted with black, as announced in DEP 8
pytest 7.0 release
HATEOAS — An Alternative Explanation
The future of editing in Wagtail
Prototype Fund
EdgeDB 1.0 Release | asyncpg -- A fast PostgreSQL Database Client Library for Python/asyncio | uvloop is a fast, drop-in replacement of the built-in asyncio event loop. uvloop is implemented in Cython and uses libuv under the hood.
Twitter: My dental hygienist: "Are you flossing regularly?" Me: "Do you backup your laptop and photos regularly?"
Laravel Livewire with Christoph Rumpel | Alpine.Js | Caleb Porzio

Advertising: exclusive deal + a gift
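For reference, a minimal sketch of the FastAPI style described above, where type annotations drive parsing and validation and handlers can be async (the route and fields are illustrative):

```python
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
async def read_item(item_id: int, q: Optional[str] = None):
    # item_id is parsed and validated as an int from the path;
    # q is an optional query parameter -- all driven by the annotations.
    return {"item_id": item_id, "q": q}
```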

Test & Code - Python Testing & Development
174: pseudo-TDD - Paul Ganssle

Test & Code - Python Testing & Development

Play Episode Listen Later Dec 22, 2021 39:24


In this episode, I talk with Paul Ganssle about a fun workflow that he calls pseudo-TDD. Pseudo-TDD is a way to keep your commit history clean and your tests passing with each commit. This workflow includes using pytest xfail and some semi-advanced version control features. Some strict forms of TDD include something like this:
- write a failing test that demonstrates a lacking feature or defect
- write the source code to get the test to pass
- refactor if necessary
- repeat

In reality, at least for me, the software development process is way more messy than this, and not so smooth and linear. Paul's workflow allows you to develop non-linearly, but commit cleanly. Special Guest: Paul Ganssle.
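For a concrete flavor of the idea (my sketch, not Paul's exact code; the feature function is a stub):

```python
import pytest

def upcoming_feature():
    raise NotImplementedError  # stub until the real implementation lands

# Commit a not-yet-passing test marked xfail(strict=True): the suite stays
# green now, and once the feature lands the test XPASSes and fails loudly,
# reminding you to remove the marker.
@pytest.mark.xfail(reason="feature not implemented yet", strict=True)
def test_upcoming_feature():
    assert upcoming_feature() == 42
```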

Test & Code - Python Testing & Development

In the preface of "Python Testing with pytest" I list some reasons to use pytest, under a section called "why pytest?". Someone asked me recently a different but related question: "why NOT unittest?". unittest is an xUnit style framework. For me, xUnit style frameworks are fatally flawed for software testing. That's what this episode is about, my opinion of "Why NOT unittest?", or more broadly, "What are the fatal flaws of xUnit?"

python pytest python testing
Test & Code - Python Testing & Development
171: How and why I use pytest's xfail - Paul Ganssle

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 22, 2021 38:26


Paul Ganssle, a software developer at Google, core Python dev, and open source maintainer for many projects, has some thoughts about pytest's xfail. He was an early skeptic of using xfail, and is now a proponent of the feature. In this episode, we talk about some open source workflows that are possible because of xfail. Special Guest: Paul Ganssle.

Test & Code - Python Testing & Development
170: pytest for Data Science and Machine Learning - Prayson Daniel

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 18, 2021 45:12


Prayson Daniel, a principal data scientist, discusses testing machine learning pipelines with pytest. Prayson is using pytest for some pretty cool stuff, including:
- unit tests, of course
- testing pipeline stages
- counterfactual testing
- performance testing

All with pytest. So cool. Special Guest: Prayson Daniel.

Test & Code - Python Testing & Development
166: unittest expectedFailure and xfail

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 14, 2021 6:23


xfail isn't just for pytest tests. Python's unittest has @unittest.expectedFailure. In this episode, we cover:
- using @unittest.expectedFailure
- the results of passing and failing tests with expectedFailure
- using pytest as a test runner for unittest
- using pytest markers on unittest tests

Docs for expectedFailure: https://docs.python.org/3/library/unittest.html#skipping-tests-and-expected-failures

Some sample code. unittest only:

```python
import unittest

class ExpectedFailureTestCase(unittest.TestCase):
    @unittest.expectedFailure
    def test_fail(self):
        self.assertEqual(1, 0, "broken")

    @unittest.expectedFailure
    def test_pass(self):
        self.assertEqual(1, 1, "not broken")
```

unittest with pytest markers:

```python
import unittest
import pytest

class ExpectedFailureTestCase(unittest.TestCase):
    @pytest.mark.xfail
    def test_fail(self):
        self.assertEqual(1, 0, "broken")

    @pytest.mark.xfail
    def test_pass(self):
        self.assertEqual(1, 1, "not broken")
```

python pytest
Test & Code - Python Testing & Development
165: pytest xfail policy and workflow

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 7, 2021 9:44


A discussion of how to use the xfail feature of pytest to help with communication on software projects. The episode covers:
- What is xfail
- Why I use it
- Using reason effectively by including issue tracking numbers
- Using xfail_strict
- Adding --runxfail when transitioning from development to feature freeze
- What to do about test failures
- How all of this might help with team communication
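A minimal sketch of wiring the xfail_strict piece into project configuration (the reasons with issue numbers go on the markers themselves):

```ini
# pytest.ini -- make every xfail strict project-wide: tests that
# unexpectedly pass (XPASS) are reported as failures.
[pytest]
xfail_strict = true
```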

Test & Code - Python Testing & Development
164: Debugging Python Test Failures with pytest

Test & Code - Python Testing & Development

Play Episode Listen Later Sep 14, 2021 13:17


An overview of the pytest flags that help with debugging, from Chapter 13, Debugging Test Failures, of Python Testing with pytest, 2nd edition (https://pythontest.com/pytest-book/). pytest includes quite a few command-line flags that are useful for debugging. We talk about these flags in this episode.

Flags for selecting which tests to run, in which order, and when to stop:
- -lf / --last-failed: Runs just the tests that failed last.
- -ff / --failed-first: Runs all the tests, starting with the last failed.
- -x / --exitfirst: Stops the test session after the first failure.
- --maxfail=num: Stops the tests after num failures.
- -nf / --new-first: Runs all the tests, ordered by file modification time.
- --sw / --stepwise: Stops the tests at the first failure. Starts the tests at the last failure next time.
- --sw-skip / --stepwise-skip: Same as --sw, but skips the first failure.

Flags to control pytest output:
- -v / --verbose: Displays all the test names, passing or failing.
- --tb=[auto/long/short/line/native/no]: Controls the traceback style.
- -l / --showlocals: Displays local variables alongside the stacktrace.

Flags to start a command-line debugger:
- --pdb: Starts an interactive debugging session at the point of failure.
- --trace: Starts the pdb source-code debugger immediately when running each test.
- --pdbcls: Uses alternatives to pdb, such as IPython's debugger with --pdbcls=IPython.terminal.debugger:TerminalPdb.

This list is also found in Chapter 13 of Python Testing with pytest, 2nd edition (https://pythontest.com/pytest-book/). The chapter is "Debugging Test Failures" and covers way more than just debug flags, while walking through debugging 2 test failures.
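A quick way to try a few of these without leaving Python, a minimal sketch using pytest's programmatic entry point (the flags are the same ones listed above):

```python
# Rerun only the last failures, stop at the first one, shorten tracebacks.
import pytest

exit_code = pytest.main(["--lf", "-x", "--tb=short"])
print(f"pytest finished with exit code {exit_code}")
```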

Test & Code - Python Testing & Development
154: Don't Mock your Database - Jeff Triplett

Test & Code - Python Testing & Development

Play Episode Listen Later May 21, 2021 31:39


You need tests for your web app. And it has a database. What do you do with the database during testing? Should you use the real thing? or mock it? Jeff Triplett says don't mock it. In this episode, we talk with Jeff about testing web applications, specifically Django apps, and of course talk about the downsides of database mocking. Special Guest: Jeff Triplett.
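In that spirit, a minimal sketch of what hitting the real (test) database looks like with pytest-django (the Widget model and myapp package are hypothetical, for illustration only):

```python
import pytest
from myapp.models import Widget  # hypothetical app and model

# pytest-django creates a test database and wraps each marked test in a
# transaction that is rolled back afterwards -- no mocking involved.
@pytest.mark.django_db
def test_widget_round_trip():
    Widget.objects.create(name="gizmo")
    assert Widget.objects.filter(name="gizmo").count() == 1
```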

Network Automation Hangout
005 — pyATS, network testing with pytest, software upgrades, Python web frameworks, work-life balance

Network Automation Hangout

Play Episode Listen Later May 17, 2021 62:14


Guest: Clay Curtis (@ccurtis584). Topics:
- pyATS
- Network testing with pytest
- Programmatic software upgrades (in the podcast there was a reference to nts/ntc upgrade; it is actually pyntc - https://github.com/networktocode/pyntc)
- New major releases of Flask and Jinja
- Python web frameworks + recommended resources for every popular framework
- Work-life balance

Recorded live on 2021-05-13. Weekly recordings with the community on Thursdays at 6 PM CET / 12 PM ET / 9 AM PT on dogehouse.tv

Test & Code - Python Testing & Development
148: Coverage.py and testing packages

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 12, 2021 14:07


How do you test installed packages using coverage.py? Also, a couple of follow-ups from last week's episode on using coverage for single file applications.
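One coverage.py mechanism relevant here is [paths] remapping, which lets coverage treat an installed copy of a package and its source checkout as the same code. A minimal sketch (the package name and layout are illustrative):

```ini
# .coveragerc -- measure "mypkg" wherever it is imported from, and map
# the installed site-packages copy back to the source tree for reporting.
[run]
source = mypkg

[paths]
source =
    src/mypkg
    */site-packages/mypkg
```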

Test & Code - Python Testing & Development
147: Testing Single File Python Applications/Scripts with pytest and coverage

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 6, 2021 11:24


Have you ever written a single file Python application or script? Have you written tests for it? Do you check code coverage? This is the topic of this week's episode, spurred on by a listener question. The questions:
- For single file scripts, I'd like to have the test code included right there in the file. Can I do that with pytest?
- If I can, can I use code coverage on it?

The example code discussed in the episode, script.py:

```python
def foo():
    return 5

def main():
    x = foo()
    print(x)

# test code
def test_foo():
    assert foo() == 5

def test_main(capsys):
    main()
    captured = capsys.readouterr()
    assert captured.out == "5\n"

if __name__ == '__main__':  # pragma: no cover
    main()
```

To test: pip install pytest, then pytest script.py.
To test with coverage: put this file (script.py) in a directory by itself, say foo, then from the parent directory of foo: pip install pytest-cov, then pytest --cov=foo foo/script.py.
To show missing lines: pytest --cov=foo --cov-report=term-missing foo/script.py

Test & Code - Python Testing & Development
143: pytest markers - Anthony Sottile

Test & Code - Python Testing & Development

Play Episode Listen Later Feb 7, 2021 40:00


Completely nerding out about pytest markers with Anthony Sottile. Some of what we talk about: Running a subset of tests with markers. Using marker expressions with and, or, not, and parentheses. Keyword expressions also can use and, or, not, and parentheses. Markers and pytest functionality that use mark, such as parametrize, skipif, etc. Accessing markers with itermarkers and get_closest_marker through item. Passing values, metadata through markers to fixtures or hook functions. Special Guest: Anthony Sottile.
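For flavor, a minimal sketch of the kind of marker usage discussed here (the slow/smoke marker names are just examples; register custom markers in your config to avoid warnings):

```python
import pytest

@pytest.mark.slow
def test_nightly_batch():
    assert sum(range(1000)) == 499500

@pytest.mark.smoke
def test_quick_check():
    assert 1 + 1 == 2

# Select subsets with marker expressions, e.g.:
#   pytest -m "smoke"
#   pytest -m "not slow"
#   pytest -m "smoke or slow"
```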

Teaching Python
Episode 59: Crossover with PyBites!

Teaching Python

Play Episode Play 63 sec Highlight Listen Later Jan 22, 2021 45:19


Kelly and Sean team up with Bob Belderbos and Julian Sequeira from @PyBites to answer questions about how our students learn Python using the PyBites platform with small code challenges. In this special crossover episode, we cover everything from how students learn, to how they read pytest reporting output, to the mindset and chemistry of learning something new. Special Guests: Bob Belderbos and Julian Sequeira.

learning teaching crossover programming computer science python pytest pybites bob belderbos julian sequeira
Test & Code - Python Testing & Development
130: virtualenv activation prompt consistency across shells - an open source dev and test adventure - Brian Skinn

Test & Code - Python Testing & Development

Play Episode Listen Later Sep 13, 2020 36:18


virtualenv supports six shells: bash, csh, fish, xonsh, cmd, posh. Each handles prompts slightly differently. Although the virtualenv custom prompt behavior should be the same across shells, Brian Skinn noticed inconsistencies. He set out to fix those inconsistencies. That was the start of an adventure in open source collaboration, shell prompt internals, difficult test problems, and continuous integration quirks. Brian Skinn initially noticed that on Windows cmd, a space was added between a prefix defined by --prompt and the rest of the prompt, whereas on bash no space was added. For reference, there were/are three nominal virtualenv prompt modification behaviors, all of which apply to the prompt changes that are made at the time of virtualenv activation:
1. If the environment variable VIRTUAL_ENV_DISABLE_PROMPT is defined and non-empty at activation time, do not modify the prompt at all. Otherwise:
2. If the --prompt argument was supplied at creation time, use that argument as the prefix to apply to the prompt; or,
3. If the --prompt argument was not supplied at creation time, use the default prefix of "({{ envname }}) " (the environment folder name surrounded by parentheses, with a trailing space after the last paren).

Special Guest: Brian Skinn.

Mid Meet Py
Mid Meet Py - Ep.20 - Interview with Bojan Miletic

Mid Meet Py

Play Episode Listen Later Aug 28, 2020 55:48


PyChat:
- PyCon Italy cancelled
- Pyjamas CfP workshop August 27th & 28th
- Ticket sale to PyData Global starts tomorrow (27th Aug)
- Data Science and Business Analytics with Python course - Jesper Dramsch
- CircuitPython day - Sept 9th - Adafruit newsletter
- Nadia's new book on Open Source - Guido's tweet

Mid Meet - Hall of Fame:
Bojan Miletic - Python developer
Follow Bojan on Twitter
Virtual Coffee Website

PyPI highlights:
Pytest-docker-compose

Test & Code - Python Testing & Development
128: pytest-randomly - Adam Johnson

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 28, 2020 18:12


Software tests should be order independent. That means you should be able to run them in any order or run them in isolation and get the same result. However, system state often gets in the way and order dependence can creep into a test suite. One way to fight against order dependence is to randomize test order, and with pytest, we recommend the plugin pytest-randomly to do that for you. The developer that started pytest-randomly and continues to support it is Adam Johnson, who joins us today to discuss pytest-randomly and another plugin he also wrote, called pytest-reverse. Special Guest: Adam Johnson.
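To see the kind of problem random ordering surfaces, here is a minimal sketch of an order-dependent pair of tests:

```python
# test_b passes only if test_a ran first and mutated the shared state.
# In file order the suite is green; once pytest-randomly shuffles the
# order, the hidden dependency shows up as a failure.
STATE = {"ready": False}

def test_a_prepares_state():
    STATE["ready"] = True
    assert STATE["ready"]

def test_b_depends_on_a():
    assert STATE["ready"]
```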

Python Podcast
Tests

Python Podcast

Play Episode Listen Later Aug 20, 2020 78:39


This time we're doing a test episode about tests :). We're out recording on location for the first time, because it simply got too hot at home. Today it's Ronny, Dominik, and Jochen, and we talk about testing in Python. It is perhaps a bit Django-heavy, but many of the points should carry over to other projects as well.

Shownotes
Our email for questions, suggestions & comments: hallo@python-podcast.de

Who and where:
Ambient Innovation
PyCologne Meetup
Django Meetup Köln
Restaurant Spoerl Fabrik
Zoom H6
HMC 660X Headset
HA3D headphone amplifier

News from the scene:
Django 3.1 Release Notes
Django 3.1 Async
Python 3.9 Release Candidate
Book on Django: Two Scoops of Django 3.x

Tests:
pytest Pythonic testing framework
unittest built-in testing framework
Finding slow tests: django-slowtests
Coverage for branch coverage etc.
xdist pytest plugin for distributed test execution
Book by Adam Johnson: Speed Up Your Django Tests | His blog
Pareto distribution
kcachegrind profiler
Faster filesystem for tests: dj-inmemorystorage
django q for asynchronous tasks
DjangoCon 2019 talk: Maintaining a Django codebase after 10k commits
freezegun time mocking
unittest.mock from the standard library
cypress end-to-end tests for JavaScript
jest unit tests for JavaScript

Public tag on konektom

Test & Code - Python Testing & Development
126: Data Science and Software Engineering Practices ( and Fizz Buzz ) - Joel Grus

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 17, 2020 32:17


Researchers and others using data science and software need to follow solid software engineering practices. This is a message that Joel Grus has been promoting for some time. Joel joins the show this week to talk about data science, software engineering, and even Fizz Buzz. Topics include:
- Software engineering practices and data science
- Difficulties with Jupyter notebooks
- Code reviews on experiment code
- Unit tests on experiment code
- Finding bugs before doing experiments
- Tests for data pipelines
- Tests for deep learning models
- Showing researchers the value of tests by showing the bugs found that wouldn't have been found without them
- "Data Science from Scratch" book
- Showing testing during teaching data science
- "Ten Essays on Fizz Buzz" book: meditations on Python, mathematics, science, engineering, and design
- Testing Fizz Buzz
- Different algorithms and solutions to an age-old interview question
- If not Fizz Buzz, what makes a decent coding interview question
- pytest
- hypothesis
- Math requirements for data science

Special Guest: Joel Grus.

Test & Code - Python Testing & Development
125: pytest 6 - Anthony Sottile

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 7, 2020 60:04


pytest 6 is out. Specifically, 6.0.1, as of July 31. And there's lots to be excited about. Anthony Sottile joins the show to discuss features, improvements, documentation updates and more. Full release notes / changelog (https://docs.pytest.org/en/stable/changelog.html)

Some of what we talk about:

How to update (at least, how I do it):
- Run your test suites with 5.4.3 or whatever the last version you were using.
- Update to 6.
- Run again. Same output? Probably good.
- If there are any warnings, maybe fix those. You can also run with pytest -W error to turn warnings into errors.
- Then find out all the cool stuff you can do now.

New Features:
- pytest now supports pyproject.toml files for configuration. But remember, TOML syntax is different from ini files; mostly, quotes are needed.
- pytest now includes inline type annotations and exposes them to user programs. Most of the user-facing API is covered, as well as internal code.
- New command-line flags --no-header and --no-summary.
- A warning is now shown when an unknown key is read from a config INI file. The --strict-config flag has been added to treat these warnings as errors.
- New required_plugins configuration option allows the user to specify a list of plugins, including version information, that are required for pytest to run. An error is raised if any required plugins are not found when running pytest.

Improvements:
- You can now pass output to things like less and head that close the pipe passed to them. Thank you!!!
- Improved precision of test duration measurement. Use --durations=10 -vv to capture and show durations.
- Rich comparison for dataclasses and attrs-classes is now recursive.
- pytest --version now displays just the pytest version, while pytest --version --version displays more verbose information including plugins.
- --junitxml now includes the exception cause in the message XML attribute for failures during setup and teardown.

Improved Documentation:
- Add a note about --strict and --strict-markers and the preference for the latter one.
- Explain indirect parametrization and markers for fixtures.

Also: Bug Fixes, Deprecations, Trivial/Internal Changes.

Breaking Changes you might need to care about before upgrading:
- PytestDeprecationWarning are now errors by default. Check the deprecations and removals (https://docs.pytest.org/en/latest/deprecations.html) page if you are curious.
- -k and -m internals were rewritten to stop using eval(); this results in a few slight changes but overall makes them much more consistent.
- testdir.run().parseoutcomes() now always returns the parsed nouns in plural form. I'd say that's an improvement.

Special Guest: Anthony Sottile.
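Since the pyproject.toml support is new in 6.0, here is a minimal sketch of what that configuration looks like (the option values are illustrative):

```toml
# pyproject.toml -- pytest 6+ reads its config from [tool.pytest.ini_options].
# Note the TOML syntax: strings and lists need quotes, unlike ini files.
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra --strict-markers"
testpaths = ["tests"]
required_plugins = ["pytest-cov>=2.10"]
```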

Mid Meet Py
Mid Meet Py - Ep.17 - Interview with Martin Borus

Mid Meet Py

Play Episode Listen Later Jul 30, 2020 61:48


PyChat:
- We need you! Data Science workshop at PyCon Africa needs mentors
- PyData Global CfP deadline is 2nd August!!!
- pytest 6 is out now
- Hacktoberfest website is already out?!
- PyBerlin tonight
- PyData Cambridge - 20th Meetup happening tonight!

Mid Meet - Hall of Fame:
Martin Borus - Pythonista, volunteer of EuroPython
Follow Martin on Twitter

PyPI highlights:
PhotoCollage (made this)
Authlib

Test & Code - Python Testing & Development
117: Python extension for VS Code - Brett Cannon

Test & Code - Python Testing & Development

Play Episode Listen Later Jun 18, 2020 51:17


The Python extension for VS Code is the most downloaded extension for VS Code. Brett Cannon is the manager for the distributed development team of the Python extension for VS Code. In this episode, Brett and I discuss the Python extension and VS Code, including:
- pytest support
- virtual environment support
- how settings work, including user and workspace settings
- multi root projects
- testing Python in VS Code
- debugging and pydevd
- jump to cursor feature
- upcoming features

Special Guest: Brett Cannon.
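As a concrete illustration of the pytest support, enabling pytest in the extension comes down to a couple of workspace settings (setting names as in current extension docs; they have changed across versions):

```json
// .vscode/settings.json -- minimal sketch for turning on pytest discovery
{
    "python.testing.pytestEnabled": true,
    "python.testing.unittestEnabled": false,
    "python.testing.pytestArgs": ["tests"]
}
```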

Test & Code - Python Testing & Development
116: 15 amazing pytest plugins - Michael Kennedy

Test & Code - Python Testing & Development

Play Episode Listen Later Jun 8, 2020 51:27


pytest plugins are an amazing way to supercharge your test suites, leveraging great solutions from people solving test problems all over the world. In this episode Michael and I discuss 15 favorite plugins that you should know about. We also discuss fixtures and plugins and other testing tools that work great with pytest:
- tox
- GitHub Actions
- Coverage.py
- Selenium + splinter with pytest-splinter
- Hypothesis

And then our list of pytest plugins:
1. pytest-sugar
2. pytest-cov
3. pytest-stress
4. pytest-repeat
5. pytest-instafail
6. pytest-metadata
7. pytest-randomly
8. pytest-xdist
9. pytest-flake8
10. pytest-timeout
11. pytest-spec
12. pytest-picked
13. pytest-freezegun
14. pytest-check
15. fluentcheck

That last one isn't a plugin, but we also talked about pytest-splinter at the beginning. So I think it still counts as 15. Special Guest: Michael Kennedy.

Talk Python To Me - Python conversations for passionate developers

Do you write tests for your code? You probably should. And most of the time, pytest is the industry standard these days. But pytest can be much more than what you get from just installing it as a tool. There are many amazing plugins that improve pytest in many aspects. That's why I invited Brian Okken to the show to tell us about his favorites. Listen in and your Python testing will be faster, stronger, and more beautiful!

Links from the show
Brian Okken: @brianokken
Brian's pytest book: amazon.com
Test & Code podcast: testandcode.com
Test & Code 104: Top 28 pytest plugins: testandcode.com/104

The list of plugins
pytest-sugar: github.com/Teemu/pytest-sugar
pytest-cov: pypi.org/project/pytest-cov
pytest-stress: github.com/pytest-dev/pytest-stress
pytest-repeat: github.com/pytest-dev/pytest-repeat
pytest-instafail: pypi.org/project/pytest-instafail
pytest-metadata: github.com/pytest-dev/pytest-metadata
pytest-randomly: github.com/pytest-dev/pytest-randomly
pytest-xdist: pypi.org/project/pytest-xdist
pytest-flake8: github.com/tholo/pytest-flake8
pytest-timeout: pypi.org/project/pytest-timeout
pytest-spec: pypi.org/project/pytest-spec
pytest-picked: github.com/anapaulagomes/pytest-picked
pytest-freezegun: github.com/ktosiek/pytest-freezegun
pytest-check: github.com/okken/pytest-check
fluentcheck: github.com/csparpa/fluentcheck

Sponsors
Linode
Sentry Error Monitoring, Code TALKPYTHON
Talk Python Training

Test & Code - Python Testing & Development
111: Subtests in Python with unittest and pytest - Paul Ganssle

Test & Code - Python Testing & Development

Play Episode Listen Later May 2, 2020 48:34


In both unittest and pytest, when a test function hits a failing assert, the test stops and is marked as a failed test. What if you want to keep going, and check more things? There are a few ways. One of them is subtests. Python's unittest introduced subtests in Python 3.4. pytest introduced support for subtests with changes in pytest 4.4 and a plugin, called pytest-subtests. Subtests are still not really used that much. But really, what are they? When could you use them? And more importantly, what should you watch out for if you decide to use them? That's what Paul Ganssle and I will be talking about today. Special Guest: Paul Ganssle.
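A minimal sketch of the unittest flavor (Python 3.4+), where each failing iteration is reported separately instead of aborting the test:

```python
import unittest

class EvenNumbersTest(unittest.TestCase):
    def test_evens(self):
        for i in range(4):
            with self.subTest(i=i):
                # i=1 and i=3 fail and are reported individually;
                # the loop still runs to completion.
                self.assertEqual(i % 2, 0)

if __name__ == "__main__":
    unittest.main()
```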

Test & Code - Python Testing & Development
110: Testing Django - from unittest to pytest - Adam Parkin

Test & Code - Python Testing & Development

Play Episode Listen Later Apr 25, 2020 24:56


Django supports testing out of the box with some cool extensions to unittest. However, many people are using pytest for their Django testing, mostly using the pytest-django plugin. Adam Parkin, who is known online as CodependentCodr (https://twitter.com/codependentcodr), joins us to talk about migrating an existing Django project from unittest to pytest. Adam tells us just how easy this is. Special Guest: Adam Parkin.

Python Podcast
Javascript Frontends

Python Podcast

Play Episode Listen Later Apr 23, 2020 105:23


Since we have started, for various reasons, to dig a bit into JavaScript frontends as well, today we talk about this topic in general terms, and about how to talk from there to backends, which are usually implemented in Python.

Shownotes
Our email for questions, suggestions & comments: hallo@python-podcast.de

Lost & Found:
PyData Deep Dive meta-podcast

Audio hardware/software:
Headsets from Beyerdynamic: DT 297, DT 797
Superlux HMC 660 X and how to use it
Connecting the HMC 660 X via a jack plug
Audio interface with native 12 V phantom power: Zoom H6
Ultraschall
REAPER
Studio Link / beta
Zencastr

Video conferencing software:
Zoom
Microsoft Teams
Self-hosting possible: Jitsi, BigBlueButton
Pythoncamp
Google Meet
Whereby
FaceTime

News from the scene:
A Language Creators' Conversation: Guido van Rossum, James Gosling, Larry Wall & Anders Hejlsberg
Django 1.11 EOL
Pytest troubles
Pyenv windows

JavaScript frontends:
Possibly the place to organize a study group: Vue-JS-Cologne
vue | react | angular | jQuery
History API
REST / GraphQL
Relay / Apollo / axios
ASGI
Single page application
redux
DRF serializer
Monorepo
Jacob Kaplan-Moss - Assets in Django without losing your hair - PyCon 2019
WhiteNoise
django-storages
webpack
Parcel
FastAPI / Starlette

Public tag on konektom

Tech Writer koduje
#14 Tech Writer zaczyna kodować w Pythonie, czyli o narzędziach i dobrych praktykach

Tech Writer koduje

Play Episode Listen Later Mar 24, 2020 50:07


We talk with Sebastian Witowski about how to set up an environment for coding in Python and what mistakes to avoid when starting your adventure with this programming language. A solid dose of knowledge for beginning Pythonistas. But if you have been coding in Python for a while and want to make sure you are following good practices, this episode is for you too.

Additional information:
Python: https://www.python.org/
Intellij IDEA: https://www.jetbrains.com/idea/
PyCharm: https://www.jetbrains.com/pycharm/
Visual Studio Code (VS Code): https://code.visualstudio.com/
Vim: https://www.vim.org/
pyenv: https://github.com/pyenv/pyenv
Python venv: https://docs.python.org/3/library/venv.html
Python virtualenv: https://virtualenv.pypa.io/en/stable/
Conda: https://docs.conda.io/en/latest/
Node modules: https://www.w3schools.com/nodejs/nodejs_modules.asp
Pipenv: https://pipenv.readthedocs.io/en/latest/
Poetry: https://python-poetry.org/
Python Requests: https://2.python-requests.org/en/master/
Django: https://www.djangoproject.com/
Flask: https://flask.palletsprojects.com/en/1.1.x/
EuroPython 2019: https://ep2019.europython.eu/
Cookiecutter: https://cookiecutter.readthedocs.io/en/1.7.0/
Pipx: https://github.com/pipxproject/pipx
Black: https://github.com/psf/black
npm: https://www.npmjs.com/
npx: https://www.npmjs.com/package/npx
"The Hitchhiker's Guide to Python!", Kenneth Reitz, Tanya Schlusser: https://docs.python-guide.org/
Sphinx: http://www.sphinx-doc.org/en/master/
Write the Docs: https://www.writethedocs.org/
Pytest: https://docs.pytest.org/en/latest/
Python unittest: https://docs.python.org/3.8/library/unittest.html
Test Driven Development (TDD): https://www.agilealliance.org/glossary/tdd/
Git: https://git-scm.com/
"Modern Python Developer's Toolkit" workshop: https://www.meetup.com/Pykonik/events/268809734/
Pykonik, Kraków Python User Group: https://www.meetup.com/Pykonik/
Sebastian's LinkedIn profile: https://www.linkedin.com/in/switowski/
Sebastian's Twitter profile: https://twitter.com/SebaWitowski
Sebastian's website: https://switowski.com/

Test & Code - Python Testing & Development
106: Visual Testing : How IDEs can make software testing easier - Paul Everitt

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 20, 2020 49:58


IDEs can help people with automated testing. In this episode, Paul Everitt and Brian discuss ways IDEs can encourage testing and make it easier for everyone, including beginners. We discuss features that exist and are great, as well as what is missing. The conversation also includes topics around being welcoming to new contributors for both open source and professional projects. We talk about a lot of topics, and it's a lot of fun. But it's also important, because IDEs can make testing easier and more accessible. Some topics discussed:
- Making testing more accessible
- Test First vs teaching testing last
- TDD workflow
- Autorun
- Rerunning last failures
- Different ways to run different levels of tests
- Command line flags and how to access them in IDEs
- pytest.ini
- zooming in and out of test levels
- running parametrizations
- running tests with coverage and profiling
- parametrize vs parameterize
- parametrization identifiers
- pytest fixture support
- global configurations / configuration templates
- coverage and testing and being inviting to new contributors
- confidence in changes and confidence in contributions
- navigating code, tests, fixtures
- grouping tests in modules, classes, directories
- BDD, behavior driven development, cucumber, pytest-bdd
- web development testing
- parallel testing with xdist and IDE support
- refactor rename

Special Guest: Paul Everitt.

Test & Code - Python Testing & Development
105: TAP: Test Anything Protocol - Matt Layman

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 11, 2020 30:13


The Test Anything Protocol, or TAP, is a way to record test results in a language-agnostic way; it predates XML by about 10 years and is still alive and kicking. Matt Layman has contributed to Python in many ways, including his educational newsletter and his Django podcast, Django Riffs. Matt is also the maintainer of tap.py and pytest-tap, two tools that bring the Test Anything Protocol to Python. In this episode, Matt and I discuss TAP, its history, his involvement, and some cool use cases for it. Special Guest: Matt Layman.
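For reference, a minimal sketch of what a TAP stream looks like (the test names are hypothetical; the format is a plan line plus one ok/not ok line per test, with optional directives like # SKIP):

```
1..4
ok 1 - parses config
not ok 2 - handles missing file
ok 3 - retries on timeout
ok 4 - cleans up temp dir # SKIP not needed on this platform
```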

Test & Code - Python Testing & Development
104: Top 28 pytest plugins - Anthony Sottile

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 4, 2020 47:13


pytest is awesome by itself. pytest + plugins is even better. In this episode, Anthony Sottile and Brian Okken discuss the top 28 pytest plugins. Some of the plugins discussed (we also mention a few plugins related to some on this list):
pytest-cov, pytest-timeout, pytest-xdist, pytest-mock, pytest-runner, pytest-instafail, pytest-django, pytest-html, pytest-metadata, pytest-asyncio, pytest-split-tests, pytest-sugar, pytest-rerunfailures, pytest-env, pytest-cache, pytest-flask, pytest-benchmark, pytest-ordering, pytest-watch, pytest-pythonpath, pytest-flake8, pytest-pep8, pytest-repeat, pytest-pylint, pytest-randomly, pytest-selenium, pytest-mypy, pytest-freezegun

Honorable mention: pytest-black, pytest-emoji, pytest-poo

Special Guest: Anthony Sottile.

Test & Code - Python Testing & Development
98: pytest-testmon - selects tests affected by changed files and methods - Tibor Arpas

Test & Code - Python Testing & Development

Play Episode Listen Later Jan 21, 2020 32:58


pytest-testmon is a pytest plugin which selects and executes only the tests you need to run. It does this by collecting dependencies between tests and all executed code (internally using Coverage.py) and comparing the dependencies against changes. testmon updates its database on each test execution, so it works independently of version control. In this episode, I talk with testmon creator Tibor Arpas about testmon, its use, and how it works. Special Guest: Tibor Arpas.

Test & Code - Python Testing & Development
93: Software Testing, Book Writing, Teaching, Public Speaking, and PyCarolinas - Andy Knight

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 31, 2019 30:24


Andy Knight is the Automation Panda. Andy is passionate about software testing, and shares his passion through public speaking, writing on automationpanda.com, teaching as an adjunct professor, and now also through writing a book and organizing a new regional Python conference. Topics of this episode include:
- Andy's book on software testing
- Being an adjunct professor
- Public speaking and preparing talk proposals, including tips from Andy about proposals and preparing for talks
- PyCarolinas

Special Guest: Andy Knight.

Test & Code - Python Testing & Development
90: Dynamic Scope Fixtures in pytest 5.2 - Anthony Sottile

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 11, 2019 33:59


pytest 5.2 was just released, and with it, a cool fun feature called dynamic scope fixtures. Anthony Sottile is one of the pytest core developers, so I thought it'd be fun to have Anthony describe this new feature for us. We also talk about parametrized testing, and really, what is fixture scope and what is dynamic scope. Special Guest: Anthony Sottile.
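As a taste of the feature, a minimal sketch of a dynamically scoped fixture (the CI environment check is just an illustrative policy, not from the episode):

```python
import os
import pytest

# pytest 5.2+: scope= can be a callable that returns a scope string.
def determine_scope(fixture_name, config):
    # Share one instance per session on CI, isolate per test locally.
    return "session" if os.environ.get("CI") else "function"

@pytest.fixture(scope=determine_scope)
def service_connection():
    conn = {"connected": True}  # stand-in for an expensive resource
    yield conn
    conn["connected"] = False   # teardown
```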

Test & Code - Python Testing & Development
87: Paths to Parametrization - from one test to many

Test & Code - Python Testing & Development

Play Episode Listen Later Sep 11, 2019 19:01


There's a cool feature of pytest called parametrization. It's totally one of the superpowers of pytest. It's actually a handful of features, and there are a few ways to approach it. Parametrization is the ability to take one test, and send lots of different input datasets into the code under test, and maybe even have different output checks, all within the same test that you developed in the simple test case. Super powerful, but since there are a few approaches to it, a tad tricky to get the hang of.
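A minimal sketch of the basic approach, one test body fed by a list of input/expected pairs:

```python
import pytest

@pytest.mark.parametrize(
    "word, expected",
    [
        ("radar", True),   # palindrome
        ("hello", False),  # not a palindrome
        ("", True),        # edge case: empty string
    ],
)
def test_is_palindrome(word, expected):
    assert (word == word[::-1]) == expected
```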

paths pytest
Test & Code - Python Testing & Development
85: Speed Up Test Suites - Niklas Meinzer

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 26, 2019 26:32


Good software testing strategy is one of the best ways to save developer time and shorten the software development delivery cycle. Software test suites grow from small quick suites at the beginning of a project to larger suites as we add tests, and the time to run the suites grows with it. Fortunately, pytest has many tricks up its sleeve to help shorten those test suite times. Niklas Meinzer is a software developer who recently wrote an article on optimizing test suites. In this episode, I talk with Niklas about the optimization techniques discussed in the article and how they can apply to just about any project. Special Guest: Niklas Meinzer.

Test & Code - Python Testing & Development
83: PyBites Code Challenges behind the scenes - Bob Belderbos

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 16, 2019 24:03


Bob Belderbos and Julian Sequeira started PyBites (https://pybit.es/) a few years ago. They started doing code challenges along with people around the world and writing about it. Then came the codechalleng.es (https://codechalleng.es/) platform, where you can do code challenges in the browser and have your answer checked by pytest tests. But how does it all work? Bob joins me today to go behind the scenes and share the tech stack running the PyBites Code Challenges platform. We talk about the technology, the testing, and how it went from a cool idea to a working platform. Special Guest: Bob Belderbos.

behind the scenes python django selenium web applications pytest code challenges pybites bob belderbos julian sequeira
Test & Code - Python Testing & Development
82: pytest - favorite features since 3.0 - Anthony Sottile

Test & Code - Python Testing & Development

Play Episode Listen Later Jul 31, 2019 36:35


Anthony Sottile is a pytest core contributor, as well as a maintainer and contributor to many other projects. In this episode, Anthony shares some of the super cool features of pytest that have been added since he started using it. We also discuss Anthony's move from user to contributor, and how others can help with the pytest project. Special Guest: Anthony Sottile.

sottile pytest
Test & Code - Python Testing & Development
80: From Python script to Maintainable Package

Test & Code - Python Testing & Development

Play Episode Listen Later Jul 4, 2019 21:51


This episode is a story about packaging, and flit, tox, pytest, and coverage. And an alternate solution to "using the src". Python makes it easy to build simple tools for all kinds of tasks. And it's great to be able to share small projects with others on your team, in your company, or with the world. When you want to take a script from "just a script" to maintainable package, there are a few steps, but none of it's hard. Also, the structure of the code layout changes to help with the growth and support. Instead of just talking about this from memory, I thought it'd be fun to create a new project and walk through the steps, and report back in a kind of time lapse episode. It should be fun. Here are the steps we walk through:
- 0.1 Initial script and tests
- 0.2 build wheel with flit
- 0.3 build and test with tox
- 0.4 move source module into a package directory
- 0.5 move tests into tests directory
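A minimal sketch of the tox side of a setup like this (assuming a pyproject.toml-based package, e.g. built with flit, hence the isolated_build setting):

```ini
# tox.ini -- build the package, install it into a clean virtualenv,
# and run pytest against the installed copy.
[tox]
envlist = py39
isolated_build = true

[testenv]
deps = pytest
commands = pytest
```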

Teach Erik Code
I thought you just said wasabi

Teach Erik Code

Play Episode Listen Later May 3, 2019 44:55


Episode 2: Hiring process yo-yos, pay rates, an update on Erik's portfolio site, and a plan to move forward. Oh, yeah, impostor syndrome. Mock up sent to the tech lead: https://codesandbox.io/s/9ylplv5y1p Erik's portfolio site mock up: https://codesandbox.io/s/vn02vklxwl Erik's portfolio project: https://git.etherealvisions.us/teacherikcode/eriks_portfolio Follow us on Twitter: https://twitter.com/code_erik/ Coming soon: https://teacherikcode.com/ Pytest: https://docs.pytest.org/en/latest/ Figma: https://www.figma.com/ Music: NEO - NIGHTMARE by Clown. NEO SoundCloud: @neofrance Facebook: www.facebook.com/NEO-1467258656867045/ Clown YouTube: www.youtube.com/user/ClownDubstep SoundCloud: @clowndubstep Facebook: www.facebook.com/ClownDubstepOfficial?_rdr Twitter: twitter.com/ClownDubstep Released by: Clown. Release date: 17 December 2015. P-line: ℗ Clown. NEO - NIGHTMARE by Clown Music is licensed under a Creative Commons License.

Test & Code - Python Testing & Development

Is it ok to have more than one assert statement in a test? I've seen articles that say no, you should never have more than one assert. I've also seen some test code made almost unreadable due to trying to avoid more than one assert per test. Where did this recommendation even come from? What are the reasons? What are the downsides to both perspectives? That's what we're going to talk about today.

Test & Code - Python Testing & Development

A look back on 3 years of podcasting, and a bit of a look forward to what to expect in 2019. Top 5 episodes:
2: Pytest vs Unittest vs Nose (https://testandcode.com/2)
33: Katharine Jarmul - Testing in Data Science (https://testandcode.com/33)
18: Testing in Startups and Hiring Software Engineers with Joe Stump (https://testandcode.com/18)
45: David Heinemeier Hansson - Software Development and Testing, TDD, and exploratory QA (https://testandcode.com/45)
27: Mahmoud Hashemi: unit, integration, and system testing (https://testandcode.com/27)
Honorable mention: 32: David Hussman - Agile vs Agility, Dude's Law, and more (https://testandcode.com/32)
This episode also went through lots of: what went well, what was lacking, and what's next. Please listen and let me know where I should take this podcast.

Test & Code - Python Testing & Development
REST APIs, testing with Docker containers and pytest

Test & Code - Python Testing & Development

Play Episode Listen Later Dec 14, 2018 28:09


Let's say you've got a web application you need to test. It has a REST API that you want to use for testing. Can you use Python for this testing even if the application is written in some other language? Of course. Can you use pytest? duh. yes. what else? What if you want to spin up docker instances, get your app running in that, and run your tests against that environment? How would you use pytest to do that? Well, there, I'm not exactly sure. But I know someone who does. Dima Spivak is the Director of Engineering at StreamSets, and he and his team are doing just that. He's also got some great advice on utilizing code reviews across teams for test code, and a whole lot more. Special Guest: Dima Spivak.
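A minimal sketch of the idea, testing a hypothetical REST endpoint from pytest with the requests library (the URL and route are made up; in the setup discussed here they would point at the app running in a Docker container):

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the app under test

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
```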

Test & Code - Python Testing & Development
50: Flaky Tests and How to Deal with Them

Test & Code - Python Testing & Development

Play Episode Listen Later Oct 25, 2018 32:20


Anthony Shaw joins Brian to discuss flaky tests and flaky test suites. What are flaky tests? Is it the same as fragile tests? Why are they bad? How do we deal with them? What causes flakiness? How can we fix them? How can we avoid them? Also discussed: proactively rooting out flakiness, test design, GUI tests, and sharing solutions. Special Guest: Anthony Shaw.

Test & Code - Python Testing & Development

The story of how I came to find a good user interface for running and debugging automated tests is interleaved with a multi-year effort of mine to have a test workflow that works smoothly with product development and actually speeds things up. It's also interleaved with the origins of the blog pythontesting.net, this podcast, and the pytest book I wrote with Pragmatic. It's not a long story. And it has a happy ending. Well, it's not over, but I'm happy with where we are now. I'm also hoping that this tale of my dedication to, or obsession with, quality and developer efficiency helps you in your own efforts to make your daily workflow better, and to extend that to try to increase the efficiency of those you work with.

pragmatic pytest
Castálio Podcast
Episódio 125: Bruno Oliveira - Pytest

Castálio Podcast

Play Episode Listen Later Jan 15, 2018


Hello everyone, and welcome to another episode of the Castálio Podcast! Today's guest lives in Florianópolis and contributes to various open source projects in his free time. He discovered pytest 5 or 6 years ago and immediately fell in love with the project, so he started contributing and has been a pytest core developer for more than 4 years. His contributions to the project are daily, and he is also the maintainer of various plugins for pytest: pytest-xdist, pytest-mock, pytest-faulthandler, pytest-cpp, pytest-qt. As you can see, today's episode is about pytest, and our guest is Bruno Oliveira.


Test & Code - Python Testing & Development
25: Selenium, pytest, Mozilla – Dave Hunt

Test & Code - Python Testing & Development

Play Episode Listen Later Dec 1, 2016 42:20


Interview with Dave Hunt @davehunt82 (https://twitter.com/davehunt82). We cover:
Selenium Driver (http://www.seleniumhq.org/)
pytest (http://docs.pytest.org/)
pytest plugins: pytest-selenium (http://pytest-selenium.readthedocs.io/), pytest-html (https://pypi.python.org/pypi/pytest-html), pytest-variables (https://pypi.python.org/pypi/pytest-variables)
tox (https://tox.readthedocs.io)
Dave Hunt’s “help wanted” list on github (https://github.com/search?utf8=%E2%9C%93&q=author%3Adavehunt+type%3Aissue+label%3A%22help+wanted%22+state%3Aopen+no%3Aassignee)
Mozilla (https://www.mozilla.org)
Also: fixtures, xfail, CI and xfail and html reports, CI and capturing, the pytest code sprint, and working remotely for Mozilla.
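For flavor, a minimal sketch of a pytest-selenium test (run with a driver selected on the command line, e.g. pytest --driver Firefox; the page and title check are illustrative):

```python
# pytest-selenium injects a ready-to-use WebDriver via the `selenium` fixture.
def test_homepage_title(selenium):
    selenium.get("https://www.mozilla.org")
    assert "Mozilla" in selenium.title
```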

Test & Code - Python Testing & Development
24: pytest with Raphael Pierzina

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 10, 2016 35:15


pytest is an extremely popular test framework used by many projects and companies. In this episode, I interview Raphael Pierzina (@hackebrot (https://twitter.com/hackebrot)), a core contributor to both pytest and cookiecutter. We discuss how Raphael got involved with both projects, his involvement in cookiecutter, pytest, "adopt pytest month", the pytest code sprint, and of course some of the cool new features in pytest 3. Links: Raphael Pierzina on twitter (@hackebrot (https://twitter.com/hackebrot)) pytest - http://doc.pytest.org (http://doc.pytest.org/en/latest/) cookie cutter - https://github.com/audreyr/cookiecutter (https://github.com/audreyr/cookiecutter) cookiecutter-pytest-plugin - https://github.com/pytest-dev/cookiecutter-pytest-plugin (https://github.com/pytest-dev/cookiecutter-pytest-plugin)

pytest pierzina
Test & Code - Python Testing & Development
19: Python unittest with Robert Collins

Test & Code - Python Testing & Development

Play Episode Listen Later Jun 15, 2016 40:25


Interview with Robert Collins, current core maintainer of Python's unittest module. Some of the topics covered:
- How did Robert become the maintainer of unittest?
- unittest2 as a rolling backport of unittest
- test and class parametrization with subtest and testscenarios
- Which extension to unittest most closely resembles pytest fixtures?
- Comparing pytest and unittest
- Will unittest ever get assert rewriting?
- Future changes to unittest

I've been re-studying unittest recently and I mostly wanted to ask Robert a bunch of clarifying questions. This is an intermediate to advanced discussion of unittest. Many great features of unittest go by quickly in this talk. Please let me know if there's something you'd like me to cover in more depth as a blog post or a future episode.

Links:
unittest (https://docs.python.org/3.5/library/unittest.html)
unittest2 (https://pypi.python.org/pypi/unittest2)
pip (https://docs.python.org/3.5/installing/)
mock (https://docs.python.org/dev/library/unittest.mock.html)
testtools (https://testtools.readthedocs.io/en/latest/)
fixtures (https://pypi.python.org/pypi/fixtures)
testscenarios (https://pypi.python.org/pypi/testscenarios)
subunit (https://pypi.python.org/pypi/python-subunit)
pipserver (https://pypi.python.org/pypi/pypiserver)
devpi (https://pypi.python.org/pypi/devpi-server)
testresources (https://pypi.python.org/pypi/testresources)
TIP (testing in python) mailing list (http://lists.idyll.org/listinfo/testing-in-python)

Test & Code - Python Testing & Development

How pytest, unittest, and nose deal with assertions. A test framework's job of telling developers how and why their tests failed is a difficult one. In this episode I talk about assert helper functions and the 3 methods pytest uses to get around having users need to use assert helper functions.
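A minimal sketch of the contrast discussed here: an assert helper in unittest next to a plain pytest-style assert (pytest's assertion rewriting is what makes the bare assert report useful detail on failure):

```python
import unittest

# unittest style: the framework learns about the comparison through the
# assertEqual helper, which is how it builds a useful failure message.
class SumTests(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

# pytest style: a bare assert is enough; pytest rewrites it behind the
# scenes so the failure message shows the evaluated sub-expressions.
def test_sum_plain_assert():
    assert sum([1, 2, 3]) == 6
```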

Test & Code - Python Testing & Development
2: Pytest vs Unittest vs Nose

Test & Code - Python Testing & Development

Play Episode Listen Later Aug 20, 2015 12:17


I list my requirements for a framework and discuss how Pytest, Unittest, and Nose measure up to those requirements. Mentioned: pytest (http://pythontesting.net/framework/pytest/pytest-introduction/) unittest (http://pythontesting.net/framework/unittest/unittest-introduction/) nose (http://pythontesting.net/framework/nose/nose-introduction/) delayed assert (http://pythontesting.net/strategy/delayed-assert/) pytest-expect (http://pythontesting.net/pytest-expect/) doctest (http://pythontesting.net/framework/doctest/doctest-introduction/) I did the audio processing differently for this episode. Please let me know how it sounds, if there are any problems, etc.

nose pytest
Kodsnack
Kodsnack 65 - Den andra dåliga idén

Kodsnack

Play Episode Listen Later Aug 24, 2014 56:38


We talk about updating your applications, how Tobias updated Plex, and problems with installers. Tobias recommends pytest and tells us how he improved the update handling. Tobias reveals neat tricks you can pull off when you want to update apps on a Mac without having to download every single file again. Then we talk about code written for research, and get into valuing good structure in the code and everything around it - the kind of thing we as code craftspeople value highly, but maybe not everyone who writes code does. Problems in the STL round things off. Feel free to discuss the episode on Techworld.

Links:
We talk a little about Microsoft
I am Groot
Plex auto-update infrastructure
Plex Home Theater
Delta update - an update in which you fetch only what has changed, instead of absolutely everything. A binary delta means the pure, raw binary-data changes in each file, instead of, for example, each changed file in its entirety
Bsdiff/bspatch
Test suite - a set of tests
Pytest
Foo and bar - nonsense names that are (all too) often used in example code
Fixture
Decorator
Nose
Jenkins
Pop the stack - remove and return the top element of the stack
The Windows registry - Windows' central database for settings
Code signing of applications on OS X
DMG - disk image, the file format Apple uses to represent mountable volumes
.deb and .rpm - the Debian and Red Hat Linux distributions' files for distributing software packages
Windows installer - .msi
WIX - Windows installer XML
SOAP
OSGi - a "modular system and service platform" for Java
Byggare Bob (Bob the Builder) - our episode about build systems
TAR - a venerable file format and program for data storage
Blizzard's installer
A land war in Asia… - quote from The Princess Bride
The worst API ever made
The story behind Direct3D
DirectX 12 - the latest version of DirectX
Rendermorphics - the company Direct3D was bought from
Apple's installation infrastructure
One little package of hate - Edge Cases' episode about Apple's installation system
Resource agents
High availability clusters
libvirt
20,000 lines of code (in libvirt)
Xen and KVM
VMWare
LXC
The BSDiff algorithm
The AirMech makers' fork of BSDiff
UML diagrams - used to model and visualize system design
Lua - a language popular for, among other things, high-level logic in games
Game Engine Architecture
Frostbite
The Unreal engine
Unity
EA was founded in 1982
EASTL
Concepts in C++ - which don't exist
export in C++ - the keyword that only a single compiler managed to implement
Map in STL
std::map.find - returns an iterator