Podcasts about DjangoCon

  • 19 podcasts
  • 34 episodes
  • 53m average duration
  • 1 new episode per month
  • Latest episode: Apr 25, 2025

POPULARITY (chart covering 2017–2024)


Best podcasts about DjangoCon

Latest podcast episodes about DjangoCon

Python Podcast
Live from DjangoCon Europe 2025 in Dublin - Day 3


Apr 25, 2025 · 42:53


Live from DjangoCon Europe 2025 in Dublin - Day 3. 25 April 2025, Jochen. We're checking in again from DjangoCon Europe 2025, this time from the hotel lobby. Joining us is Sebastian, who gave a talk on the first day about the finer points of the Django release notes, which we unfortunately couldn't see because we were still busy recording the podcast. He is also from the Rhineland and runs a software development and consulting agency in Cologne. In this episode we discuss:

Python Podcast
Live from DjangoCon Europe 2025 in Dublin - Day 2


Apr 24, 2025 · 66:15 · Transcription available


Live from DjangoCon Europe 2025 in Dublin - Day 2. 24 April 2025, Jochen. We check in once more from DjangoCon Europe and talk about the highlights of the second conference day, with plenty of technical insights, exciting talks and personal impressions. Joining us this time: Ronny as a guest in our round!

Python Podcast
Live from DjangoCon Europe 2025 in Dublin - Day 1


Apr 23, 2025 · 36:14 · Transcription available


Live from DjangoCon Europe 2025 in Dublin - Day 1. 23 April 2025, Jochen. In this special edition we check in live from DjangoCon Europe in Dublin!

Software Misadventures
LLMs are like your weird, over-confident intern | Simon Willison (Datasette)


Sep 10, 2024 · 115:50


Known for co-creating Django and Datasette, as well as his thoughtful writing on LLMs, Simon Willison joins the show to chat about blogging as an accountability mechanism, how to build intuition with LLMs, building a startup with his partner on their honeymoon, and more.   Segments: (00:00:00) The weird intern (00:01:50) The early days of LLMs (00:04:59) Blogging as an accountability mechanism (00:09:24) The low-pressure approach to blogging (00:11:47) GitHub issues as a system of records (00:16:15) Temporal documentation and design docs (00:18:19) GitHub issues for team collaboration (00:21:53) Copy-paste as an API (00:26:54) Observable notebooks (00:28:50) pip install LLM (00:32:26) The evolution of using LLMs daily (00:34:47) Building intuition with LLMs (00:43:24) Democratizing access to automation (00:47:45) Alternative interfaces for language models (00:53:39) Is prompt engineering really engineering? (00:58:39) The frustrations of working with LLMs (01:01:59) Structured data extraction with LLMs (01:06:08) How Simon would go about building a LLM app (01:09:49) LLMs making developers more ambitious (01:13:32) Typical workflow with LLMs (01:19:58) Vibes-based evaluation (01:23:25) Staying up-to-date with LLMs (01:27:49) The impact of LLMs on new programmers (01:29:37) The rise of 'Goop' and the future of software development (01:40:20) Being an independent developer (01:42:26) Staying focused and accountable (01:47:30) Building a startup with your partner on the honeymoon (01:51:30) The responsibility of AI practitioners (01:53:07) The hidden dangers of prompt injection (01:53:44) “Artificial intelligence” is really “imitation intelligence”   Show Notes: Simon's blog: https://simonwillison.net/ Natalie's post on them building a startup together: https://blog.natbat.net/post/61658401806/lanyrd-from-idea-to-exit Simon's talk from DjangoCon: https://www.youtube.com/watch?v=GLkRK2rJGB0 Simon on twitter: https://x.com/simonw Datasette: https://github.com/simonw/datasette   Stay in touch:

Environment Variables
The Week in Green Software: Carbon Hack 24 Recap


Jun 13, 2024 · 63:50


TWiGS host Chris Adams is joined by Asim Hussain, the executive director of the GSF, to talk about the recent hackathon hosted by the GSF: Carbon Hack 24. Asim goes through some of his favourite projects that featured work with the Impact Framework, including some surprising choices! They also cover some interesting news from the world of cloud service providers and the new CSDDD developments. Asim also talks about how mushrooms are out and bread is in!

Sustain
Episode 223: OSCA 2023 with Mannie William Young on the Python community in Ghana & PyCon Africa


Mar 8, 2024 · 19:00


Guest Mannie William Young Panelist Richard Littauer Show Notes In this episode, host Richard invites guest Mannie Young from Ghana's Python community to share his experiences in open source development. Mannie discusses his role as the Executive Director of the Python Software Community in Ghana and his involvement in organizing PyCon Africa. He provides insights into the significant growth of the Python community in Ghana and the various initiatives under it. He also discusses the Nigerian open source community's vibrancy, the Python community's development in Ghana, and reflects on his experiences at OSCA and Sustain events. Mannie touches on cultural differences affecting community sustainability and funding opportunities, and he shares insights on how to get involved with PyCon Africa and Python Ghana, highlighting the new PyClubs initiative. Hit download now to hear more! [00:00:59] Mannie mentions his active contribution to the Python software community and his roles as the Executive Director of Python Ghana and organizer of PyCon Africa. [00:02:02] Mannie discusses his experience at OSCA Fest 2023, insights from the Sustain session, as well as the importance of documentation in open source. [00:06:14] Mannie explains the growth of the Python community in Ghana and its various initiatives, like PyLadies Ghana and PyData Ghana. [00:07:11] There's a discussion about OSCA's event in Lagos and the Sustain event. Although Mannie was not part of the organizing team this year, he shares some highlights from OSCA including great talks, diversity, and a welcoming environment. He also tells us about the Sustain workshops he attended, focusing on design and community. [00:10:04] The conversation shifts to compare the open source communities in Ghana and Nigeria, with an emphasis on social media presence and advocacy. [00:11:36] Mannie discusses the impact of being reserved on funding and opportunities in the Ghanaian open source community, along with the cultural differences affecting sustainability. [00:12:30] Richard and Mannie address a recent issue with DjangoCon and the PSF regarding discrepancies in approaches to funding and community support, along with cultural and legal considerations in Africa. [00:15:33] Richard inquires about how people can get involved with PyCon Africa, PyCon Ghana, and Mannie's communities. Mannie explains that preparations for PyCon Africa 2024 are underway and provides contact emails and websites. [00:17:08] Find out where you can follow Mannie and his blog on the web.
Quotes [00:11:08] “If you don't blog about things, no one knows what you were doing.” Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Mastodon (https://mastodon.social/@richlitt) Mannie Young Website (https://www.mannieyoung.com/) Mannie Young LinkedIn (https://www.linkedin.com/in/mawy7/?originalSubdomain=gh) An Open Letter to the Python Software Foundation (Python Africa) (https://pythonafrica.blogspot.com/2023/12/an-open-letter-to-python-software_5.html) PyCon Ghana (https://gh.pycon.org/) PyClubs (https://www.pyclubs.org/) PyLadies Ghana (https://blog.pythonghana.org/series/pyladies) PyData Ghana (https://blog.pythonghana.org/series/pydata) OSCAfrica (https://oscafrica.org/about-us) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Mannie William Young.

Octobot Tech Talks
E20 - DjangoCon 2023: How our speakers lived the experience


Dec 5, 2023 · 26:00


In this new edition of DjangoCon, two of our Software Engineers, Dara Silvera and Eli Rosselli, were selected as speakers to represent Octobot.

Python Bytes
#339 Actual Technical People


Jun 7, 2023 · 30:43


Watch on YouTube About the show Sponsored by InfluxDB from Influxdata. Connect with the hosts Michael: @mkennedy@fosstodon.org Brian: @brianokken@fosstodon.org Show: @pythonbytes@fosstodon.org Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too. Michael #1: pystack PyStack is a tool that uses forbidden magic to let you inspect the stack frames of a running Python process or a Python core dump, helping you quickly and easily learn what it's doing. PyStack has the following amazing features:
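
A minimal command-line sketch of how PyStack is typically driven, for readers who want to try it. The subcommand names and the example PID below are assumptions drawn from the project's documentation, so verify them against the PyStack README before relying on them:

  python3 -m pip install pystack   # install the CLI
  pystack remote 1234              # print the Python stack of the running process with PID 1234
  pystack core ./core.dump         # reconstruct the Python stack frames captured in a core dump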

Python Podcast
GUI applications, using MiaPlan as an example


May 4, 2023


Sustain
Episode 169: Dawn Wages of PSF on organizing communities, ethical licenses, and more


Apr 21, 2023 · 36:45


Guest Dawn Wages Panelist Richard Littauer Show Notes Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. Today, Richard is very excited to have as his guest, Dawn Wages, who's the Python Community Advocate at Microsoft, Core Team Member for Wagtail, DjangoCon Organizer, and Director and Treasurer for the Python Software Foundation. We'll hear Dawn's journey into how she got involved with the PSF and as a Python Community Advocate at Microsoft, she explains how to become a PSF member, as well as the benefits, since they've made some changes recently. She explains where she falls on the ethical source divide and dives into the AntiRacist Ethical Source License, which is her niche. Also, she shares advice on how communities can be more sustainable at navigating conflict in their communities and reveals that we should lead with empathy. If you're looking at going to a conference this year, there's some great DjangoCon's and a PyCon going on that are worth checking out. Hit download now to hear more! [00:03:31] We hear how Dawn got involved with the PSF and how she became the Python Community Advocate at Microsoft. [00:05:23] Dawn shares why foundations in the open source space seem to continually have this community voting way of entering into the board, if she thinks it's healthy, and if she thought about it when she was working on Django's new process. [00:08:27] Both dollars and time are things which are often barriers to entry for DEI, so how does that help diversity, equity, and inclusion versus how it hurts it? Also, we hear about Wagtail and Torchbox and what they do. [00:11:40] Dawn mentioned that the PSF lowered the dollar amount and Open Collective, so now we hear the benefits it gives to an individual to become a member of the PSF, if that's something people should think about if they're working in Python, and if it's possible to join on behalf of the project and not their company. [00:13:30] We hear about a tool called, Fiscal Sponsoree, with the PSF. [00:14:50] Dawn fills us in on DjangoCon 2023, the financing structure for keeping Django going, how they think about sustainability in their community, and DjangoCon Africa 2023. [00:16:51] What does a sponsored chair do? [00:19:04] Richard wonders how Dawn thinks about the return on investment for her ultimate strategy, why these conferences, and what's the ultimate narrative arc for her seventh season open source Bajor story. Also, she explains why she's the treasurer. [00:22:56] Richard explains what the Ethical Source Movement is and wonders how Dawn holds the tension and where she falls on the ethical source divide. [00:24:37] We hear Richard's opinion on one of the problems with open source requiring a huge layout of upfront investment in hours and time and no guarantee that it will pay off, and the work being detrimental to mental health of people working on it. Dawn talks about the Anti-Racist License and explains the “PIES” check-in. [00:28:12] Dawn shares advice on how to help communities be more sustainable at navigating trauma and conflict in their communities without it becoming a drain on resources. [00:31:00] Listen here for a list of conferences you should go to that are Python and Django and where you can follow Dawn on the web. Quotes [00:08:58] “Open source is not accessible for everyone, and it's not a great method for everyone. 
It is people who have support elsewhere somehow.” [00:26:34] “I think there are tools we can use to be able to acknowledge the humanity of the individuals contributing, and being flexible and thoughtful about the goals we are trying to meet as a collective, and the goals the individual is trying to contribute or try to receive.” Spotlight [00:33:21] Richard's spotlight is his friend, Danielle Garber, who's a personal coach and makes amazing hand woven things. [00:34:08] Dawn's spotlight is Jeff Triplett, Director of PSF, and Coraline Ada Ehmke, lead organizer for the Organization for Ethical Source. Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) Richard Littauer Twitter (https://twitter.com/richlitt?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Dawn Wages Twitter (https://twitter.com/BajoranEngineer) Dawn Wages Website (https://dawnwages.info/) Dawn Wages Mastodon (https://mastodon.online/@fly00gemini8712) Python Software Foundation (https://www.python.org/psf-landing/) At The Root (https://attheroot.dev/) DjangoCon 2023 (Durham, North Carolina) (https://2023.djangocon.us/) DjangoCon 2023 (Edinburgh, Scotland) (https://2023.djangocon.eu/) DjangoCon Africa 2023 ( Zanzibar, Tanzania) (https://2023.djangocon.africa/) PyCon 2023 (Salt Lake City, Utah) (https://us.pycon.org/2023/) Sustain Podcast-Episode 75: Deb Nicholson on the OSI, the future of open source, and SeaGL (https://podcast.sustainoss.org/75) Wagtail (https://wagtail.org/) Torchbox (https://torchbox.com/) Fiscal Sponsorees (https://www.python.org/psf/fiscal-sponsorees/) AntiRacist Ethical Source License (https://github.com/AtTheRoot/ATR-License) Every Thread Handwoven (Danielle Garber) (https://www.everythreadhandwoven.com/) Jeff Triplett Website (https://jefftriplett.com/about/) Coraline Ada Ehmke Website (https://where.coraline.codes/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Dawn Wages.

Django Chat
DjangoCon US 2023 - Drew Winstel


Mar 15, 2023 · 65:21


Drew's Personal Site
@drewbrew on Mastodon and https://takahe.social/@drew
DjangoCon US 2023 and on GitHub
Django Events Foundation North America (DEFNA)
Drew's 2018 DCUS talk
DCUS call for volunteers
Sponsor DCUS 2023
https://thenounproject.com/icon/dumpster-fire-4367573/
HSV.beer, on Instagram and on GitHub
Drunk: How We Sipped, Danced, and Stumbled Our Way to Civilization
East Coast Greenway
American Tobacco Trail (14 miles paved, 22 total); directions from the hotel to the trailhead
Support the Show
This podcast does not have any ads or sponsors. To support the show, please consider purchasing a book, signing up for Button, or reading the Django News newsletter.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
ChatGPT, GPT4 hype, and Building LLM-native products — with Logan Kilpatrick of OpenAI


Feb 23, 2023 · 51:37


We're so glad to launch our first podcast episode with Logan Kilpatrick! This also happens to be his first public interview since joining OpenAI as their first Developer Advocate. Thanks Logan!Recorded in-person at the beautiful StudioPod studios in San Francisco. Full transcript is below the fold.Timestamps* 00:29: Logan's path to OpenAI* 07:06: On ChatGPT and GPT3 API* 16:16: On Prompt Engineering* 20:30: Usecases and LLM-Native Products* 25:38: Risks and benefits of building on OpenAI* 35:22: OpenAI Codex* 42:40: Apple's Neural Engine* 44:21: Lightning RoundShow notes* Sam Altman's interview with Connie Loizos* OpenAI Cookbook* OpenAI's new Embedding Model* Cohere on Word and Sentence Embeddings* (referenced) What is AGI-hard?Lightning Rounds* Favorite AI Product: https://www.synthesia.io/* Favorite AI Community: MLOps * One year prediction: Personalized AI, https://civitai.com/* Takeaway: AI Revolution is here!Transcript[00:00:00] Alessio Fanelli: Hey everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my cohost, swyx writer editor of L Space Diaries. Hey.[00:00:20] swyx: Hey . Our guest today is Logan Kilpatrick. What I'm gonna try to do is I'm gonna try to introduce you based on what people know about you, and then you can fill in the blanks.[00:00:28] Introducing Logan[00:00:28] swyx: So you are the first. Developer advocate at OpenAI, which is a humongous achievement. Congrats. You're also the lead developer community advocate of the Julia language. I'm interested in a little bit of that and apparently as I've did a bit of research on you, you got into Julia through NASA where you interned and worked on stuff that's gonna land on the moon apparently.[00:00:50] And you are also working on computer vision at Apple. And had to sit at path, the eye as you fell down the machine learning rabbit hole. What should people know about you that's kind of not on your LinkedIn that like sort of ties together your interest[00:01:02] Logan Kilpatrick: in story? It's a good question. I think so one of the things that is on my LinkedIn that wasn't mentioned that's super near and dear to my heart and what I spend a lot of time in sort of wraps a lot of my open source machine learning developer advocacy experience together is supporting NumFOCUS.[00:01:17] And NumFOCUS is the nonprofit that helps enable a bunch of the open source scientific projects like Julia, Jupyter, Pandas, NumPy, all of those open source projects are. Facilitated legal and fiscally through NumFOCUS. So it's a very critical, important part of the ecosystem and something that I, I spend a bunch of my now more limited free time helping support.[00:01:37] So yeah, something that's, It's on my LinkedIn, but it's, it's something that's important to me. Well,[00:01:42] swyx: it's not as well known of a name, so maybe people kind of skip over it cuz they were like, I don't know what[00:01:45] Logan Kilpatrick: to do with this. Yeah. It's super interesting to see that too. Just one point of context for that is we tried at one point to get a Wikipedia page for non focus and it's, it's providing, again, the infrastructure for, it's like a hundred plus open source scientific projects and they're like, it's not notable enough.[00:01:59] I'm like, well, you know, there's something like 30 plus million developers around the world who use all these open source tools. It's like the foundation. All open source like science that happens. 
Every breakthrough in science is they discovered the black hole, the first picture of the black hole, all that stuff using numb focus tools, the Mars Rovers, NumFOCUS tools, and it's interesting to see like the disconnect between the nonprofit that supports those projects and the actual success of the projects themselves.[00:02:26] swyx: Well, we'll, we'll get a bunch of people focused on NumFOCUS and we'll get it on Wikipedia. That that is our goal. . That is the goal. , that is our shot. Is this something that you do often, which is you? You seem to always do a lot of community stuff. When you get into something, you're also, I don't know where this, where you find time for this.[00:02:42] You're also a conference chair for DjangoCon, which was last year as well. Do you fall down the rabbit hole of a language and then you look for community opportunities? Is that how you get into.[00:02:51] Logan Kilpatrick: Yeah, so the context for Django stuff was I'd actually been teaching and still am through Harvard's division of continuing education as a teaching fellow for a Django class, and had spent like two and a half years actually teaching students every semester, had a program in Django and realized that like it was kind of the one ecosystem or technical tool that I was using regularly that I wasn't actually contributing to that community.[00:03:13] So, I think sometime in 2021 like applied to be on the board of directors of the Django Events Foundation, north America, who helps run DjangoCon and was fortunate enough to join a support to be the chair of DjangoCon us and then just actually rolled off the board because of all the, all the craziness and have a lot less free time now.[00:03:32] And actually at PATH ai. Sort of core product was also using, was using Django, so it also had a lot of connections to work, so it was a little bit easier to justify that time versus now open ai. We're not doing any Django stuff unfortunately, so, or[00:03:44] swyx: Julia, I mean, should we talk about this? Like, are you defecting from Julia?[00:03:48] What's going on? ,[00:03:50] Logan Kilpatrick: it's actually felt a little bit strange recently because I, for the longest time, and, and happy to talk about this in the context of Apple as well, the Julie ecosystem was my outlet to do a lot of the developer advocacy, developer relations community work that I wanted to do. because again, at Apple I was just like training machine learning models.[00:04:07] Before that, doing software engineering at Apple, and even at Path ai, we didn't really have a developer product, so it wasn't, I was doing like advocacy work, but it wasn't like developer relations in the traditional sense. So now that I'm so deeply doing developer relations work at Open OpenAI, it's really difficult to.[00:04:26] Continue to have the energy after I just spent nine hours doing developer relations stuff to like go and after work do a bunch more developer relations stuff. So I'll be interested to see for myself like how I'm able to continue to do that work and I. The challenge is that it's, it's such critical, important work to happen.[00:04:43] Like I think the Julie ecosystem is so important. I think the language is super important. It's gonna continue to grow in, in popularity, and it's helping scientists and engineers solve problems they wouldn't otherwise be able to. 
So it's, yeah, the burden is on me to continue to do that work, even though I don't have a lot of time now.[00:04:58] And I[00:04:58] Alessio Fanelli: think when it comes to communities, the machine learning technical community, I think in the last six to nine months has exploded. You know, you're the first developer advocate at open ai, so I don't think anybody has a frame of reference on what that means. What is that? ? So , what do you, how did, how the[00:05:13] swyx: job, yeah.[00:05:13] How do you define the job? Yeah, let's talk about that. Your role.[00:05:16] Logan Kilpatrick: Yeah, it's a good question and I think there's a lot of those questions that actually still exist at OpenAI today. Like I think a lot of traditional developed by advocacy, at least like what you see on Twitter, which I think is what a lot of people's perception of developer advocacy and developer relations is, is like, Just putting out external content, going to events, speaking at conferences.[00:05:35] And I think OpenAI is very unique in the sense that, at least at the present moment, we have so much inbound interest that there's, there is no desire for us to like do that type of developer advocacy work. So it's like more from a developer experience point of view actually. Like how can we enable developers to be successful?[00:05:53] And that at the present moment is like building a strong foundation of documentation and things like that. And we had a bunch of amazing folks internally who were. Who were doing some of this work, but it really wasn't their full-time job. Like they were focused on other things and just helping out here and there.[00:06:05] And for me, my full-time job right now is how can we improve the documentation so that people can build the next generation of, of products and services on top of our api. And it's. Yeah. There's so much work that has to happen, but it's, it's, it's been a ton of fun so far. I find[00:06:20] swyx: being in developer relations myself, like, it's kind of like a fill in the blanks type of thing.[00:06:24] Like you go to where you, you're needed the most open. AI has no problem getting attention. It is more that people are not familiar with the APIs and, and the best practices around programming for large language models, which is a thing that did not exist three years ago, two years ago, maybe one year ago.[00:06:40] I don't know. When she launched your api, I think you launched Dall-E. As an API or I, I don't[00:06:45] Logan Kilpatrick: know. I dunno. The history, I think Dall-E was, was second. I think it was some of the, like GPT3 launched and then GPT3 launched and the API I think like two years ago or something like that. And then Dali was, I think a little more than a year ago.[00:06:58] And then now all the, the Chachi Beast ChatGPT stuff has, has blown it all outta the water. Which you have[00:07:04] swyx: a a wait list for. Should we get into that?[00:07:06] Logan Kilpatrick: Yeah. .[00:07:07] ChatGPT[00:07:07] Alessio Fanelli: Yeah. We would love to hear more about that. We were looking at some of the numbers you went. Zero to like a million users in five days and everybody, I, I think there's like dozens of ChatGPT API wrappers on GitHub that are unofficial and clearly people want the product.[00:07:21] Like how do you think about that and how developers can interact with it.[00:07:24] Logan Kilpatrick: It. 
It's absolutely, I think one of the most exciting things that I can possibly imagine to think about, like how much excitement there was around ChatGPT and now getting to hopefully at some point soon, put that in the hands of developers and see what they're able to unlock.[00:07:38] Like I, I think ChatGPT has been a tremendous success, hands down without a question, but I'm actually more excited to see what developers do with the API and like being able to build those chat first experiences. And it's really fascinating to see. Five years ago or 10 years ago, there was like, you know, all this like chatbot sort of mm-hmm.[00:07:57] explosion. And then that all basically went away recently, and the hype went to other places. And I think now we're going to be closer to that sort of chat layer and all these different AI chat products and services. And it'll be super interesting to see if that sticks or not. I, I'm not. , like I think people have a lot of excitement for ChatGPT right now, but it's not clear to me that that that's like the, the UI or the ux, even though people really like it in the moment, whether that will stand the test of time, I, I just don't know.[00:08:23] And I think we'll have to do a podcast in five years. Right. And check in and see whether or not people are still really enjoying that sort of conversational experience. I think it does make sense though cause like that's how we all interact and it's kind of weird that you wouldn't do that with AI products.[00:08:37] So we. and I think like[00:08:40] Alessio Fanelli: the conversational interface has made a lot of people, first, the AI to hallucinate, you know, kind of come up with things that are not true and really find all the edge cases. I think we're on the optimism camp, you know, like we see the potential. I think a lot of people like to be negative.[00:08:56] In your role, kind of, how do you think about evangelizing that and kind of the patience that sometimes it takes for these models to become.[00:09:03] Logan Kilpatrick: Yeah, I think what, what I've done is just continue to scream from the, the mountains that like ChatGPT has, current form is definitely a research preview. The model that underlies ChatGPT GPT 3.5 is not a research preview.[00:09:15] I think there's things that folks can do to definitely reduce the amount of hall hallucinations and hopefully that's something that over time I, I, again have full confidence that it'll, it'll solve. Yeah, there's a bunch of like interesting engineering challenges. you have to solve in order to like really fix that problem.[00:09:33] And I think again, people are, are very fixated on the fact that like in, you know, a few percentage points of the conversations, things don't sound really good. Mm-hmm. , I'm really more excited to see, like, again when the APIs and the Han developers like what are the interesting solutions that people come up with, I think there's a lot that can be explored and obviously, OpenAI can explore all them because we have this like one product that's using the api.[00:09:56] And once you get 10,000, a hundred thousand developers building on top of that, like, we'll see what are the different ways that people handle this. And I imagine there's a lot of low-hanging fruit solutions that'll significantly improve the, the amount of halluc hallucinations that are showing up. Talk about[00:10:11] swyx: building on top of your APIs.[00:10:13] Chat GPTs API is not out yet, but let's assume it is. Should I be, let's say I'm, I'm building. 
A choice between GP 3.5 and chat GPT APIs. As far as I understand, they are kind of comparable. What should people know about deciding between either of them? Like it's not clear to me what the difference is.[00:10:33] Logan Kilpatrick: It's a great question.[00:10:35] I don't know if there's any, if we've made any like public statements about like what the difference will be. I think, I think the point is that the interface for the Chachi B API will be like conversational first, and that's not the case now. If you look at text da Vinci oh oh three, like you, you just put in any sort of prompt.[00:10:52] It's not really built from the ground up to like keep the context of a conversation and things like that. And so it's really. Put in some sort of prompt, get a response. It's not always designed to be in that sort of conversational manner, so it's not tuned in that way. I think that's the biggest difference.[00:11:05] I think, again, the point that Sam made in a, a strictly the strictly VC talk mm-hmm. , which was incredible and I, I think that that talk got me excited and my, which, which part? The whole thing. And I think, I haven't been at open AI that long, so like I didn't have like a s I obviously knew who Sam was and had seen a bunch of stuff, but like obviously before, a lot of the present craziness with Elon Musk, like I used to think Elon Musk seemed like a really great guy and he was solving all these really important problems before all the stuff that happened.[00:11:33] That's a hot topic. Yeah. The stuff that happened now, yeah, now it's much more questionable and I regret having a Tesla, but I, I think Sam is actually. Similar in the sense that like he's solving and thinking about a lot of the same problems that, that Elon, that Elon is still today. But my take is that he seems like a much more aligned version of Elon.[00:11:52] Like he's, he's truly like, I, I really think he cares deeply about people and I think he cares about like solving the problems that people have and wants to enable people. And you can see this in the way that he's talked about how we deploy models at OpenAI. And I think you almost see Tesla in like the completely opposite end of the spectrum, where they're like, whoa, we.[00:12:11] Put these 5,000 pound machines out there. Yeah. And maybe they'll run somebody over, maybe they won't. But like it's all in the interest of like advancement and innovation. I think that's really on the opposite end of the spectrum of, of what open AI is doing, I think under Sam's leadership. So it's, it's interesting to see that, and I think Sam said[00:12:30] Alessio Fanelli: that people could have built Chen g p t with what you offered like six, nine months ago.[00:12:35] I[00:12:35] swyx: don't understand. Can we talk about this? Do you know what, you know what we're talking about, right? I do know what you're talking about. da Vinci oh three was not in the a p six months before ChatGPT. What was he talking about? Yeah.[00:12:45] Logan Kilpatrick: I think it's a little bit of a stretch, but I do think that it's, I, I think the underlying principle is that.[00:12:52] The way that it, it comes back to prompt engineering. The way that you could have engineered, like the, the prompts that you were put again to oh oh three or oh oh two. You would be able to basically get that sort of conversational interface and you can do that now. And, and I, you know, I've seen tutorials.[00:13:05] We have tutorials out. Yep. 
No, we, I mean, we, nineties, we have tutorials in the cookbook right now in on GitHub. We're like, you can do this same sort of thing. And you just, it's, it's all about how you, how you ask for responses and the way you format data and things like that. It. The, the models are currently only limited by what people are willing to ask them to do.[00:13:24] Like I really do think that, yeah, that you can do a lot of these things and you don't need the chat CBT API to, to build that conversational layer. That is actually where I[00:13:33] swyx: feel a little bit dumb because I feel like I don't, I'm not smart enough to think of new things to ask the models. I have to see an example and go, oh, you can do that.[00:13:43] All right, I'm gonna do that for now. You know, and, and that's why I think the, the cookbook is so important cuz it's kind of like a compendium of things we know about the model that you can ask it to do. I totally[00:13:52] Logan Kilpatrick: agree and I think huge shout out to the, the two folks who I work super closely with now on the cookbook, Ted and Boris, who have done a lot of that work and, and putting that out there and it's, yeah, you see number one trending repo on, on GitHub and it was super, like when my first couple of weeks at Open ai, super unknown, like really, we were only sort of directing our customers to that repo.[00:14:13] Not because we were trying to hide it or anything, but just because. It was just the way that we were doing things and then all of a sudden it got picked up on GitHub trending and a bunch of tweets went viral, showing the repo. So now I think people are actually being able to leverage the tools that are in there.[00:14:26] And, and Ted's written a bunch of amazing tutorials, Boris, as well. So I think it's awesome that more people are seeing those. And from my perspective, it's how can we take those, make them more accessible, give them more visibility, put them into the documentation, and I don't think that that connection right now doesn't exist, which I'm, I'm hopeful we'll be able to bridge those two things.[00:14:44] swyx: Cookbook is kind of a different set of documentation than API docs, and I think there's, you know, sort of existing literature about how you document these things and guide developers the right way. What, what I, what I really like about the cookbook is that it actually cites academic research. So it's like a nice way to not read the paper, but just read the conclusions of the paper ,[00:15:03] Logan Kilpatrick: and, and I think that's, that's a shout out to Ted and Boris cuz I, I think they're, they're really smart in that way and they've done a great job of finding the balance and understanding like who's actually using these different tools.[00:15:13] So, . Yeah.[00:15:15] swyx: You give other people credit, but you should take credit for yourself. So I read your last week you launched some kind of documentation about rate limiting. Yeah. And one of my favorite things about reading that doc was seeing examples of, you know, you were, you're telling people to do exponential back off and, and retry, but you gave code examples with three popular libraries.[00:15:32] You didn't have to do that. You could have just told people, just figure it out. Right. But you like, I assume that was you. It wasn't.[00:15:38] Logan Kilpatrick: So I think that's the, that's, I mean, I'm, I'm helping sort of. 
I think there's a lot of great stuff that people have done in open ai, but it was, we have the challenge of like, how can we make that accessible, get it into the documentation and still have that high bar for what goes into the doc.[00:15:51] So my role as of recently has been like helping support the team, building that documentation first culture, and supporting like the other folks who actually are, who wrote that information. The information was actually already in. Help center but it out. Yeah, it wasn't in the docs and like wasn't really focused on, on developers in that sense.[00:16:10] So yeah. I can't take the, the credit for the rate limit stuff either. , no, this[00:16:13] swyx: is all, it's part of the A team, that team effort[00:16:16] On Prompt Engineering[00:16:16] Alessio Fanelli: I was reading on Twitter, I think somebody was saying in the future will be kind of like in the hair potter word. People have like the spell book, they pull it out, they do all the stuff in chat.[00:16:24] GP z. When you talk with customers, like are they excited about doing prompt engineering and kind of getting a starting point or do they, do they wish there was like a better interface? ?[00:16:34] Logan Kilpatrick: Yeah, that's a good question. I think prompt engineering is so much more of an art than a science right now. Like I think there are like really.[00:16:42] Systematic things that you can do and like different like approaches and designs that you can take, but really it's a lot of like, you kind of just have to try it and figure it out. And I actually think that this remains to be one of the challenges with large language models in general, and not just head open ai, but for everyone doing it is that it's really actually difficult to understand what are the capabilities of the model and how do I get it to do the things that I wanted to do.[00:17:05] And I think that's probably where a lot of folks need to do like academic research and companies need to invest in understanding the capabilities of these models and the limitations because it's really difficult to articulate the capabilities of a model without those types of things. So I'm hopeful that, and we're shipping hopefully some new updated prompt engineering stuff.[00:17:24] Cause I think the stuff we have on the website is old, and I think the cookbook actually has a little bit more up-to-date stuff. And so hopefully we'll ship some new prompt engineering stuff in the, in the short term. I think dispel some of the myths and rumors, but like I, it's gonna continue to be like a, a little bit of a pseudoscience, I would imagine.[00:17:41] And I also think that the whole prompt engineering being like a job in the future meme, I think is, I think it's slightly overblown. Like I think at, you see this now actually with like, there's tools that are showing up and I forgot what the, I just saw went on Twitter. The[00:17:57] swyx: next guest that we are having on this podcast, Lang.[00:17:59] Yeah. Yeah.[00:18:00] Logan Kilpatrick: Lang Chain and Harrison on, yeah, there's a bunch of repos too that like categorize and like collect all the best prompts that you can put into chat. For example, and like, that's like the people who are, I saw the advertisement for someone to be like a prompt engineer and it was like a $350,000 a year.[00:18:17] Mm-hmm. . Yeah, that was, that was philanthropic. Yeah, so it, it's just unclear to me like how, how sustainable stuff like that is. 
Cuz like, once you figure out the interesting prompts and like right now it's kind of like the, the Wild West, but like in a year you'll be able to sort of categorize all those and then people will be able to find all the good ones that are relevant for what they want to do.[00:18:35] And I think this goes back to like, having the examples is super important and I'm, I'm with you as well. Like every time I use Dall-E the little. While it's rendering the image, it gives you like a suggestion of like how you should ask for the art to be generated. Like do it in like a cyberpunk format. Do it in a pixel art format.[00:18:53] Et cetera, et cetera, and like, I really need that. I'm like, I would never come up with asking for those things had it not prompted me to like ask it that way. And now I always ask for pixel art stuff or cyberpunk stuff and it looks so cool. That's what I, I think,[00:19:06] swyx: is the innovation of ChatGPT as a format.[00:19:09] It reduces. The need for getting everything into your prompt in the first try. Mm-hmm. , it takes it from zero shot to a few shot. If, if, if that, if prompting as, as, as shots can be concerned.[00:19:21] Logan Kilpatrick: Yeah. , I think that's a great perspective and, and again, this goes back to the ux UI piece of it really being sort of the differentiating layer from some of the other stuff that was already out there.[00:19:31] Because you could kind of like do this before with oh oh three or something like that if you just made the right interface and like built some sort of like prompt retry interface. But I don't think people were really, were really doing that. And I actually think that you really need that right now. And this is the, again, going back to the difference between like how you can use generative models versus like large scale.[00:19:53] Computer vision systems for self-driving cars, like the, the answer doesn't actually need to be right all the time. That's the beauty of, of large language models. It can be wrong 50% of the time and like it doesn't really cost you anything to like regenerate a new response. And there's no like, critical safety issue with that, so you don't need those.[00:20:09] I, I keep seeing these tweets about like, you need those like 99.99% reliability and like the three nines or whatever it is. Mm-hmm. , but like you really don't need that because the cost of regenerating the prop is again, almost, almost. I think you tweeted a[00:20:23] Alessio Fanelli: couple weeks ago that the average person doesn't yet fully grasp how GBT is gonna impact human life in the next four, five years.[00:20:30] Usecases and LLM-Native Products[00:20:30] Alessio Fanelli: I think you had an example in education. Yeah. Maybe touch on some of these. Example of non-tech related use cases that are enabling, enabled by C G B[00:20:38] T.[00:20:39] Logan Kilpatrick: I'm so excited and, and there's a bunch of other like random threads that come to my mind now. I saw a thread and, and our VP of product was, Peter, was, was involved in that thread as well, talking about like how the use of systems like ChatGPT will unlock like pretty almost low to zero cost access to like mental health services.[00:20:59] You know, you can imagine like the same use case for education, like really personalized tutors and like, it's so crazy to think about, but. 
The technology is not actually , like it's, it's truly like an engineering problem at this point of like somebody using one of these APIs to like build something like that and then hopefully the models get a little bit better and make it, make it better as well.[00:21:20] But like it, I have no doubt in my mind that three years from now that technology will exist for every single student in the world to like have that personalized education experience, have a pr, have a chat based experience where like they'll be able. Ask questions and then the curriculum will just evolve and be constructed for them in a way that keeps, I think the cool part is in a way that keeps them engaged, like it doesn't have to be sort of like the same delivery of curriculum that you've always seen, and this now supplements.[00:21:49] The sort of traditional education experience in the sense of, you know, you don't need teachers to do all of this work. They can really sort of do the thing that they're amazing at and not spend time like grading assignments and all that type of stuff. Like, I really do think that all those could be part of the, the system.[00:22:04] And same thing, I don't know if you all saw the the do not pay, uh, lawyer situation, say, I just saw that Twitter thread, I think yesterday around they were going to use ChatGPT in the courtroom and basically I think it was. California Bar or the Bar Institute said that they were gonna send this guy to prison if he brought, if he put AirPods in and started reading what ChatGPT was saying to him.[00:22:26] Yeah.[00:22:26] swyx: To give people the context, I think, like Josh Browder, the CEO of Do Not Pay, was like, we will pay you money to put this AirPod into your ear and only say what we tell you to say fr from the large language model. And of course the judge was gonna throw that out. I mean, I, I don't see how. You could allow that in your court,[00:22:42] Logan Kilpatrick: Yeah, but I, I really do think that, like, the, the reality is, is that like, again, it's the same situation where the legal spaces even more so than education and, and mental health services, is like not an accessible space. Like every, especially with how like overly legalized the United States is, it's impossible to get representation from a lawyer, especially if you're low income or some of those things.[00:23:04] So I'm, I'm optimistic. Those types of services will exist in the future. And you'll be able to like actually have a, a quality defense representative or just like some sort of legal counsel. Yeah. Like just answer these questions, what should I do in this situation? Yeah. And I like, I have like some legal training and I still have those same questions.[00:23:22] Like I don't know what I would do in that situation. I would have to go and get a lawyer and figure that out. And it's, . It's tough. So I'm excited about that as well. Yeah.[00:23:29] Alessio Fanelli: And when you think about all these vertical use cases, do you see the existing products implementing language models in what they have?[00:23:35] Or do you think we're just gonna see L L M native products kind of come to market and build brand[00:23:40] Logan Kilpatrick: new experiences? I think there'll be a lot of people who build the L l M first experience, and I think that. At least in the short term, those are the folks who will have the advantage. 
I do think that like the medium to long term is again, thinking about like what is your moat for and like again, and everyone has access to, you know, ChatGPT and to the different models that we have available.[00:24:05] So how can you build a differentiated business? And I think a lot of it actually will come down to, and this is just the true and the machine learning world in general, but having. Unique access to data. So I think if you're some company that has some really, really great data about the legal space or about the education space, you can use that and be better than your competition by fine tuning these models or building your own specific LLMs.[00:24:28] So it'll, it'll be interesting to see how that plays out, but I do think that. from a product experience, it's gonna be better in the short term for people who build the, the generative AI first experience versus people who are sort of bolting it onto their mm-hmm. existing product, which is why, like, again, the, the Google situation, like they can't just put in like the prompt into like right below the search bar.[00:24:50] Like, it just, it would be a weird experience and, and they have to sort of defend that experience that they have. So it, it'll be interesting to see what happens. Yeah. Perplexity[00:24:58] swyx: is, is kind of doing that. So you're saying perplexity will go Google ?[00:25:04] Logan Kilpatrick: I, I think that perplexity has a, has a chance in the short term to actually get more people to try the product because it's, it's something different I think, whether they can, I haven't actually used, so I can't comment on like that experience, but like I think the long term is like, How can they continue to differentiate?[00:25:21] And, and that's really the focus for like, if you're somebody building on these models, like you have to be, your first thought should be, how do I build a differentiated business? And if you can't come up with 10 reasons that you can build a differentiated business, you're probably not gonna succeed in, in building something that that stands the test of time.[00:25:37] Yeah.[00:25:37] Risks and benefits of building on OpenAI[00:25:37] swyx: I think what's. As a potential founder or something myself, like what's scary about that is I would be building on top of open ai. I would be sending all my stuff to you for fine tuning and embedding and what have you. By the way, fine tuning, embedding is their, is there a third one? Those are the main two that I know of.[00:25:55] Okay. And yeah, that's the risk. I would be a open AI API reseller.[00:26:00] Logan Kilpatrick: Yeah. And, and again, this, this comes back down to like having a clear sense of like how what you're building is different. Like the people who are just open AI API resellers, like, you're not gonna, you're not gonna have a successful business doing that because everybody has access to the Yeah.[00:26:15] Jasper's pretty great. Yeah, Jasper's pretty great because I, I think they've done a, they've, they've been smart about how they've positioned the product and I was actually a, a Jasper customer before I joined OpenAI and was using it to do a bunch of stuff. because the interface was simple because they had all the sort of customized, like if you want for like a response for this sort of thing, they'd, they'd pre-done that prompt engineering work for us.[00:26:39] I mean, you could really just like put in some exactly what you wanted and then it would make that Amazon product description or whatever it is. 
So I think like that. The interface is the, the differentiator for, for Jasper. And again, whether that send test time, hopefully, cuz I know they've raised a bunch of money and have a bunch of employees, so I'm, I'm optimistic for them.[00:26:58] I think that there's enough room as well for a lot of these companies to succeed. Like it's not gonna, the space is gonna get so big so quickly that like, Jasper will be able to have a super successful business. And I think they are. I just saw some, some tweets from the CEO the other day that I, I think they're doing, I think they're doing well.[00:27:13] Alessio Fanelli: So I'm the founder of A L L M native. I log into open ai, there's 6 million things that I can do. I'm on the playground. There's a lot of different models. How should people think about exploring the surface area? You know, where should they start? Kind of like hugging the go deeper into certain areas.[00:27:30] Logan Kilpatrick: I think six months ago, I think it would've been a much different conversation because people hadn't experienced ChatGPT before.[00:27:38] Now that people have experienced ChatGPT, I think there's a lot more. Technical things that you should start looking into and, and thinking about like the differentiators that you can bring. I still think that the playground that we have today is incredible cause it does sort of similar to what Jasper does, which is like we have these very focused like, you know, put in a topic and we'll generate you a summary, but in the context of like explaining something to a second grader.[00:28:03] So I think all of those things like give a sense, but we only have like 30 on the website or something like that. So really doing a lot of exploration around. What is out there? What are the different prompts that you can use? What are the different things that you can build on? And I'm super bullish on embeddings, like embed everything and that's how you can build cool stuff.[00:28:20] And I keep seeing all these Boris who, who I talked about before, who did a bunch of the cookbook stuff, tweeted the other day that his like back of the hand, back of the napkin math, was that 50 million bucks you can embed the whole internet. I'm like, Some companies gonna spend the 50 million and embed the whole internet and like, we're gonna find out what that product looks like.[00:28:40] But like, there's so many cool things that you could do if you did have the whole internet embedded. Yeah, and I, I mean, I wouldn't be surprised if Google did that cuz 50 million is a drop in the bucket and they already have the whole internet, so why not embed it?[00:28:52] swyx: Can can I ask a follow up question on that?[00:28:54] Cuz I am just learning about embeddings myself. What makes open eyes embeddings different from other embeddings? If, if there's like, It's okay if you don't have the, the numbers at hand, but I'm just like, why should I use open AI emitting versus others? I[00:29:06] Logan Kilpatrick: don't understand. Yeah, that's a really good question.[00:29:08] So I'm still ramping up on my understanding of embeddings as well. So the two things that come to my mind, one, going back to the 50 million to embed the whole internet example, it's actually just super cheap. I, I don't know the comparisons of like other prices, but at least from what I've seen people talking about on Twitter, like the embeddings that that we have in the API is just like significantly cheaper than a lot of other c.[00:29:30] Embeddings. 
Also the accuracy of some of the benchmarks that are like, Sort of academic benchmarks to use in embeddings. I know at least I was just looking back through the blog post from when we announced the new text embedding model, which is what Powers embeddings and it's, yeah, the, on those metrics, our API is just better.[00:29:50] So those are the those. I'll go read it up. Yeah, those are the two things. It's a good. It's a good blog post to read. I think the most recent one that came out, but, and also the original one from when we first announced the Embeddings api, I think also was a, it had, that one has a little bit more like context around if you're trying to wrap your head around embeddings, how they work.[00:30:06] That one has the context, the new one just has like the fancy new stuff and the metrics and all that kind of stuff.[00:30:11] swyx: I would shout a hugging face for having really good content around what these things like foundational concepts are. Because I was familiar with, so, you know, in Python you have like text tove, my first embedding as as a, as someone getting into nlp.[00:30:24] But then developing the concept of sentence embeddings is, is as opposed to words I think is, is super important. But yeah, it's an interesting form of lock in as a business because yes, I'm gonna embed all my source data, but then every inference needs an embedding as. . And I think that is a risk to some people, because I've seen some builders should try and build on open ai, call that out as, as a cost, as as like, you know, it starts to add a cost to every single query that you, that you[00:30:48] Logan Kilpatrick: make.[00:30:49] Yeah. It'll be interesting to see how it all plays out, but like, my hope is that that cost isn't the barrier for people to build because it's, it's really not like the cost for doing the incremental like prompts and having them embedded is, is. Cent less than cents, but[00:31:06] swyx: cost I, I mean money and also latency.[00:31:08] Yeah. Which is you're calling the different api. Yeah. Anyway, we don't have to get into that.[00:31:13] Alessio Fanelli: No, but I think embeds are a good example. You had, I think, 17 versions of your first generation, what api? Yeah. And then you released the second generation. It's much cheaper, much better. I think like the word on the street is like when GPT4 comes out, everything else is like trash that came out before it.[00:31:29] It's got[00:31:30] Logan Kilpatrick: 100 trillion billion. Exactly. Parameters you don't understand. I think Sam has already confirmed that those are, those are not true . The graphics are not real. Whatever you're seeing on Twitter about GPT4, you're, I think the direct quote was, you're begging to be disappointed by continuing to, to put that hype out.[00:31:47] So[00:31:48] Alessio Fanelli: if you're a developer building on these, What's kind of the upgrade path? You know, I've been building on Model X, now this new model comes out. What should I do to be ready to move on?[00:31:58] Logan Kilpatrick: Yeah. I think all of these types of models folks have to think about, like there will be trade offs and they'll also be.[00:32:05] Breaking changes like any other sort of software improvement, like things like the, the prompts that you were previously expecting might not be the prompts that you're seeing now. 
And you can actually, you, you see this in the case of the embeddings example that you just gave when we released Tex embeddings, ADA oh oh two, ada, ada, whichever it is oh oh two, and it's sort of replaced the previous.[00:32:26] 16 first generation models, people went through this exact experience where like, okay, I need to test out this new thing, see how it works in my environment. And I think that the really fascinating thing is that there aren't, like the tools around doing this type of comparison don't exist yet today. Like if you're some company that's building on lms, you sort of just have to figure it out yourself of like, is this better in my use case?[00:32:49] Is this not better? In my use case, it's, it's really difficult to tell because the like, Possibilities using generative models are endless. So I think folks really need to focus on, again, that goes back to how to build a differentiated business. And I think it's understanding like what is the way that people are using your product and how can you sort of automate that in as much way and codify that in a way that makes it clear when these different models come up, whether it's open AI or other companies.[00:33:15] Like what is the actual difference between these and which is better for my use case because the academic be. It'll be saturated and people won't be able to use them as a point of comparison in the future. So it'll be important to think about. For your specific use case, how does it differentiate?[00:33:30] swyx: I was thinking about the value of frameworks or like Lang Chain and Dust and what have you out there.[00:33:36] I feel like there is some value to building those frameworks on top of Open Eyes, APIs. It kind of is building what's missing, essentially what, what you guys don't have. But it's kind of important in the software engineering sense, like you have this. Unpredictable, highly volatile thing, and you kind of need to build a stable foundation on top of it to make it more predictable, to build real software on top of it.[00:33:59] That's a super interesting kind of engineering problem. .[00:34:03] Logan Kilpatrick: Yeah, it, it is interesting. It's also the, the added layer of this is that the large language models. Are inherently not deterministic. So I just, we just shipped a small documentation update today, which, which calls this out. And you think about APIs as like a traditional developer experience.[00:34:20] I send some response. If the response is the same, I should get the same thing back every time. Unless like the data's updating and like a, from like a time perspective. But that's not the, that's not the case with the large language models, even with temperature zero. Mm-hmm. even with temperature zero. Yep.[00:34:34] And that's, Counterintuitive part, and I think someone was trying to explain to me that it has to do with like Nvidia. Yeah. Floating points. Yes. GPU stuff. and like apparently the GPUs are just inherently non-deterministic. So like, yes, there's nothing we can do unless this high Torch[00:34:48] swyx: relies on this as well.[00:34:49] If you want to. Fix this. You're gonna have to tear it all down. ,[00:34:53] Logan Kilpatrick: maybe Nvidia, we'll fix it. I, I don't know, but I, I think it's a, it's a very like, unintuitive thing and I don't think that developers like really get that until it happens to you. 
And then you're sort of scratching your head and you're like, why is this happening?[00:35:05] And then you have to look it up and then you see all the NVIDIA stuff. Or hopefully our documentation makes it more clear now. But hopefully people, I also think that's, it's kinda the cool part as well. I don't know, it's like, You're not gonna get the same stuff even if you try to.[00:35:17] swyx: It's a little spark of originality in there.[00:35:19] Yeah, yeah, yeah, yeah. The random seed .[00:35:22] OpenAI Codex[00:35:22] swyx: Should we ask about[00:35:23] Logan Kilpatrick: Codex?[00:35:23] Alessio Fanelli: Yeah. I mean, I love Codex. I use it every day. I think like one thing, sometimes the code is like it, it's kinda like the ChatGPT hallucination. Like one time I asked it to write up. A Twitter function, they will pull the bayou of this thing and it wrote the whole thing and then the endpoint didn't exist once I went to the Twitter, Twitter docs, and I think like one, I, I think there was one research that said a lot of people using Co Palace, sometimes they just auto complete code that is wrong and then they commit it and it's a, it's a big[00:35:51] Logan Kilpatrick: thing.[00:35:51] swyx: Do you secure code as well? Yeah, yeah, yeah, yeah. I saw that study.[00:35:54] Logan Kilpatrick: How do[00:35:54] Alessio Fanelli: you kind of see. Use case evolving. You know, you think, like, you obviously have a very strong partnership with, with Microsoft. Like do you think Codex and VS code will just keep improving there? Do you think there's kind of like a. A whole better layer on top of it, which is from the scale AI hackathon where the, the project that one was basically telling the l l m, you're not the back end of a product[00:36:16] And they didn't even have to write the code and it's like, it just understood. Yeah. How do you see the engineer, I, I think Sean, you said copilot is everybody gets their own junior engineer to like write some of the code and then you fix it For me, a lot of it is the junior engineer gets a senior engineer to actually help them write better code.[00:36:32] How do you see that tension working between the model and the. It'll[00:36:36] Logan Kilpatrick: be really interesting to see if there's other, if there's other interfaces to this. And I think I've actually seen a lot of people asking, like, it'd be really great if I had ChatGPT and VS code because in, in some sense, like it can, it's just a better, it's a better interface in a lot of ways to like the, the auto complete version cuz you can reprompt and do, and I know Via, I know co-pilot actually has that, where you can like click and then give it, it'll like pop up like 10 suggested.[00:36:59] Different options instead of brushes. Yeah, copilot labs, yeah. Instead of the one that it's providing. And I really like that interface, but again, this goes back to. I, I do inherently think it'll get better. I think it'll be able to do a lot, a lot more of the stuff as the models get bigger, as they have longer context as they, there's a lot of really cool things that will end up coming out and yeah, I don't think it's actually very far away from being like, much, much better.[00:37:24] It'll go from the junior engineer to like the, the principal engineer probably pretty quickly. Like I, I don't think the gap is, is really that large between where things are right now. I think like getting it to the point. 
60% of the stuff really well to get it to do like 90% of the stuff really well is like that's within reach in the next, in the next couple of years.[00:37:45] So I'll be really excited to see, and hopefully again, this goes back to like engineers and developers and people who aren't thinking about how to integrate. These tools, whether it's ChatGPT or co-pilot or something else into their workflows to be more efficient. Those are the people who I think will end up getting disrupted by these tools.[00:38:02] So figuring out how to make yourself more valuable than you are today using these tools, I think will be super important for people. Yeah.[00:38:09] Alessio Fanelli: Actually use ChatGPT to debug, like a react hook the other day. And then I posted in our disc and I was like, Hey guys, like look, look at this thing. It really helped me solve this.[00:38:18] And they. That's like the ugliest code I've ever seen. It's like, why are you doing that now? It's like, I don't know. I'm just trying to get[00:38:24] Logan Kilpatrick: this thing to work and I don't know, react. So I'm like, that's the perfect, exactly, that's the perfect solution. I, I did this the other day where I was looking at React code and like I have very briefly seen React and run it like one time and I was like, explain how this is working.[00:38:38] So, and like change it in this way that I want to, and like it was able to do that flawlessly and then I just popped it in. It worked exactly like I. I'll give a[00:38:45] swyx: little bit more context cause I was, I was the guy giving you feedback on your code and I think this is a illustrative of how large language models can sort of be more confident than they should be because you asked it a question which is very specific on how to improve your code or fix your code.[00:39:00] Whereas a real engineer would've said, we've looked at your code and go, why are you doing it at at all? Right? So there's a sort of sycophantic property of martial language. Accepts the basis of your question, whereas a real human might question your question. Mm-hmm. , and it was just not able to do that. I mean, I, I don't see how he could do that.[00:39:17] Logan Kilpatrick: Yeah. It's, it's interesting. I, I saw another example of this the other day as well with some chatty b t prompt and I, I agree. It'll be interesting to see if, and again, I think not to, not to go back to Sam's, to Sam's talk again, but like, he, he talked real about this, and I think this makes a ton of sense, which is like you should be able to have, and this isn't something that that exists right now, but you should be able to have the model.[00:39:39] Tuned in the way that you wanna interact with. Like if you want a model that sort of questions what you're asking it to do, like you should be able to have that. And I actually don't think that that's as far away as like some of the other stuff. Um, It, it's a very possible engineering problem to like have the, to tune the models in that way and, and ask clarifying questions, which is even something that it doesn't do right now.[00:39:59] It'll either give you the response or it won't give you the response, but it'll never say like, Hey, what do you mean by this? Which is super interesting cuz that's like we spend as humans, like 50% of our conversational time being like, what do you mean by that? Like, can you explain more? Can you say it in a different way?[00:40:14] And it's, it's fascinating that the model doesn't do that right now. 
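The "model that questions your question" idea discussed above is not a built-in setting, but you can approximate it today by instructing the model through a system prompt. A minimal sketch, assuming the OpenAI Python chat client; the prompt wording and model name are placeholders, not something described in the episode.

```python
# Sketch of nudging a chat model to ask a clarifying question instead of
# answering immediately. The system prompt wording and model name are made up;
# there is no official "ask before answering" switch.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a careful assistant. If the user's request is ambiguous or is "
    "missing information you need, do not answer yet: ask one short "
    "clarifying question instead."
)

def respond(user_message):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
        temperature=0,          # still not perfectly deterministic, as discussed above
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

# An underspecified request like this should come back as a question, not code.
print(respond("Fix my React hook."))
```

Whether the model actually pushes back still depends on the model and the wording, which is exactly the kind of per-use-case evaluation the conversation keeps returning to.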
It's, it's interesting.[00:40:20] swyx: I have written a piece on sort of what AGI hard might be, which is the term that is being thrown around as like a layer of boundary for what is, what requires an A real AGI to do and what, where you might sort of asymptotically approach.[00:40:33] So, What people talk about is essentially a theory of mind, developing a con conception of who I'm talking to and persisting that across sessions, which essentially ChatGPT or you know, any, any interface that you build on top of GPT3 right now would not be able to do. Right? Like, you're not persisting you, you are persisting that history, but you don't, you're not building up a conception of what you know and what.[00:40:54] I should fill in the blanks for you or where I should question you. And I think that's like the hard thing to understand, which is what will it take to get there? Because I think that to me is the, going back to your education thing, that is the biggest barrier, which is I, the language model doesn't have a memory or understanding of what I know.[00:41:11] and like, it's, it's too much to tell them what I don't know. Mm-hmm. , there's more that I don't know than I, than I do know . I think the cool[00:41:16] Logan Kilpatrick: part will be when, when you're able to, like, imagine you could upload all of the, the stuff that you've ever done, all the texts, the work that you've ever done before, and.[00:41:27] The model can start to understand, hey, what are the, what are the conceptual gaps that this person has based on what you've said, based on what you've done? I think that would be really interesting. Like if you can, like I have good notes on my phone and I can still go back to see all of the calculus classes that I took and I could put in all my calculus notebooks and all the assignments and stuff that I did in, in undergrad and grad school, and.[00:41:50] basically be like, Hey, here are the gaps in your understanding of calculus. Go and do this right now. And I think that that's in the education space. That's exactly what will end up happening. You'll be able to put in all this, all the work that you've done. It can understand those ask and then come up with custom made questions and prompts and be like, Hey, how, you know, explain this concept to me and if it.[00:42:09] If you can't do that, then it can sort of put that into your curriculum. I think like Khan Academy as an example, already does some of this, like personalized learning. You like take assessments at the beginning of every Khan Academy model module, and it'll basically only have you watch the videos and do the assignments for the things that like you didn't test well into.[00:42:27] So that's, it's, it's sort of close to already being there in some sense, but it doesn't have the, the language model interface on top of it before we[00:42:34] swyx: get into our lightning round, which is like, Quick response questions. Was there any other topics that you think you wanted to cover? We didn't touch on, whisper.[00:42:40] We didn't touch on Apple. Anything you wanted to[00:42:42] Logan Kilpatrick: talk?[00:42:43] Apple's Neural Engine[00:42:43] Logan Kilpatrick: Yeah, I think the question around Apple stuff and, and the neural engine, I think will be really interesting to see how it all plays out. I think, I don't know if you wanna like ask just to give the context around the neural engine Apple question. 
Well, well, the[00:42:54] swyx: only thing I know it's because I've seen Apple keynotes.[00:42:57] Everyone has, you know, I, I have a m M one MacBook Cure. They have some kind of neuro chip. , but like, I don't see it in my day-to-day life, so when is this gonna affect me, essentially? And you worked at Apple, so I I was just gonna throw the question over to you, like, what should we[00:43:11] Logan Kilpatrick: expect out of this? Yeah.[00:43:12] The, the problem that I've seen so far with the neural engine and all the, the Mac, and it's also in the phones as well, is that the actual like, API to sort of talk to the neural engine isn't something that's like a common you like, I'm pretty sure it's either not exposed at all, like it only like Apple basically decides in the software layer Yeah.[00:43:34] When, when it should kick in and when it should be used, which I think doesn't really like help developers and it doesn't, that's why no one is using it. I saw a bunch of, and of course I don't have any good insight on this, but I saw a bunch of rumors that we're talking about, like a lot of. Main use cases for the neural engine stuff.[00:43:50] It's, it's basically just in like phantom mode. Now, I'm sure it's doing some processing, but like the main use cases will be a lot of the ar vr stuff that ends up coming out and like when it gets much heavier processing on like. Graphic stuff and doing all that computation, that's where it'll be. It'll be super important.[00:44:06] And they've basically been able to trial this for the last, like six years and have it part of everything and make sure that they can do it cheaply in a cost effective way. And so it'll be cool to see when that I'm, I hope it comes out. That'll be awesome.[00:44:17] swyx: Classic Apple, right? They, they're not gonna be first, but when they do it, they'll make a lot of noise about it.[00:44:21] Yeah. . It'll be[00:44:22] Logan Kilpatrick: awesome. Sure.[00:44:22] Lightning Round[00:44:22] Logan Kilpatrick: So, so are we going to light. Let's[00:44:24] Alessio Fanelli: do it. All right. Favorite AI products not[00:44:28] Logan Kilpatrick: open AI. Build . I think synthesis. Is synthesis.io is the, yeah, you can basically put in like a text prompt and they have like a human avatar that will like speak and you can basically make content in like educational videos.[00:44:44] And I think that's so cool because maybe as people who are making content, like it's, it's super hard to like record video. It just takes a long time. Like you have to edit all the stuff, make sure you sound right, and then when you edit yourself talking it's super weird cuz your mouth is there and things.[00:44:57] So having that and just being able to ChatGPT A script. Put it in. Hopefully I saw another demo of like somebody generating like slides automatically using some open AI stuff. Like I think that type of stuff. Chat, BCG, ,[00:45:10] swyx: a fantastic name, best name of all time .[00:45:14] Logan Kilpatrick: I think that'll be cool. So I'm super excited,[00:45:16] swyx: but Okay.[00:45:16] Well, so just a follow up question on, on that, because we're both in that sort of Devrel business, would you put AI Logan on your video, on your videos and a hundred[00:45:23] Logan Kilpatrick: percent, explain that . A hundred percent. I would, because again, if it reduces the time for me, like. 
I am already busy doing a bunch of other stuff,[00:45:31] And if I could, if I could take, like, I think the real use case is like I've made, and this is in the sense of like creators wanting to be on every platform. If I could take, you know, the blog posts that I wrote and then have AI break it up into a bunch of things, have ai Logan. Make a TikTok, make a YouTube video.[00:45:48] I cannot wait for that. That's gonna be so nice. And I think there's probably companies who are already thinking about doing that. I'm just[00:45:53] swyx: worried cuz like people have this uncanny valley reaction to like, oh, you didn't tell me what I just watched was a AI generated thing. I hate you. Now you know there, there's a little bit of ethics there and I'm at the disclaimer,[00:46:04] Logan Kilpatrick: at the top.[00:46:04] Navigating. Yeah. I also think people will, people will build brands where like their whole thing is like AI content. I really do think there are AI influencers out there. Like[00:46:12] swyx: there are entire Instagram, like million plus follower accounts who don't exist.[00:46:16] Logan Kilpatrick: I, I've seen that with the, the woman who's a Twitch streamer who like has some, like, she's using like some, I don't know, that technology from like movies where you're like wearing like a mask and it like changes your facial appearance and all that stuff.[00:46:27] So I think there's, there's people who find their niche plus it'll become more common. So, cool. My[00:46:32] swyx: question would be, favorite AI people in communities that you wanna shout up?[00:46:37] Logan Kilpatrick: I think there's a bunch of people in the ML ops community where like that seemed to have been like the most exciting. There was a lot of innovation, a lot of cool things happening in the ML op space, and then all the generative AI stuff happened and then all the ML Ops two people got overlooked.[00:46:51] They're like, what's going on here? So hopefully I still think that ML ops and things like that are gonna be super important for like getting machine learning to be where it needs to be for us to. AGI and all that stuff. So a year from[00:47:05] Alessio Fanelli: now, what will people be the most[00:47:06] Logan Kilpatrick: surprised by? N. I think the AI is gonna get very, very personalized very quickly, and I don't think that people have that feeling yet with chat, BT, but I, I think that that's gonna, that's gonna happen and they'll be surprised in like the, the amount of surface areas in which AI is present.[00:47:23] Like right now it's like, it's really exciting cuz Chat BT is like the one place that you can sort of get that cool experience. But I think that, The people at Facebook aren't dumb. The people at Google aren't dumb. Like they're gonna have, they're gonna have those experiences in a lot of different places and I think that'll be super fascinating to see.[00:47:40] swyx: This is for the builders out there. What's an AI thing you would pay for if someone built it with their personal[00:47:45] Logan Kilpatrick: work? I think more stuff around like transfer learning for, like making transfer, learning easier. Like I think that's truly the way to. Build really cool things is transfer learning, fine tuning, and I, I don't think that there's enough.[00:48:04] Jeremy Howard who created Fasted AI talks a lot about this. 
I mean, it's something that really resonates with me and, and for context, like at Apple, all the machine learning stuff that we did was transfer learning because it was so powerful. And I think people have this perception that they need to.[00:48:18] Build things from scratch and that's not the case. And I think especially as large language models become more accessible, people need to build layers and products on top of this to make transfer learning more accessible to more people. So hopefully somebody builds something like that and we can all train our own models.[00:48:33] I think that's how you get like that personalized AI experiences you put in your stuff. Make transfer learning easy. Everyone wins. Just just to vector in[00:48:40] swyx: a little bit on this. So in the stable diffusion community, there's a lot of practice of like, I'll fine tune a custom dis of stable diffusion and share it.[00:48:48] And then there also, there's also this concept of, well, first it was textual inversion and then dream booth where you essentially train a concept that you can sort of add on. Is that what you're thinking about when you talk about transfer learning or is that something[00:48:59] Logan Kilpatrick: completely. I feel like I'm not as in tune with the generative like image model community as I probably should be.[00:49:07] I, I think that that makes a lot of sense. I think there'll be like whole ecosystems and marketplaces that are sort of built around exactly what you just said, where you can sort of fine tune some of these models in like very specific ways and you can use other people's fine tunes. That'll be interesting to see.[00:49:21] But, c.ai is,[00:49:23] swyx: what's it called? C C I V I Ts. Yeah. It's where people share their stable diffusion checkpoints in concepts and yeah, it's[00:49:30] Logan Kilpatrick: pretty nice. Do you buy them or is it just like free? Like open. Open source? It's, yeah. Cool. Even better.[00:49:34] swyx: I think people might want to sell them. There's a, there's a prompt marketplace.[00:49:38] Prompt base, yeah. Yeah. People hate it. Yeah. They're like, this should be free. It's just text. Come on, .[00:49:45] Alessio Fanelli: Hey, it's knowledge. All right. Last question. If there's one thing you want everyone to take away about ai, what would.[00:49:51] Logan Kilpatrick: I think the AI revolution is gonna, you know, it's been this like story that people have been talking about for the longest time, and I don't think that it's happened.[00:50:01] It was really like, oh, AI's gonna take your job, AI's gonna take your job, et cetera, et cetera. And I think people have sort of like laughed that off for a really long time, which was fair because it wasn't happening. And I think now, Things are going to accelerate very, very quickly. And if you don't have your eyes wide open about what's happening, like there's a good chance that something that you might get left behind.[00:50:21] So I'm, I'm really thinking deeply these days about like how that is going to impact a lot of people. And I, I'm hopeful that the more widespread this technology becomes, the more mainstream this technology becomes, the more people will benefit from it and hopefully not be affected in that, in that negative way.[00:50:35] So use these tools, put them into your workflow, and, and hopefully that will, and that will acceler. Well,[00:50:41] swyx: we're super happy that you're at OpenAI getting this message out there, and I'm sure we'll see a l

Python Bytes
#324 JSON in My DB?

Python Bytes

Play Episode Listen Later Feb 21, 2023 44:53


Watch on YouTube About the show Sponsored by Compiler Podcast from Red Hat. Connect with the hosts Michael: @mkennedy@fosstodon.org Brian: @brianokken@fosstodon.org Show: @pythonbytes@fosstodon.org Special guest, Erin Mullaney: @erinrachel@fosstodon.org Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too. Brian #1: Use TOML for .env files? Brett Cannon .env files are used to store default settings that can be overridden by environmental variables. Possibly brought on by twelve-factor app design. Supported by python-dotenv, which is also used by pydantic, pipenv, and others. One issue is that it's not a defined standard. from python-dotenv docs “The format is not formally specified and still improves over time. That being said, .env files should mostly look like Bash files.” Adafruit decided that an upcoming CircuitPython will use TOML as the format for settings.toml files, which are to be used mostly how .env files are being used. Brett notices this may fix things for Python for VS Code, and other people as well. So… Is this a good idea? I think so. Michael #2: Pydantic gets serious funding via Mark Little (was on episode 285) Sequoia backs open source data-validation framework Pydantic to commercialize with cloud services. Pydantic Services Inc. emerges from stealth today with $4.7 million in seed funding. Pydantic's new commercial entity will incorporate a swath of new tools and services that are both “powered-by and inspired-by the Pydantic library” Pydantic will start with an initial team of six, with the first three engineers based in Montana, Chicago and Berlin. “With $4.7 million in the bank, Colvin said that they're continuing to rewrite parts of Pydantic in Rust, with a view toward making it more efficient via a ten-fold performance improvement.” Erin #3: JSON Fields for performance (Denormalization) David Stokes Using JSON fields when you design your databases is a good way to improve database query performance. Brian #4: f-strings with pandas and Jupyter keyboard shortcuts Kevin Markham After a couple year break from blogging, friend of the show Kevin Markham has a couple great, short, useful posts. How to use Python's f-strings with pandas My favorite bit is the part about using f-strings for dictionary keys Fly through Jupyter with keyboard shortcuts
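If you want to play with the settings.toml idea from the first item, a minimal sketch looks something like this. It uses the standard-library tomllib (Python 3.11+) and lets environment variables override the file defaults, which is roughly how .env files get used; the file name and keys are made up, and this is not how python-dotenv or CircuitPython actually implement it.

```python
# Sketch: read defaults from settings.toml with the standard-library tomllib
# (Python 3.11+, use the "tomli" package on older versions), then let real
# environment variables override them. File name and keys are illustrative.
import os
import tomllib

def load_settings(path="settings.toml"):
    with open(path, "rb") as f:       # tomllib requires a binary file handle
        defaults = tomllib.load(f)
    # Environment variables win over file defaults, matching .env conventions.
    # Note: overrides arrive as strings; a real loader would coerce types.
    return {key: os.environ.get(key.upper(), value) for key, value in defaults.items()}

# settings.toml might contain:
#   api_url = "https://example.com"
#   debug = false
print(load_settings())
```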

Python Podcast
Jahresrückblick 2022 und ungelesene MacBooks

Python Podcast

Play Episode Listen Later Dec 31, 2022 129:22


Jahresrückblick 2022 und ungelesene MacBooks December 31, 2022, Jochen Johannes, Dominik and Jochen talk about the past year and their own projects. This time, surprisingly, everyone was fully on site again. This is also the first episode published via the new wagtail-based django-cast. We'll see whether that goes off without any major mishaps.

Django Chat
DjangoCon US 2022 Recap

Django Chat

Play Episode Listen Later Nov 9, 2022 50:33


DjangoCon US 2022 DjangoCon Europe 2022 Videos About my proposal for the Django Core Sprints Personal Thoughts on the Django Software Foundation Board's Future Benchmarks for SQLite in Django Support the Show This podcast does not have any ads or sponsors. To support the show, please consider purchasing a book, signing up for Button, or reading the Django News newsletter.

Octobot Tech Talks
E20 - DjangoCon 2022

Octobot Tech Talks

Play Episode Listen Later Nov 1, 2022 29:14


In this episode we welcome developers Joaquin Scocozza, Felipe Lopez and Carmela Beiro, who were at DjangoCon US in San Diego, California last month. In our conversation they tell us what it was like to experience the conference not only as attendees but also as speakers, since all three gave talks about their experiences with Django. Keep exploring our DjangoCon experience on our social networks: - DjangoCon highlights on Instagram: https://www.instagram.com/octobotdev/ - Series of posts about DjangoCon on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:6988150126645030913 - Blog post: https://www.octobot.io/blog/octobot-djangocon-us/ Follow us on our social networks so you don't miss any of our news! @octobotdev on Twitter, Instagram and Dribbble, and @octobot on LinkedIn and YouTube.

The React Show
It's Not Your Fault You Don't Understand The Code

The React Show

Play Episode Listen Later Oct 28, 2022 46:52


If you or the previous programmer don't document what the code is intended to do, it's bad code and it won't be maintainable in the long term. High quality, maintainable code must include high quality code comments. In this episode we look at why that is and how to do it. We also investigate whether you can store React state outside of hooks or React classes. Links Twitter - The React Show Edited by: The Podcast Editor thereactshow.com Patreon “Moving From React to htmx” a talk by David Guillot at DjangoCon 2022 Literate Programming

Django Chat
DjangoCon Europe 2022 - Kojo Idrissa

Django Chat

Play Episode Listen Later Sep 28, 2022 71:49


@KojoIdrissa on Twitter Kojo Idrissa Kojo at RevSys Naomi Ceder PyCon US 2022 Keynote text DjangoCon US 2022

Python Podcast
DjangoCon Europe 2021

Python Podcast

Play Episode Listen Later Jun 27, 2021 94:35


Johannes and Jochen were at DjangoCon Europe 2021 and tell Dominik about it. For example, why it might not be such a good idea to have too much fun while programming, or which talks and workshops were particularly interesting, good, or simply surprising. Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de DjangoCon Europe 2021 DjangoCon Europe 2021 Talk: Programming for pleasure | What nobody tells you about documentation ATEM Mini Talk: Serving files with Django, django_fileresponse nginx X-Accel | ngx_http_auth_request CDN Django 3.1 Async | Django wird asynchron: Pythons Web-Framework erhält neue Funktion MinIO Jochen's Twitch Stream | Youtube Playlist Talk: Django Unstuck: Suggestions for common challenges in your projects | Video and material on Django Unstuck DjangoCon 2020 | How To Get On This Stage (And What To Do When You Get There) - Mark Smith gather.town Talk: Dynamic static sites with Django and Sphinx Django Chat Talk: Rewriting Django from (almost) scratch in 2021 Talk: KEYNOTE | We're all part of this: Jazzband 5 years later Github organization: jazzband kolo.app Htmx / intercooler.js Podcast Episode: HTMX - Clean, Dynamic HTML Pages Talk: Unlocking the full potential of PostgreSQL indexes in Django Talk: (A) SQL for Django Talk: Writing Safe Database Migrations Talk: Domain Driven Design with Django and GraphQL SOLID Hotwire Talk: Anvil: Full Stack Web with Nothing but Python Podcast Episode: Flask 2.0 gevent FastAPI Pyramid Picks Devdocs aiosql - Simple SQL in Python Tig: text-mode interface for Git lifetimes Public tag on konektom
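The "Serving files with Django" and nginx X-Accel links describe a pattern that is easy to sketch: Django does the permission check, then hands the actual file transfer to nginx via the X-Accel-Redirect header. A minimal, illustrative version; the /protected/ prefix and paths are assumptions that have to match your nginx configuration.

```python
# Sketch of the X-Accel pattern: Django authenticates the request, nginx
# streams the file. The /protected/ prefix must match an "internal" location
# in your nginx config, e.g.  location /protected/ { internal; alias /srv/media/; }
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def download(request, filename):
    response = HttpResponse()
    response["X-Accel-Redirect"] = f"/protected/{filename}"
    # Optional: drop Django's default content type so nginx can pick one.
    del response["Content-Type"]
    return response
```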

The Real Python Podcast
Organizing and Restructuring DjangoCon Europe 2021

The Real Python Podcast

Play Episode Listen Later May 7, 2021 53:39


Are you interested in learning more about Django? Would you like to meet other professionals and learn how they are using Django? DjangoCon Europe 2021 is virtual this year, and you can join in from anywhere in the world. This week on the show, we have Miguel Magalhães and David Vaz, two of the organizers of the conference.

Espacios Abiertos
Mentoreando a los mentores

Espacios Abiertos

Play Episode Listen Later Oct 5, 2020 39:14


When I met Ed Rivas, he was giving an introductory talk at DjangoCon called: "The guide to tech conferences for shy people". That's why he is the ideal person to talk about the topic of this episode, which is mentoring.

Python Podcast
Tests

Python Podcast

Play Episode Listen Later Aug 20, 2020 78:39


This time we're doing a test episode about tests :). We're out and about with recording equipment for the first time, because it simply got too hot at home. Today Ronny, Dominik and Jochen are on board, and we talk about testing in Python. It may be a bit Django-heavy, but many of the points should carry over to other projects as well. Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de Who and where Ambient Innovation PyCologne Meetup Django Meetup Köln Restaurant Spoerl Fabrik Zoom H6 HMC 660X Headset HA3D headphone amplifier News from the scene Django 3.1 Release Notes Django 3.1 Async Python 3.9 Release Candidate Django book: Two Scoops of Django 3.x Tests pytest Pythonic testing framework unittest built-in testing framework Finding slow tests: django-slowtests Coverage for branch coverage etc. xdist pytest plugin for distributed test execution Book by Adam Johnson: Speed Up Your Django Tests | His blog Pareto Distribution kcachegrind profiler Faster filesystem for tests: dj-inmemorystorage django q for asynchronous tasks DjangoCon 2019 talk: Maintaining a Django codebase after 10k commits freezegun time mocking unittest.mock from the standard library cypress end-to-end tests for Javascript jest unit tests for Javascript Public tag on konektom
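A tiny example tying a few of the linked tools together: pytest as the test runner, freezegun to pin "today", and unittest.mock to stand in for a collaborator. The make_invoice function is invented application code for the sake of the test.

```python
# Sketch combining pytest, freezegun, and unittest.mock. The make_invoice
# function is made up; only the testing tools are the point here.
from datetime import date
from unittest import mock

from freezegun import freeze_time

def make_invoice(send_mail):
    """Build an invoice dated today and email it via the given callable."""
    invoice = {"date": date.today(), "total": 42}
    send_mail(subject=f"Invoice {invoice['date']}", body=str(invoice))
    return invoice

@freeze_time("2020-08-20")
def test_make_invoice_uses_todays_date():
    fake_send_mail = mock.Mock()
    invoice = make_invoice(fake_send_mail)
    assert invoice["date"] == date(2020, 8, 20)
    fake_send_mail.assert_called_once()
```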

Mid Meet Py
Mid Meet Py - Ep.13 - Interview with Jason McDonald

Mid Meet Py

Play Episode Listen Later Jun 25, 2020 60:40


PyChat: Congratulations to new PSF board directors PyCon India - CfP open until the 14th August. Conf on 2nd & 3rd of October PyCon Australia is happening soon - CfP ends on July 12th DjangoCon two full days of talks, free & online Mid Meet - Hall of Fame Interview with Jason McDonald, author, speaker and time-lord. Follow Jason on Twitter and dev.to

The PIT Show: Reflections and Interviews in the Tech World
Jeff Triplett Tells Us a Story of Django and Community!

The PIT Show: Reflections and Interviews in the Tech World

Play Episode Listen Later Oct 2, 2019 40:18


Jeff Triplett is a veteran in the Django space. In fact, he has done so much for the Python and Django communities and helps to organize DjangoCon. We recorded this before I went on vacation, but there is still a lot to pull from this episode!

Django Chat
Search

Django Chat

Play Episode Listen Later Oct 2, 2019 19:39


DjangoCon 2019: Search from the Ground Up Django Search Tutorial django-filter MDN on sending form data and form data validation Django Q Objects django.contrib.postgres.search PostgreSQL Full Text Search EuroPython 2017 - Full-Text Search in Django with PostgreSQL by Paulo Melchiorre DjangoCon Europe 2018 - On The Look-Out For Your Data by Markus Holtermann DjangoCon US 2015 - Beyond the basics with Elasticsearch by Honza Kral
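A condensed sketch of the two approaches these links cover: basic keyword filtering with Q objects, and PostgreSQL full-text search with django.contrib.postgres. The Article model and its fields are assumptions for illustration.

```python
# Sketch: keyword search with Q objects vs. PostgreSQL full-text search.
# The Article model (with title and body fields) is an assumption.
from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector
from django.db.models import Q

from myapp.models import Article  # hypothetical app and model

def simple_search(term):
    # Case-insensitive "contains" match across two fields.
    return Article.objects.filter(Q(title__icontains=term) | Q(body__icontains=term))

def full_text_search(term):
    # Postgres-only: weight fields and order by how well each row matches.
    vector = SearchVector("title", weight="A") + SearchVector("body", weight="B")
    query = SearchQuery(term)
    return (
        Article.objects.annotate(rank=SearchRank(vector, query))
        .filter(rank__gte=0.1)
        .order_by("-rank")
    )
```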

Django Chat
DjangoCon US 2019 - Jessica Deaton

Django Chat

Play Episode Listen Later Jun 19, 2019 44:28


Jessica Deaton personal website DjangoCon 2019 Organizers Django Events Foundation North America DEP dissolving Django core DSF Paid Internship with Jacob Kaplan-Moss Django Search Tutorial Open Ticket to add Docker to Django docs Django Software Foundation SHAMELESS PLUGS William's books on Django Carlton's website Noumenal

Women Tech Talk
DjangoCon Europe /Tech Conferences

Women Tech Talk

Play Episode Listen Later Apr 13, 2017


On this session of Tech Talk we tape live in Florence, Italy at DjangoCon Europe, live at the Cinema Teatro Odeon. We talk about the conference and tech conferences in general. Alicia talks about her experiences at tech conferences as well as her feelings about how conference formats differ from one another.

Ruby Rogues
277 RR GROWS Method with Andy Hunt

Ruby Rogues

Play Episode Listen Later Sep 14, 2016 1:06


00:30 Introducing Andy Hunt Website Twitter The Pragmatic Bookshelf GROWS Method 5:25 - GROWS Method Dreyfus Model of Skill Acquisition 13:20 - How GROWS solves Agile’s shortcomings 19:50 - GROWS for executives 22:50 - Marketing Ruby Faker Gems Fakercompany.bs 25:30 - GROWS and laying framework for change 29:00 - How empirical is GROWS? 33:35 - How expectations from the Agile Manifesto have changed 36:10 - Prescribing practices that work 40:00 - Getting feedback Burnup and Burndown charts 42:40 - Human limitations 46:00 - Meaning behind GROWS name 50:05 - Knowing when to scale up 53:00 - Agile Fluency Agile Fluency Model by Diana Larson and James Shore 57:30 - The future of GROWS   Picks: Going camping in your front yard (Jessica) California Academy of Sciences in San Francisco (Sam) Exploratorium in San Francisco (Sam) Shoe Dog by Phil Knight (Saron) Espresso Pillows (Saron) “It’s Darkest Before Dawn” DjangoCon 2016 talk by Timothy Allen (Saron) Ruby Book Club Podcast (Saron) Investing in yourself (Andy)

All Ruby Podcasts by Devchat.tv
277 RR GROWS Method with Andy Hunt

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Sep 14, 2016 1:06


00:30 Introducing Andy Hunt Website Twitter The Pragmatic Bookshelf GROWS Method 5:25 - GROWS Method Dreyfus Model of Skill Acquisition 13:20 - How GROWS solves Agile’s shortcomings 19:50 - GROWS for executives 22:50 - Marketing Ruby Faker Gems Fakercompany.bs 25:30 - GROWS and laying framework for change 29:00 - How empirical is GROWS? 33:35 - How expectations from the Agile Manifesto have changed 36:10 - Prescribing practices that work 40:00 - Getting feedback Burnup and Burndown charts 42:40 - Human limitations 46:00 - Meaning behind GROWS name 50:05 - Knowing when to scale up 53:00 - Agile Fluency Agile Fluency Model by Diana Larson and James Shore 57:30 - The future of GROWS   Picks: Going camping in your front yard (Jessica) California Academy of Sciences in San Francisco (Sam) Exploratorium in San Francisco (Sam) Shoe Dog by Phil Knight (Saron) Espresso Pillows (Saron) “It’s Darkest Before Dawn” DjangoCon 2016 talk by Timothy Allen (Saron) Ruby Book Club Podcast (Saron) Investing in yourself (Andy)

Devchat.tv Master Feed
277 RR GROWS Method with Andy Hunt

Devchat.tv Master Feed

Play Episode Listen Later Sep 14, 2016 1:06


00:30 Introducing Andy Hunt Website Twitter The Pragmatic Bookshelf GROWS Method 5:25 - GROWS Method Dreyfus Model of Skill Acquisition 13:20 - How GROWS solves Agile’s shortcomings 19:50 - GROWS for executives 22:50 - Marketing Ruby Faker Gems Fakercompany.bs 25:30 - GROWS and laying framework for change 29:00 - How empirical is GROWS? 33:35 - How expectations from the Agile Manifesto have changed 36:10 - Prescribing practices that work 40:00 - Getting feedback Burnup and Burndown charts 42:40 - Human limitations 46:00 - Meaning behind GROWS name 50:05 - Knowing when to scale up 53:00 - Agile Fluency Agile Fluency Model by Diana Larson and James Shore 57:30 - The future of GROWS   Picks: Going camping in your front yard (Jessica) California Academy of Sciences in San Francisco (Sam) Exploratorium in San Francisco (Sam) Shoe Dog by Phil Knight (Saron) Espresso Pillows (Saron) “It’s Darkest Before Dawn” DjangoCon 2016 talk by Timothy Allen (Saron) Ruby Book Club Podcast (Saron) Investing in yourself (Andy)

Geek Shock
Geek Shock #270 - Max Adventure!

Geek Shock

Play Episode Listen Later Jan 14, 2015 101:02


This week we are joined by Eric Randall and Phillip Fitzlaff to talk about Max Adventure, their pitch on Indiegogo. We also talk about DjangoCon goes One Direction, Dashcon, CW's Atom, the death of Deadpool, the Minority Report TV show, Taylor Negron, 51st State, Shannara, Tales of Halloween, Star Talk, Friday the 13th: the Game, porn stats, and crowdsourcing the Stanley Maze. So get an extra hour in the ball pit, it's time for a Geek Shock! https://www.indiegogo.com/projects/max-adventure-animated-educational-series

Django-NYC
October Meeting - My First API and DjangoCon Recap

Django-NYC

Play Episode Listen Later Oct 13, 2009 40:56


Hani Musallam gives a talk on his first web API implemented with Django, and Sean O'Connor gives a recap of what happened at DjangoCon 2009.
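Purely as an illustration of where a "first web API with Django" usually starts today (the 2009 talk predates these helpers), a minimal JSON endpoint might look like this; the payload and URL are invented, and a real project would likely reach for Django REST Framework instead.

```python
# Illustrative only: a minimal JSON endpoint in plain Django. The payload and
# URL name are made up; serialization of real models is left out on purpose.
from django.http import JsonResponse
from django.urls import path

def book_list(request):
    books = [
        {"id": 1, "title": "Two Scoops of Django"},
        {"id": 2, "title": "Django for Beginners"},
    ]
    return JsonResponse({"books": books})

urlpatterns = [
    path("api/books/", book_list, name="book-list"),
]
```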