Today's guest, Nicholas Carlini, a research scientist at DeepMind, argues that we should be focusing more on what AI can do for us individually, rather than trying to have an answer for everyone.

"How I Use AI" - A Pragmatic Approach

Carlini's blog post "How I Use AI" went viral for good reason. Instead of giving a personal opinion about AI's potential, he simply laid out how he, as a security researcher, uses AI tools in his daily work. He divided it into 12 sections:

* To make applications
* As a tutor
* To get started
* To simplify code
* For boring tasks
* To automate tasks
* As an API reference
* As a search engine
* To solve one-offs
* To teach me
* Solving solved problems
* To fix errors

Each section has specific examples, so we recommend going through it. It also includes all the prompts used; in the "make applications" case, that's 30,000 words total!

My personal takeaway is that the majority of the work AI can do successfully is what humans dislike doing: writing boilerplate code, looking up docs, taking repetitive actions, etc. These are usually boring tasks with little creativity but a lot of structure. This is the strongest argument for why LLMs, especially for code, are more beneficial to senior employees: if you can get the boring stuff out of the way, there's a lot more value you can generate. This is less and less true as you move toward entry-level jobs, which are mostly boring and repetitive tasks. Nicholas argues both sides at ~21:34 in the pod.

A New Approach to LLM Benchmarks

We recently did a Benchmarks 201 episode, a follow-up to our original Benchmarks 101, and some of the issues have stayed the same. Notably, there's a big discrepancy between what benchmarks like MMLU test and what the models are actually used for. Carlini created his own domain-specific language for writing personalized LLM benchmarks.
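To make the flavor of such a DSL concrete, here is a tiny hypothetical sketch in Python of how a pipeline language like this can be built with operator overloading. This is an illustration of the idea only, not Carlini's actual implementation (his real one is linked in the show notes as "My Benchmark for LLMs"); the `Stage` class and stand-in "model" are our own inventions.

```python
# Hypothetical sketch of a pipeline-style benchmark DSL (NOT Carlini's code).
# Each Stage wraps a function; `>>` chains stages, `|` passes if either side passes.

class Stage:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # a >> b: run a, then feed its output into b
        return Stage(lambda x: other.fn(self.fn(x)))

    def __or__(self, other):
        # a | b: succeed if either evaluator succeeds
        return Stage(lambda x: self.fn(x) or other.fn(x))

    def run(self, value):
        return self.fn(value)

def SubstringEvaluator(needle):
    # Pass/fail check: did the needle appear in the output text?
    return Stage(lambda text: needle in text)

def LLMRun(model=str.lower):
    # Stand-in "model" for demonstration; a real version would call an LLM API.
    return Stage(model)

pipeline = LLMRun() >> SubstringEvaluator("hello world")
print(pipeline.run("Write HELLO WORLD in python"))  # True with the stand-in model
```

The point of the operator overloading is that a whole test reads as one left-to-right expression, which is exactly the style of the examples below.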
The idea is simple but powerful:

* Take tasks you've actually needed AI for in the past.
* Turn them into benchmark tests.
* Use these to evaluate new models based on your specific needs.

It can represent very complex tasks, from a single code generation to drawing a US flag using C:

"Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")

"Write a C program that draws an american flag to stdout." >> LLMRun() >> CRun() >> VisionLLMRun("What flag is shown in this image?") >> (SubstringEvaluator("United States") | SubstringEvaluator("USA"))

This approach solves a few problems:

* It measures what's actually useful to you, not abstract capabilities.
* It's harder for model creators to "game" your specific benchmark, a problem that has plagued standardized tests.
* It gives you a concrete way to decide if a new model is worth switching to, similar to how developers might run benchmarks before adopting a new library or framework.

Carlini argues that if even a small percentage of AI users created personal benchmarks, we'd have a much better picture of model capabilities in practice.

AI Security

While much of the AI security discussion focuses on either jailbreaks or existential risks, Carlini's research targets the space in between. Some highlights from his recent work:

* LAION 400M data poisoning: By buying expired domains referenced in the dataset, Carlini's team could inject arbitrary images into models trained on LAION 400M. You can read the paper "Poisoning Web-Scale Training Datasets is Practical" for all the details. This is a great example of expanding the scope beyond the model itself and looking at how the whole system can become vulnerable.
* Stealing model weights: They demonstrated how to extract parts of production language models (like OpenAI's) through careful API queries.
This research, "Extracting Training Data from Large Language Models", shows that even black-box access can leak sensitive information.

* Extracting training data: In some cases, they found ways to make models regurgitate verbatim snippets from their training data. He and Milad Nasr wrote a paper on this as well: "Scalable Extraction of Training Data from (Production) Language Models". They also think this might be applicable to extracting RAG results from a generation.

These aren't just theoretical attacks. They've led to real changes in how companies like OpenAI design their APIs and handle data. If you really miss logit_bias and logit results by token, you can blame Nicholas :)

We also had a ton of fun chatting about things like Conway's Game of Life, how much data can fit on a piece of paper, and porting Doom to JavaScript. Enjoy!

Show Notes

* How I Use AI
* My Benchmark for LLMs
* Doom Javascript port
* Conway's Game of Life
* Tic-Tac-Toe in one printf statement
* International Obfuscated C Code Contest
* Cursor
* LAION 400M poisoning paper
* Man vs Machine at Black Hat
* Model Stealing from OpenAI
* Milad Nasr
* H.D. Moore
* Vijay Bolina
* Cosine.sh
* uuencode

Timestamps

* [00:00:00] Introductions
* [00:01:14] Why Nicholas writes
* [00:02:09] The Game of Life
* [00:05:07] "How I Use AI" blog post origin story
* [00:08:24] Do we need software engineering agents?
* [00:11:03] Using AI to kickstart a project
* [00:14:08] Ephemeral software
* [00:17:37] Using AI to accelerate research
* [00:21:34] Experts vs non-expert users as beneficiaries of AI
* [00:24:02] Research on generating less secure code with LLMs
* [00:27:22] Learning and explaining code with AI
* [00:30:12] AGI speculations?
* [00:32:50] Distributing content without social media
* [00:35:39] How much data do you think you can put on a single piece of paper?
* [00:37:37] Building personal AI benchmarks
* [00:43:04] Evolution of prompt engineering and its relevance
* [00:46:06] Model vs task benchmarking
* [00:52:14] Poisoning LAION 400M through expired domains
* [00:55:38] Stealing OpenAI models from their API
* [01:01:29] Data stealing and recovering training data from models
* [01:03:30] Finding motivation in your work

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we're in the in-person studio, which Alessio has gorgeously set up for us, with Nicholas Carlini. Welcome. Thank you. You're a research scientist at DeepMind. You work at the intersection of machine learning and computer security. You got your PhD from Berkeley in 2018, and also your BA from Berkeley as well. And mostly we're here to talk about your blogs, because you are so generous in just writing up what you know. Well, actually, why do you write?

Nicholas [00:00:41]: Because I feel like it's fun to share what you've done. I don't like writing; I sufficiently didn't like writing that I almost didn't do a PhD, because I knew how much writing was involved in writing papers.
I was terrible at writing when I was younger. I did the remedial writing classes when I was in university, because I was really bad at it. So I still don't enjoy the act of writing. But I feel like it is useful to share what you're doing, and I like being able to talk about the things that I'm doing that I think are fun. And so I write because I want to have something to say, not because I enjoy the act of writing.

Swyx [00:01:14]: But yeah. It's a tool for thought, as they often say. Is there any sort of background or thing that people should know about you as a person? Yeah.

Nicholas [00:01:23]: So I tend to focus on, like you said, security work. I try attacking things, and I want to do high-quality security research; that's mostly what I spend my actual time trying to be a productive member of society doing. But then I get distracted by things, and I just like, you know, working on random fun projects. Like a Doom clone in JavaScript.

Swyx [00:01:44]: Yes.

Nicholas [00:01:45]: Like that. Or, you know, I've done a number of things that have absolutely no utility. But are fun things to have done. And so it's interesting to say, like, you should work on fun things that just are interesting, even if they're not useful in any real way. And so that's what I tend to put up there: after I have completed something I think is fun, or if I think it's sufficiently interesting, I write something down there.

Alessio [00:02:09]: Before we go into AI, LLMs and whatnot, why are you obsessed with the Game of Life? You built multiplexing circuits in the Game of Life, which is mind-boggling. So where did that come from? And then how do you go from just clicking boxes on the UI web version to building multiplexing circuits?

Nicholas [00:02:29]: I like Turing completeness. The definition of Turing completeness is a computer that can run anything, essentially.
And Conway's Game of Life is a very simple 2D cellular automaton where you have cells that are either on or off, and a cell becomes on if in the previous generation some configuration holds true, and off otherwise. It turns out there's a proof that the Game of Life is Turing complete, that you can run any program in principle using Conway's Game of Life. And so you can, therefore someone should. And so I wanted to do it. Some other people have done some similar things, but I got obsessed: if you're going to try and make it work, we already know it's possible in theory. I want to try and actually make something I can run on my computer, like a real computer I can run. And so yeah, I've been going down this rabbit hole of trying to make a CPU that I can run semi-real-time on the Game of Life, and I have been making some reasonable progress there. And yeah, but you know, Turing completeness is just a very fun trap you can go down. A while ago, as part of a research paper, I was able to show that in C, if you call into printf, it's Turing complete. Like printf, you know, which, you know, you can print numbers or whatever, right?

Swyx [00:03:39]: Yeah, but there should be no, like, control flow stuff.

Nicholas [00:03:42]: Because printf has a percent n specifier that lets you write an arbitrary amount of data to an arbitrary location. And the printf format specifier has an index into where it is in the loop that is in memory. So you can overwrite the location of where printf is currently indexing using percent n. So you can get loops, you can get conditionals, and you can get arbitrary data writes again. So we sort of have another Turing complete language using printf, which again has essentially zero practical utility, but it's just, I feel like a lot of people get into programming because they enjoy the art of doing these things.
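[Editor's aside: to make the rule concrete, the standard Game of Life update Nicholas sketches verbally (a cell is alive in the next generation if it has exactly three live neighbors, or is alive and has exactly two) fits in a few lines of Python. This is just the basic step, nothing like the Turing-complete CPU construction he describes:]

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # a period-2 oscillator
print(step(step(blinker)) == blinker)  # True
```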
And then they go work on developing some software application and lose all the joy. And I want to still have joy in doing these things. And so on occasion, I try to stop doing productive, meaningful things and just ask, what's a fun thing that we can do, and try and make that happen.

Alessio [00:04:39]: Awesome. So you've been kind of a pioneer in the AI security space. You've done a lot of talks starting back in 2018. We'll leave that to the end, because I know the security part has maybe a smaller audience, but it's a very intense audience, so I think that'll be fun. But everybody in our Discord started posting your "How I Use AI" blog post and we were like, we should get Carlini on the podcast. And then you were so nice: I sent you an email and you were like, okay, I'll come.

Swyx [00:05:07]: And I was like, oh, I thought that would be harder.

Alessio [00:05:10]: I think there's, as you said in the blog post, a lot of misunderstanding about what LLMs can actually be used for. What are they useful at? What are they not good at? And whether or not it's even worth arguing about what they're not good at, because they're obviously not. If they cannot count the R's in a word, that's just not what they do. So how painful was it to write such a long post, given that you just said that you don't like to write? And then we can kind of run through the things, but maybe just talk about the motivation, why you thought it was important to do it.

Nicholas [00:05:39]: Yeah. So I wanted to do this because I feel like most people who write about language models being good or bad have some underlying message: they have their camp, and their camp is like, AI is bad or AI is good or whatever. And they spin whatever they're going to say according to their ideology. And they don't actually just look at what is true in the world.
So I've read a lot of things where people say how amazing they are and how all programmers are going to be obsolete by 2024. And I've read a lot of things where people say they can't do anything useful at all, and, you know, it's only the people who've come off of blockchain crypto stuff and are here to make another quick buck and move on. And I don't really agree with either of these. And I'm not someone who cares really one way or the other how these things go. And so I wanted to write something that just says, look, let's sort of ground ourselves in reality and what we can actually do with these things. Because my actual research is in security and showing that these models have lots of problems. This is my day-to-day job: saying we probably shouldn't be using these in lots of cases. So I thought I could have a little bit of credibility in saying, it is true, they have lots of problems, we maybe shouldn't be deploying them in lots of situations. And still, they are also useful. And that is the bit that I wanted to get across: I'm not here to try and sell you on anything. I just think that they're useful for the kinds of work that I do. And hopefully some people would listen. And it turned out that a lot more people liked it than I thought. But yeah, that was the motivation behind why I wanted to write this.

Alessio [00:07:15]: So you had about a dozen sections of how you actually use AI. Maybe we can just kind of run through them all. And then the ones where you have extra commentary to add, we can... Sure.

Nicholas [00:07:27]: Yeah, yeah. I didn't put as much thought into this as maybe was deserved. I probably spent, I don't know, definitely less than 10 hours putting this together.

Swyx [00:07:38]: Wow.

Alessio [00:07:39]: It took me close to that to do a podcast episode. So that's pretty impressive.

Nicholas [00:07:43]: Yeah. I wrote it in one pass.
I've gotten a number of emails of like, you got this editing thing wrong, you got this other thing wrong. It's like, I just haven't looked at it. I feel like I still don't like writing. And so because of this, the way I tend to treat it is: I will put it together into the best format that I can at the time, and then put it on the internet, and then never change it. And this is an aspect of the research side of me: once a paper is published, it is done as an artifact that exists in the world. Otherwise I could forever edit the very first thing I ever put up to make it the most perfect version of what it is, and I would do nothing else. And so I find it useful to say, this is the artifact, I will spend some certain amount of hours on it, which is what I think it is worth, and then I will just...

Swyx [00:08:22]: Yeah.

Nicholas [00:08:23]: Timeboxing.

Alessio [00:08:24]: Yeah. Stop. Yeah. Okay. We just recorded an episode with the founder of Cosine, which is like an AI software engineer colleague. You said it took you 30,000 words to get GPT-4 to build you the "can GPT-4 solve this task" kind of app. Where are we on the spectrum between ChatGPT is all you need to actually build something versus I need a full-on agent that does everything for me?

Nicholas [00:08:46]: Yeah. Okay. So this was an... So I built a web app last year sometime that was just a fun demo where you can guess whether or not GPT-4 at the time could solve a given task. This is, as far as web apps go, very straightforward. You need basic HTML, CSS, you have a little slider that moves, you have a button, you sort of animate the text coming to the screen. The reason people are going here is not because they want to see my wonderful HTML, right? I used to know how to do modern HTML, in 2007, 2008. I was very good at fighting with IE6 and these kinds of things. I knew how to do that.
I haven't had to build any web app stuff in the meantime, which means that I know how everything works, but I don't know any of the new... Flexbox is new to me. Flexbox is like 10 years old at this point, but it's just amazing being able to go to the model and just say, write me this thing, and it will give me all of the boilerplate that I need to get going. Of course it's imperfect. It's not going to get you the right answer, and it doesn't do anything that's complicated right now, but it gets you to the point where the only remaining work that needs to be done is the interesting hard part for me, the actual novel part. Even the current models, I think, are entirely good enough at doing this kind of thing that they're very useful. It may be the case that if you had something, like you were saying, a smarter agent that could debug problems by itself, that might be even more useful. Currently though, you make a model into an agent by just copying and pasting error messages, for the most part. That's what I do: you run it and it gives you some code that doesn't work, and either I'll fix the code, or it will give me buggy code and I won't know how to fix it, and I'll just copy and paste the error message and say, it tells me this. What do I do? And it will just tell me how to fix it. You can't trust these things blindly, but I feel like most people on the internet already understand that things on the internet, you can't trust blindly. And so this is not a big mental shift you have to go through to understand that it is possible to read something and find it useful, even if it is not completely perfect in its output.

Swyx [00:10:54]: It's very human-like in that sense. It's the same ring of trust. I kind of think about it that way, if you had trust levels.

Alessio [00:11:03]: And there's maybe a couple that tie together.
So there was "to make applications", and then there's "to get started", which is similar, you know: kickstart maybe a project that you know the LLM cannot solve on its own. Is that kind of how you think about it?

Nicholas [00:11:15]: Yeah. So getting started on things is one of the cases where I think it's really great, where I sort of use it as a personalized "help me use this technology I've never used before". So for example, I had never used Docker before January. I know what Docker is. Lucky you. Yeah, like, I'm a computer security person; I have read lots of papers on, you know, all the technology behind how these things work. You know, I know all the exploits on them, I've done some of these things, but I had never actually used Docker. But I wanted to be able to run the outputs of language model stuff in some controlled, contained environment, which I know is the right application. So I just ask it, I want to use Docker to do this thing; tell me how to run a Python program in a Docker container. And it gives me a thing. I'm like, step back. You said Docker Compose; I do not know what this word Docker Compose is. Is this Docker? Help me. And it'll sort of tell me all of these things. And I'm sure this knowledge is out there on the internet; this is not some groundbreaking thing that I'm doing. But I just wanted it as a small piece of one thing I was working on, and I didn't want to learn Docker from first principles. At some point, if I need it, I can do that; I have the background that I can make that happen. But what I wanted to do was thing one. And it's very easy to get bogged down in the details of this other thing that helps you accomplish your end goal. And I just want to say, tell me enough about Docker so I can do this particular thing. And I can check that it's doing the safe thing.
I sort of know enough about that from, you know, my other background. And so I can just have the model teach me exactly the one thing I want to know and nothing more. I don't need to worry about other things that the writer of a tutorial thinks are important that actually aren't. I can just stop the conversation and say, no, this is boring to me; explain this detail I don't understand. I think that's why this was very useful for me. It would have taken me, you know, several hours to figure out some things that take 10 minutes if you can just ask exactly the question you want the answer to.

Alessio [00:13:05]: Have you had any issues with newer tools? Have you felt any meaningful kind of cutoff date where there's not enough data on the internet, or? I'm sure that the answer to this is yes.

Nicholas [00:13:16]: But I tend to just not use most of these things. I feel like the significant way in which I use machine learning models is probably very different than most people, in that I'm a researcher and I get to pick what tools I use, and most of the things that I work on are fairly small projects. And so I can entirely see how someone who is in a big giant company, where they have their own proprietary legacy code base of a hundred million lines of code or whatever, just might not be able to use things the same way that I do. I still think there are lots of use cases there that are entirely reasonable that are not the same ones that I've put down. But I wanted to talk about what I have personal experience in being able to say is useful. And I would like it very much if someone who is in one of these environments would be able to describe the ways in which they find current models useful to them.
And not, you know, philosophize on what someone else might be able to find useful, but actually say, here are real things that I have done that I found useful for me.

Swyx [00:14:08]: Yeah, this is what I often do to encourage people to write more, to share their experiences, because they often fear being attacked on the internet. But you are the ultimate authority on how you use things, and that's objectively true, so it cannot be debated. One thing that people are very excited about is the concept of ephemeral software, or personal software. This use case in particular basically lowers the activation energy for creating software, which I like as a vision. I don't think I have taken as much advantage of it as I could. I feel guilty about that. But also, we're trending towards there.

Nicholas [00:14:47]: Yeah. No, I mean, I do think that this is a direction that is exciting to me. One of the things I wrote was that a lot of the ways I use these models are for one-off things that I just need to happen, that I'm going to throw away in five minutes. And you can.

Swyx [00:15:01]: Yeah, exactly.

Nicholas [00:15:02]: Right. It's the kind of thing where it would not have been worth it for me to have spent 45 minutes writing this, because I don't need the answer that badly. But if it will only take me five minutes, then I'll just figure it out, run the program, and then get it right. And if it turns out that you ask the thing and it doesn't give you the right answer, well, I didn't actually need the answer that badly in the first place. Either I can decide to dedicate the 45 minutes or I cannot, but the cost of trying is fairly low. You see what the model can do.
And if it can't, then, okay. When you're using these models, if you're always getting the answer you want, it means you're not asking them hard enough questions.

Swyx [00:15:35]: Say more.

Nicholas [00:15:37]: Lots of people only use them for very small particular use cases, and it always does the thing that they want. Yeah.

Swyx [00:15:43]: Like they use it like a search engine.

Nicholas [00:15:44]: Yeah. Or like one particular case. And if you're finding that when you're using these, it's always giving you the answer that you want, then probably it has more capabilities than you're actually using. And so oftentimes, when I have something that I'm curious about, I try to just feed it into the model and be like, well, maybe it's just solved my problem for me. You know, most of the time it doesn't, but on occasion it's done things that would have taken me, you know, a couple hours, and it's been great and just solved everything immediately. And if it doesn't, then it's usually easier to verify whether or not the answer is correct than to have written it in the first place. And so you check, and you're like, well, that's just... you're entirely misguided. Nothing here is right. I'm not going to do this. I'm going to go write it myself or whatever.

Alessio [00:16:21]: Even for non-tech: I had to fix my irrigation system. I had an old irrigation system and I didn't know how to program it. I took a photo, I sent it to Claude, and it was like, oh yeah, that's the RT 900. This is exactly it. I was like, oh wow, you know a lot of stuff.

Swyx [00:16:34]: Was it right?

Alessio [00:16:35]: Yeah, it was right.

Swyx [00:16:36]: It worked. Did you compare with OpenAI?

Alessio [00:16:38]: No, I canceled my OpenAI subscription, so I'm a Claude boy. Do you have a way to think about this one-off software thing?
One way I talk to people about it is: LLMs are kind of converging to semantic serverless functions, you know? You can say something and it can run the function, in a way, and then that's it, it just kind of dies there. Do you have a mental model for how long it should live, or anything like that?

Nicholas [00:17:02]: I don't think I have anything interesting to say here, no. I will take whatever tools are available in front of me and try and see if I can use them in meaningful ways. And if they're helpful, then great. If they're not, then fine. And, you know, I'm very excited about seeing all these people who are trying to make better applications that use these, all these kinds of things. And I think that's amazing. I would like to see more of it, but I do not spend my time thinking about how to make this any better.

Alessio [00:17:27]: What's the most underrated thing in the list? I know there's simplify code, solving boring tasks; or maybe is there something that you forgot to add that you want to throw in there?

Nicholas [00:17:37]: I mean, in the list I only put things that people could look at and go, I understand how this solved my problem. I didn't want to put things where the model was very useful to me, but it would not be clear to someone else that it was actually useful. So for example, one of the things that I use it a lot for is debugging errors. But the errors that I have are very much not the errors that anyone else in the world will have. And in order to understand whether or not the solution was right, you just have to trust me on it. Because, you know, I got my machine into a state where CUDA was not talking to whatever some other thing, the versions were mismatched, something, something, something, and everything was broken. And I could figure it out through interaction with the model, and it, like, told me the steps I needed to take.
But at the end of the day, when you look at the conversation, you just have to trust me that it worked. And I didn't want to write things online that were this "you have to trust me on what I'm saying" kind of thing. I want everything that I said to have evidence: here's the conversation, you can go and check whether or not this actually solved the task as I said the model did. Because a lot of people, I feel like, say, I used a model to solve this very complicated task, and what they mean is the model did 10% and I did the other 90% or something. I wanted everything to be verifiable. And so one of the biggest use cases for me, I didn't describe at all, because it's not the kind of thing that other people could have verified by themselves. So that maybe is one of the things that I wish I had said a little bit more about, just stating the way that this is done, because I feel like this didn't come across quite as well. But yeah, of the things that I talked about, the thing that I think is most underrated is the ability of it to solve the uninteresting parts of problems for me. One of the biggest arguments that I don't understand why people make is: the model can only do things that people have done before, therefore the model is not going to be helpful in doing new research or discovering new things. As someone whose day job is to do new things: what is research? Research is doing something literally no one else in the world has ever done before. So this is what I do every single day. 90% of it is not doing something new; 90% of it is doing things a million people have done before, and then a little bit of something that is new. There's a reason why we say we stand on the shoulders of giants. It's true. Almost everything that I do is something that's been done many, many times before. And that is the piece that can be automated.
Even if the thing that I'm doing as a whole is new, it is almost certainly the case that the small pieces that build up to it are not. And a number of people who use these models, I feel like expect that they can either solve the entire task or none of the task. But now I find myself very often, even when doing something very new and very hard, having models write the easy parts for me. And the reason I think this is so valuable, everyone who programs understands this, like you're currently trying to solve some problem and then you get distracted. And whatever the case may be, someone comes and talks to you, you have to go look up something online, whatever it is. You lose a lot of time to that. And one of the ways we get distracted that we don't usually think about is: you're solving some hard problem and you realize you need a helper function that does X, where X is like, it's a known algorithm. Any person in the world, you say like, give me the algorithm: I have a sparse graph, and I need to make it dense. You can do this by doing some matrix multiplies. It's like, this is a solved problem. I knew how to do this 15 years ago, but it distracts me from the problem I'm thinking about in my mind. I needed this done. And so instead of using my mental capacity and solving that problem and then coming back to the problem I was originally trying to solve, you could just ask the model, please solve this problem for me. It gives you the answer. You run it. You can check that it works very, very quickly. And now you go back to solving the problem without having lost all the mental state. And I feel like this is one of the things that's been very useful for me.Swyx [00:21:34]: And in terms of this concept of expert users versus non-expert users, floors versus ceilings, you had some strong opinion here that like, basically it actually is more beneficial for non-experts.Nicholas [00:21:46]: Yeah, I don't know. I think it could go either way. 
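The "known helper function" he describes, densifying a sparse graph, is exactly the kind of thing a model can hand back in seconds. A minimal illustrative Python version (not the code from the conversation, and using a plain edge list rather than matrix multiplies, for clarity) might look like:

```python
def edges_to_dense(edges, n):
    """Turn a sparse edge-list representation of a graph into a
    dense n x n adjacency matrix (1 where an edge exists, else 0)."""
    adj = [[0] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = 1
    return adj

# A 3-node graph with edges 0->1 and 1->2, stored sparsely.
print(edges_to_dense([(0, 1), (1, 2)], 3))
# [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
```

The point is not that this is hard; it is that checking it takes seconds, so delegating it costs no mental state.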
Let me give you the argument for both of these. Yes. So I can only speak on the expert user behalf because I've been doing computers for a long time. And so yeah, the cases where it's useful for me are exactly these cases where I can check the output. I know, and anything the model could do, I could have done. I could have done better. I can check every single thing that the model is doing and make sure it's correct in every way. And so I can only speak and say, definitely it's been useful for me. But I also see a world in which this could be very useful for the kinds of people who do not have this knowledge, with caveats, because I'm not one of these people. I don't have this direct experience. But one of these big ways that I can see this is for things that you can check fairly easily, someone who could never have asked or have written a program themselves to do a certain task could just ask for the program that does the thing. And you know, some of the times it won't get it right. But some of the times it will, and they'll be able to have the thing in front of them that they just couldn't have done before. And we see a lot of people trying to do applications for this, like integrating language models into spreadsheets. Spreadsheets run the world. And there are some people who know how to do all the complicated spreadsheet equations and various things, and other people who don't, who just use the spreadsheet program but just manually do all of the things one by one by one by one. And this is a case where you could have a model that could try and give you a solution. 
And as long as the person is rigorous in testing that the solution actually does the correct thing, and this is the part that I'm worried about most, you know, I think depending on these systems in ways that we shouldn't, like, this is what my research is about, my research is entirely on this, like, you probably shouldn't trust these models to do the things in adversarial situations, like, I understand this very deeply. And so I think that it's possible for people who don't have this knowledge to make use of these tools in some ways, but I'm worried that it might end up in a world where people just blindly trust them, deploy them in situations that they probably shouldn't, and then someone like me gets to come along and just break everything because everything is terrible. And so I am very, very worried about that being the case, but I think if done carefully it is possible that these could be very useful.Swyx [00:23:54]: Yeah, there is some research out there that shows that when people use LLMs to generate code, they do generate less secure code.Nicholas [00:24:02]: Yeah, Dan Boneh has a nice paper on this. There are a bunch of papers that touch on exactly this.Swyx [00:24:07]: My slight issue is, you know, is there an agenda here?Nicholas [00:24:10]: I mean, okay, yeah, Dan Boneh, at least the one they have, like, I fully trust everything that sort of...Swyx [00:24:15]: Sorry, I don't know who Dan is.Nicholas [00:24:17]: He's a professor at Stanford. Yeah, he and some students have some things on this. Yeah, there's a number. I agree that a lot of the stuff feels like people have an agenda behind it. There are some that don't, and I trust them to have done the right thing. 
I also think, even on this though, we have to be careful because the argument, whenever someone says x is true about language models, you should always append the suffix for current models because I'll be the first to admit I was one of the people who was very much on the opinion that these language models are fun toys and are going to have absolutely no practical utility. If you had asked me this, let's say, in 2020, I still would have said the same thing. After I had seen GPT-2, I had written a couple of papers studying GPT-2 very carefully. I still would have told you these things are toys. And when I first read the RLHF paper and the instruction tuning paper, I was like, nope, this is this thing that these weird AI people are doing. They're trying to make some analogies to people that makes no sense. It's just like, I don't even care to read it. I saw what it was about and just didn't even look at it. I was obviously wrong. These things can be useful. And I feel like a lot of people had the same mentality that I did and decided not to change their mind. And I feel like this is the thing that I want people to be careful about. I want them to at least know what is true about the world so that they can then see that maybe they should reconsider some of the opinions that they had from four or five years ago that may just not be true about today's models.Swyx [00:25:47]: Specifically because you brought up spreadsheets, I want to share my personal experience because I think Google has done a really good job that people don't know about, which is if you use Google Sheets, Gemini is integrated inside of Google Sheets and it helps you write formulas. Great.Nicholas [00:26:00]: That's news to me.Swyx [00:26:01]: Right? They don't maybe do a good job. Unless you watch Google I.O., there was no other opportunity to learn that Gemini is now in your Google Sheets. And so I just don't write formulas manually anymore. It just prompts Gemini to do it for me. 
And it does it.Nicholas [00:26:15]: One of the problems that these machine learning models have is a discoverability problem. I think this will be figured out. I mean, it's the same problem that you have with any assistant. You're given a blank box and you're like, what do I do with it? I think this is great. More of these things, it would be good for them to exist. I want them to exist in ways that we can actually make sure that they're done correctly. I don't want to just have them be pushed into more and more things just blindly. I feel like lots of people, there are far too many X plus AI, where X is like arbitrary thing in the world that has nothing to do with it and could not be benefited at all. And they're just doing it because they want to use the word. And I don't want that to happen.Swyx [00:26:58]: You don't want an AI fridge?Nicholas [00:27:00]: No. Yes. I do not want my fridge on the internet.Swyx [00:27:03]: I do not want... Okay.Nicholas [00:27:05]: Anyway, let's not go down that rabbit hole. I understand why some of that happens, because people want to sell things or whatever. But I feel like a lot of people see that and then they write off everything as a result of it. And I just want to say, there are allowed to be people who are trying to do things that don't make any sense. Just ignore them. Do the things that make sense.Alessio [00:27:22]: Another chunk of use cases was learning. So both explaining code, being an API reference, all of these different things. Any suggestions on how to go at it? I feel like one thing is generate code and then explain to me. One way is just tell me about this technology. Another thing is like, hey, I read this online, kind of help me understand it. Any best practices on getting the most out of it?Swyx [00:27:47]: Yeah.Nicholas [00:27:47]: I don't know if I have best practices. 
I have how I use them.Swyx [00:27:51]: Yeah.Nicholas [00:27:51]: I find it very useful for cases where I understand the underlying ideas, but I have never used them in this way before. I know what I'm looking for, but I just don't know how to get there. And so yeah, as an API reference is a great example. The tool everyone always picks on is like FFmpeg. No one in the world knows the command line arguments to do what they want. They're like, make the thing faster. I want lower bitrate, like dash V. Once you tell me what the answer is, I can check. This is one of these things where it's great for these kinds of things. Or in other cases, things where I don't really care that the answer is 100% correct. So for example, I do a lot of security work. Most of security work is reading some code you've never seen before and finding out which pieces of the code are actually important. Because, you know, most of the program doesn't actually have anything to do with security. It has, you know, the display piece or the other piece or whatever. And like, you just want to ignore all of that. So one very fun use of models is to like, just have it describe all the functions and just skim it and be like, wait, which ones look like approximately the right things to look at? Because otherwise, what are you going to do? You're going to have to read them all manually. And when you're reading them manually, you're going to skim the function anyway, and not just figure out what's going on perfectly. Like you already know that when you're going to read these things, what you're going to try and do is figure out roughly what's going on. Then you'll delve into the details. This is a great way of just doing that, but faster, because it will abstract most of what's going on.Swyx [00:29:21]: Right.Nicholas [00:29:21]: It's going to be wrong some of the time. 
I don't care.Swyx [00:29:23]: I would have been wrong too.Nicholas [00:29:24]: And as long as you treat it with this way, I think it's great. And so like one of the particular use cases I have in the thing is decompiling binaries, where oftentimes people will release a binary. They won't give you the source code. And you want to figure out how to attack it. And so one thing you could do is you could try and run some kind of decompiler. It turns out for the thing that I wanted, none existed. And so I spent too many hours doing it by hand. Before I first thought, why am I doing this? I should just check if the model could do it for me. And it turns out that it can. And it can turn the compiled source code, which is impossible for any human to understand, into the Python code that is entirely reasonable to understand. And it doesn't run. It has a bunch of problems. But it's so much nicer that it's immediately a win for me. I can just figure out approximately where I should be looking, and then spend all of my time doing that by hand. And again, you get a big win there.Swyx [00:30:12]: So I fully agree with all those use cases, especially for you as a security researcher and having to dive into multiple things. I imagine that's super helpful. I do think we want to move to your other blog post. But you ended your post with a little bit of a teaser about your next post and your speculations. What are you thinking about?Nicholas [00:30:34]: So I want to write something. And I will do that at some point when I have time, maybe after I'm done writing my current papers for ICLR or something, where I want to talk about some thoughts I have for where language models are going in the near-term future. 
The reason why I want to talk about this is because, again, I feel like the discussion tends to be people who are either very much AGI by 2027, or always five years away, or are going to make statements of the form, you know, LLMs are the wrong path, and we should be abandoning this, and we should be doing something else instead. And again, I feel like people tend to look at this and see these two polarizing options and go, well, those obviously are both very far extremes. Like, how do I actually, like, what's a more nuanced take here? And so I have some opinions about this that I want to put down, just saying, you know, I have wide margins of error. I think you should too. If you would say there's a 0% chance that something, you know, the models will get very, very good in the next five years, you're probably wrong. If you're going to say there's a 100% chance that they will in the next five years, then you're probably wrong. And like, to be fair, most of the people, if you read behind the headlines, actually say something like this. But it's very hard to get clicks on the internet of like, some things may be good in the future. Like, everyone wants like, you know, a very, like, nothing is going to be good. This is entirely wrong. It's going to be amazing. You know, like, they want to see this. I want people who have negative reactions to these kinds of extreme views to be able to at least say, like, to tell them, there is something real here. It may not solve all of our problems, but it's probably going to get better. I don't know by how much. And that's basically what I want to say. And then at some point, I'll talk about the safety and security things as a result of this. Because the way in which security intersects with these things depends a lot on exactly how people use these tools. 
You know, if it turns out to be the case that these models get to be truly amazing and can solve, you know, tasks completely autonomously, that's a very different security world to be living in than if there's always a human in the loop. And the types of security questions I would want to ask would be very different. And so I think, you know, in some very large part, understanding what the future will look like a couple of years ahead of time is helpful for figuring out which problems, as a security person, I want to solve now.Alessio [00:32:50]: You mentioned getting clicks on the internet, but you don't even have, like, an X account or anything. How do you get people to read your stuff? What's your distribution strategy? Because this post was popping up everywhere. And then people on Twitter were like, Nicholas Carlini wrote this. Like, what's his handle? It's like, he doesn't have it. It's like, how did you find it? What's the story?Nicholas [00:33:07]: So I have an RSS feed and an email list. And that's it. I don't like most social media things. On principle, I feel like they have some harms. As a person, I have a problem when people say things that are wrong on the internet. And I would get nothing done if I had a Twitter. I would spend all of my time correcting people and getting into fights. And so I feel like it is just useful for me for this not to be an option. I tend to just post things online. Yeah, it's a very good question. I don't know how people find it. I feel like for some things that I write, other people think it resonates with them. And then they put it on Twitter. And...Swyx [00:33:43]: Hacker News as well.Nicholas [00:33:44]: Sure, yeah. I am... Because my day job is doing research, I get no value for having this be picked up. There's no whatever. I don't need to be someone who has to have this other thing to give talks. And so I feel like I can just say what I want to say. And if people find it useful, then they'll share it widely. 
You know, this one went pretty wide. I wrote a thing, whatever, sometime late last year, about how to recover data off of an Apple ProFile drive from the 1980s. This probably got, I think, like 1000x fewer views than this. But I don't care. Like, that's not why I'm doing this. Like, this is the benefit of having a thing that I actually care about, which is my research. I would care much more if that didn't get seen. This is like a thing that I write because I have some thoughts that I just want to put down.Swyx [00:34:32]: Yeah. I think it's the long form thoughtfulness and authenticity that is sadly lacking sometimes in modern discourse that makes it attractive. And I think now you have a little bit of a brand of you are an independent thinker, writer, person, that people are tuned in to pay attention to whatever is next coming.Nicholas [00:34:52]: Yeah, I mean, this kind of worries me a little bit. I don't like whenever I have a popular thing that like, and then I write another thing, which is like entirely unrelated. Like, I don't, I don't...Swyx [00:35:01]: You should actually just throw people off right now.Nicholas [00:35:02]: Exactly. I'm trying to figure out, like, I need to put something else online. So, like, the last two or three things I've done in a row have been, like, actually, like, things that people should care about.Swyx [00:35:10]: Yes.Nicholas [00:35:11]: So, I have a couple of things. I'm trying to figure out which one do I put online to just, like, cull the list of people who have subscribed to my email. And so, like, tell them, like, no, like, what you're here for is not informed, well-thought-through takes. Like, what you're here for is whatever I want to talk about. And if you're not up for that, then, like, you know, go away. 
Like, this is not what I want out of my personal website.Swyx [00:35:27]: So, like, here's, like, top 10 enemies or something.Alessio [00:35:30]: What's the next project you're going to work on that is completely unrelated to research LLMs? Or what games do you want to port into the browser next?Swyx [00:35:39]: Okay. Yeah.Nicholas [00:35:39]: So, maybe.Swyx [00:35:41]: Okay.Nicholas [00:35:41]: Here's a fun question. How much data do you think you can put on a single piece of paper?Swyx [00:35:47]: I mean, you can think about bits and atoms. Yeah.Nicholas [00:35:49]: No, like, normal printer. Like, I gave you an office printer. How much data can you put on a piece of paper?Alessio [00:35:54]: Can you re-decode it? So, like, you know, base64 or whatever. Yeah, whatever you want.Nicholas [00:35:59]: Like, you get normal off-the-shelf printer, off-the-shelf scanner. How much data?Swyx [00:36:03]: I'll just throw out there. Like, 10 megabytes. That's enormous. I know.Nicholas [00:36:07]: Yeah, that's a lot.Swyx [00:36:10]: Really small fonts. That's my question.Nicholas [00:36:12]: So, I have a thing. It does about a megabyte.Swyx [00:36:14]: Yeah, okay. There you go. I was off by an order of magnitude.Nicholas [00:36:16]: So, in particular, it's about 1.44 megabytes. A floppy disk.Swyx [00:36:21]: Yeah, exactly.Nicholas [00:36:21]: So, this is supposed to be the title at some point. It's a floppy disk.Swyx [00:36:24]: A paper is a floppy disk. Yeah.Nicholas [00:36:25]: So, this is a little hard because, you know. So, you can do the math and you get 8.5 by 11. You can print at 300 by 300 DPI. And this gives you 2 megabytes. And so, every single pixel, you need to be able to recover up to like 90 plus percent. Like, 95 percent. Like, 99 point something percent accuracy. In order to be able to actually decode this off the paper. This is one of the things that I'm considering. 
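The back-of-envelope arithmetic here is easy to reproduce. Assuming a letter-size 8.5 x 11 inch page at 300 DPI and one bit per recoverable pixel (assumptions for illustration, not his exact encoding, which packs in more), the raw budget lands near a floppy disk's worth of data:

```python
# Raw data budget of a printed page, under assumed parameters:
# 8.5 x 11 inch page, 300 DPI printer/scanner, 1 bit per pixel.
width_px = int(8.5 * 300)      # 2550 pixels across
height_px = 11 * 300           # 3300 pixels down
total_bits = width_px * height_px
total_bytes = total_bits // 8  # before any error-correction overhead
print(total_bits, total_bytes)
# 8415000 1051875
```

More bits per pixel raise the ceiling, and error correction lowers it, which is why near-perfect per-pixel recovery matters so much.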
I need to get a couple more things working for this. Where, you know, again, I'm running into some random problems. But this is probably, this will be one thing that I'm going to talk about. There's this contest called the International Obfuscated C Code Contest, which is amazing. People try and write the most obfuscated C code that they can. Which is great. And I have a submission for that whenever they open up the next one for it. And I'll write about that submission. I have a very fun gate-level emulation of an old CPU that runs like fully precisely. And it's a fun kind of thing. Yeah.Swyx [00:37:20]: Interesting. Your comment about the piece of paper reminds me of when I was in college. And you would have like one cheat sheet that you could write. So, you have a formula, a theoretical limit for bits per inch. And, you know, that's how much I would squeeze in really, really small. Yeah, definitely.Nicholas [00:37:36]: Okay.Swyx [00:37:37]: We are also going to talk about your benchmarking. Because you released your own benchmark that got some attention, thanks to some friends on the internet. What's the story behind your own benchmark? Do you not trust the open source benchmarks? What's going on there?Nicholas [00:37:51]: Okay. Benchmarks tell you how well the model solves the task the benchmark is designed to solve. For a long time, models were not useful. And so, the benchmark that you tracked was just something someone came up with, because you need to track something. All of deep learning exists because people tried to make models classify digits and classify images into a thousand classes. There is no one in the world who cares specifically about the problem of distinguishing between 300 breeds of dog for an image that's 224 by 224 pixels. And yet, like, this is what drove a lot of progress. And people did this not because they cared about this problem, but because they wanted to just measure progress in some way. And a lot of benchmarks are of this flavor. 
You want to construct a task that is hard, and we will measure progress on this benchmark, not because we care about the problem per se, but because we know that progress on this is in some way correlated with making better models. And this is fine when you don't want to actually use the models that you have. But when you want to actually make use of them, it's important to find benchmarks that track with whether or not they're useful to you. And the thing that I was finding is that there would be model after model after model that was being released that would find some benchmark that they could claim state-of-the-art on and then say, therefore, ours is the best. And that wouldn't be helpful to me to know whether or not I should then switch to it. So the argument that I tried to lay out in this post is that more people should make benchmarks that are tailored to them. And so what I did is I wrote a domain-specific language that anyone can write for and say, you can take tasks that you have wanted models to solve for you, and you can put them into your benchmark that's the thing that you care about. And then when a new model comes out, you benchmark the model on the things that you care about. And you know that you care about them because you've actually asked for those answers before. And if the model scores well, then you know that for the kinds of things that you have asked models for in the past, it can solve these things well for you. This has been useful for me because when another model comes out, I can run it. I can see, does this solve the kinds of things that I care about? And sometimes the answer is yes, and sometimes the answer is no. And then I can decide whether or not I want to use that model or not. I don't want to say that existing benchmarks are not useful. They're very good at measuring the thing that they're designed to measure. But in many cases, what that's designed to measure is not actually the thing that I want to use it for. 
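His actual harness is a full domain-specific language, but the core loop he describes, pairing each question you have actually asked a model before with an automatic checker and scoring any new model against the list, can be sketched in a few lines. `query_model` below is a hypothetical stand-in for a real API call:

```python
def run_benchmark(tasks, query_model):
    """Score a model on a personal benchmark: each task pairs a prompt
    you have actually asked before with a checker for its answer."""
    passed = sum(1 for prompt, check in tasks if check(query_model(prompt)))
    return passed / len(tasks)

# Tasks drawn from real past questions, with automatic checkers.
tasks = [
    ("Give me a Python expression for 2 to the power 10.",
     lambda ans: "1024" in ans or "2**10" in ans),
    ("What git command shows recent commits, one per line?",
     lambda ans: "git log" in ans),
]

# A scripted fake model for demonstration; swap in a real API call.
def fake_model(prompt):
    return "git log --oneline" if "git" in prompt else "2**10"

print(run_benchmark(tasks, fake_model))
# 1.0
```

When a new model ships, you rerun the same list and get a score that reflects your own usage rather than a public leaderboard's.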
And I expect that the way that I want to use it is different from the way that you want to use it. And I would just like more people to have these things out there in the world. And the final reason for this is, it is very easy, if you want to make a model good at some benchmark, to make it good at that benchmark: you can find the distribution of data that you need and train the model to be good on the distribution of data. And then you have your model that can solve this benchmark well. And by having a benchmark that is not very popular, you can be relatively certain that no one has tried to optimize their model for your benchmark.Swyx [00:40:40]: And I would like this to be-Nicholas [00:40:40]: So publishing your benchmark is a little bit-Swyx [00:40:43]: Okay, sure.Nicholas [00:40:43]: Contextualized. So my hope in doing this was not that people would use mine as theirs. My hope in doing this was that- You should make yours. Yes, you should make your benchmark. And if, for example, there were even a very small fraction of people, 0.1% of people who made a benchmark that was useful for them, this would still be hundreds of new benchmarks out there. I might not want to make one myself, but I might know that the kinds of work I do are a little bit like this person's, a little bit like that person's. I'll go check how it does on their benchmarks. And I'll see, roughly, I'll get a good sense of what's going on. Because the alternative is people just do this vibes-based evaluation thing, where you interact with the model five times, and you see if it worked on the kinds of things that you like, just your toy questions. But five questions is a very low-bit output of whether or not it works for this thing. And if you could just automate running 100 questions for you, it's a much better evaluation. So that's why I did this.Swyx [00:41:37]: Yeah, I like the idea of going through your chat history and actually pulling out real-life examples. 
I regret to say that I don't think my chat history is used as much these days, because I'm using Cursor, the native AI IDE. So your examples are all coding-related. And the immediate question is, now that you've written the How I Use AI post, which is a little bit broader, are you able to translate all these things to evals? Are some things unevaluable?Nicholas [00:42:03]: Right. A number of things that I do are harder to evaluate. So this is the problem with a benchmark, is you need some way to check whether or not the output was correct. And so all of the kinds of things that I can put into the benchmark are the kinds of things that you can check. You can check more things than you might have thought would be possible if you do a little bit of work on the back end. So for example, all of the code that I have the model write, it runs the code and sees whether the answer is the correct answer. Or in some cases, it runs the code, feeds the output to another language model, and the language model judges whether the output was correct. And again, is using a language model to judge here perfect? No. But like, what's the alternative? The alternative is to not do it. And what I care about is just, is this thing broadly useful for the kinds of questions that I have? And so as long as the accuracy is better than roughly random, like, I'm okay with this. I've inspected the outputs of these, and like, they're almost always correct. If you ask the model to judge these things in the right way, they're very good at being able to tell this. And so, yeah, I probably think this is a useful thing for people to do.Alessio [00:43:04]: You complain about prompting and being lazy and how you do not want to tip your model and you do not want to murder a kitten just to get the right answer. How do you see the evolution of like prompt engineering? Even like 18 months ago, maybe, you know, it was kind of like really hot and people wanted to like build companies around it. 
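The run-and-check idea Nicholas describes, executing the model-written code and comparing its output against a known answer, is simple to sketch (an illustrative version, not his actual harness; the LLM-as-judge variant would replace the string comparison with another model call):

```python
import subprocess
import sys
import tempfile

def check_generated_code(code: str, expected_stdout: str) -> bool:
    """Run model-generated Python in a subprocess and compare its
    stdout with the expected answer -- the simplest automatic check."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip() == expected_stdout

# A model-written snippet that should print the sum 0+1+...+9.
print(check_generated_code("print(sum(range(10)))", "45"))
# True
```

Running in a subprocess with a timeout also contains runaway or crashing generations, which matters when you benchmark untrusted model output in bulk.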
Today, it's like the models are getting good. Do you think it's going to be less and less relevant going forward? Or what's the minimum valuable prompt? Yeah, I don't know.Nicholas [00:43:29]: I feel like a big part of making an agent is just like a fancy prompt that like, you know, calls back to the model again. I have no opinion. It seems like maybe it turns out that this is really important. Maybe it turns out that this isn't. I guess the only comment I was making here is just to say, oftentimes when I use a model and I find it's not useful, I talk to people who help make it. The answer they usually give me is like, you're using it wrong. Which like reminds me very much of like that you're holding it wrong from like the iPhone kind of thing, right? Like, you know, like I don't care that I'm holding it wrong. I'm holding it that way. If the thing is not working with me, then like it's not useful for me. Like it may be the case that there exists a way to ask the model such that it gives me the answer that's correct, but that's not the way I'm doing it. If I have to spend so much time thinking about how I want to frame the question, that it would have been faster for me just to get the answer. It didn't save me any time. And so oftentimes, you know, what I do is like, I just dump in whatever current thought that I have in whatever ill-formed way it is. And I expect the answer to be correct. And if the answer is not correct, like in some sense, maybe the model was right to give me the wrong answer. Like I may have asked the wrong question, but I want the right answer still. And so like, I just want to sort of get this as a thing. And maybe the way to fix this is you have some default prompt that always goes into all the models or something, or you do something like clever like this. It would be great if someone had a way to package this up and make a thing I think that's entirely reasonable. 
Maybe it turns out that as models get better, you don't need to prompt them as much in this way. I just want to use the things that are in front of me.Alessio [00:44:55]: Do you think that's like a limitation of just how models work? Like, you know, at the end of the day, you're using the prompt to kind of like steer it in the latent space. Like, do you think there's a way to actually not make the prompt really relevant and have the model figure it out? Or like, what's the...Nicholas [00:45:10]: I mean, you could fine-tune it into the model, for example, that like it's supposed to... I mean, it seems like some models have done this, for example, like some recent model, many recent models. If you ask them a question, computing an integral of this thing, they'll say, let's think through this step by step. And then they'll go through the step-by-step answer. I didn't tell it. Two years ago, I would have had to have prompted it. Think step by step on solving the following thing. Now you ask them the question and the model says, here's how I'm going to do it. I'm going to take the following approach and then like sort of self-prompt itself.Swyx [00:45:34]: Is this the right way?Nicholas [00:45:35]: Seems reasonable. Maybe you don't have to do it. I don't know. This is for the people whose job is to make these things better. And yeah, I just want to use these things. Yeah.Swyx [00:45:43]: For listeners, that would be Orca and AgentInstruct. It's the SOTA on this stuff. Great. Yeah.Alessio [00:45:49]: That's few-shot. It's included in the lazy prompting. Like, do you do few-shot prompting? Like, do you collect some examples when you want to put them in? Or...Nicholas [00:45:57]: I don't because usually when I want the answer, I just want to get the answer.Swyx [00:46:03]: Brutal. This is hard mode. 
Yeah, exactly.

Nicholas [00:46:04]: But this is fine.

Swyx [00:46:06]: I want to be clear.

Nicholas [00:46:06]: There's a difference between testing the ultimate capability level of the model and testing the thing that I'm doing with it. What I'm doing is not exercising its full capability level, because there are almost certainly better ways to ask the questions and really see how good the model is. And if you're evaluating a model for being state of the art, this is ultimately what I care about. And so I'm entirely fine with people doing fancy prompting to show me what the true capability level could be, because it's really useful to know what the ultimate level of the model could be. But I think it's also important just to have available to you how good the model is if you don't do fancy things.

Swyx [00:46:39]: Yeah, I would say that here's a divergence between how models are marketed these days versus how people use them, which is that when they test MMLU, they'll do like five-shot, 25-shot, 50-shot. And no one's providing 50 examples. I completely agree.

Nicholas [00:46:54]: You know, for these numbers, the problem is everyone wants to get state of the art on the benchmark. And so you find the way that you can ask the model the questions so that you get state of the art on the benchmark. And it's good. It's legitimately good to know the model can do this thing if only you try hard enough, because it means that if I have some task that I want solved, I know what the capability level is, and I could get there if I was willing to work hard enough. The question then is, should I work harder and figure out how to ask the model the question? Or do I just do the thing myself? And for me, I have programmed for many, many years. It's often just faster for me to do the thing than to figure out the incantation to ask the model.
But I can imagine someone who has never programmed before might be fine writing five paragraphs in English describing exactly the thing that they want and having the model build it for them, if the alternative is not. But again, this goes to all these questions of: how are they going to validate it? Should they be trusting the output? These kinds of things.

Swyx [00:47:49]: One problem with your eval paradigm, and most eval paradigms (I'm not picking on you), is that we're actually training these things for chat, for interactive back and forth. And you actually, obviously, reveal much more information, in the same way that asking 20 questions reveals more information, in sort of a tree-search, branching sort of way. This is also, by the way, the problem with LMSYS Arena, right? Where the vast majority of prompts are single question, single answer, eval, done. But actually, the way that we use chat things, even in the stuff that you posted in your "how I use AI" piece, you have maybe 20 turns of back and forth. How do you eval that?

Nicholas [00:48:25]: Yeah. Okay. Very good question. This is the thing that I think many people should be doing more of. I would like more multi-turn evals. I might be writing a paper on this at some point if I get around to it. A couple of the evals in the benchmark thing I have are already multi-turn. I mentioned 20 questions; I have a 20-questions eval there just for fun. But I have a couple of others that are like, I just tell the model, here's my git thing, figure out how to cherry-pick off this other branch and move it over there. And so what I do is I basically build a tiny little agent-y thing. I just ask the model how I do it. I run the thing on Linux; this is what I want Docker for, I spin up a Docker container. I run whatever the model told me to run. I feed the output back into the model. I repeat this many rounds.
And then I check at the very end: does the git commit history show that it was correctly cherry-picked in?
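The multi-turn agentic eval Nicholas describes (the model proposes a shell command, the harness runs it, the output goes back to the model, and the final state is checked at the end) can be sketched roughly as follows. This is a hypothetical harness, not his actual benchmark code: `query_model` is a stand-in for a real model API call, and a real setup would execute each command inside a Docker container for isolation rather than running it directly.

```python
import subprocess

def run_agentic_eval(query_model, goal_prompt, check_cmd, max_rounds=8):
    """Multi-turn eval loop: ask the model for the next shell command,
    run it, append the output to the transcript, repeat, then verify
    the final state with a deterministic check command.
    (A real harness would run each command inside a Docker container;
    commands run directly here for brevity.)"""
    transcript = goal_prompt
    for _ in range(max_rounds):
        cmd = query_model(transcript)  # model proposes the next command, or None when done
        if cmd is None:
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        transcript += f"\n$ {cmd}\n{result.stdout}{result.stderr}"
    # Final check, e.g. "does `git log` show the commit was cherry-picked?"
    check = subprocess.run(check_cmd, shell=True, capture_output=True, text=True)
    return check.returncode == 0, transcript

# Scripted stand-in for a real model, for illustration only: it proposes
# one command, then signals it is done by returning None.
steps = iter(["echo hello > agentic_eval_demo.txt", None])
ok, log = run_agentic_eval(
    lambda _transcript: next(steps),
    "Create agentic_eval_demo.txt containing 'hello'.",
    "grep -q hello agentic_eval_demo.txt",
)
print(ok)  # whether the scripted commands produced the desired final state
```

The design point is that grading happens on the *end state* (the grep, or a git-history check), not on the model's prose, which is what makes the eval robust across many turns.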
Today we walk through all of the cuts and releases, plus the folks who landed on the PUP list or season-ending IR. After all that fun stuff, we get back into the auction draft we've been riffing about, plus some WR rankings/targeting updates. Here's a look at Jay's team; I sorted by nomination order. As you can see, Jay did OK, maybe better than OK, but he could have crushed the room. He was in that money position where you can dominate, if you have all of your targeting information at your fingertips. Narrator: "Jay did not have his key targeting information at his fingertips."
As promised, today we walk through the first round as well as yesterday's news. I'll be back later today with some small individual pods dedicated to each draft slot, and tomorrow, I'll be dropping the Deep Sleepers pod, which will be both fun and actionable for those in deeper leagues.
Today we break down a lot of news from the weekend with appropriate contextualization. I'll be back later today with a look at the WR board through the prism of ADP. It's fantasy football season!
This pod goes long. Hence the late posting time. Today we go through some news and the top 15 or so tight ends, and how we can draft the position as a whole. At the end I spend about 20 minutes doing a simulated mock draft on Fantasy Pros, just to add some spice and context. See you all tomorrow morning!
More Brandon Aiyuk stuff today with some other important injury news and some QB talk. Players discussed: Puka Nacua Kyren Williams Blake Corum Caleb Williams Darnold / McCarthy
What was supposed to be RB Day has morphed into a BIG NEWS day, so we pivot. Today we get into the big trade news, Puka's knee, Jordan Addison's situation and a lot more.
Today we have 30 minutes of camp news to dissect. There is some good stuff in here! See you all on Monday morning! I may have a bonus pod over the weekend as well.
In today's episode we look at a round-table discussion found in 'Senkyogaku Readings', the same collection from which the article 'The Sermon Contextualized to Japan', discussed in the previous episode, was taken. Again, the book can be purchased from the RAC Network store here: https://rac-network.com/?p=563
What would a distinctly Japanese sermon sound like? What can missionaries adjust to make their sermons less Western and more in tune with the Japanese mind? In today's episode I look at Mitsuo Fukuda's entry 'The Japan-Contextualized Sermon' from Missiological Readings (宣教学リーディングス), published by RAC Network (www.rac-network.com). Fukuda Sensei is a longtime Bible scholar and missiologist, and has written much on the topic of the Japanese contextual church. His book 'Developing a Contextualized Church as a Bridge to Christianity in Japan' is available on Kindle or paperback and is highly recommended.
Healthcare has always relied on data. What's changed is the explosion of data in healthcare and the availability of this data to clinicians as well as a whole host of healthcare professionals. Bringing context and meaning to this vast amount of data including unstructured health data is going to be key for every healthcare organization. We sat down with Dr. Paulo Pinho, Chief Medical & Strategy Officer at Discern Health, and Dr. Tim O'Connell, Co-founder and CEO at emtelligent, to learn more about what they're doing to contextualize data and improve processes for providers, payers, and researchers across even the most complex use cases. Learn more about emtelligent: https://emtelligent.com/ Learn more about Discern Health: https://discernhealth.ai/ Health IT Community: https://www.healthcareittoday.com/
Join Native Nevadan and visual artist Nick Larsen in a captivating episode on @kwnk97.7 as he discusses his solo exhibition Old Haunts, Lower Reaches, currently featured at the Nevada Museum of Art. Joining him for the interview are two friends from Santa Fe, where Nick currently resides: art writer and podcaster Chelsea Weathers and writer Jenn Shapland (whose latest book Thin Skin is available now at your favorite independent bookstore). In conjunction with the interview, Nick curated an hour-long playlist, I Want to Live on An Abstract Plain, evoking a drive to the Nevada ghost town Rhyolite, the subject of some of the work in Nick's exhibition. Listen to the playlist HERE. More on Nick Larsen and the exhibition Old Haunts, Lower Reaches (on view Jan 20 - July 7, 2024 at the Nevada Museum of Art): Old Haunts, Lower Reaches is an exhibition of new work by Nick Larsen (b. 1982) that excavates history, possibility, identity, and place. Composed of layered collage pieces, textile-based architectural models, and image projection, the exhibition explores what is present and visible in the desert landscape and, perhaps more importantly, what isn't. Influenced heavily by the artist's experience working for an archaeological firm focused on the Great Basin region, research for Old Haunts, Lower Reaches began when Larsen discovered a fading layer in the history of the ghost town of Rhyolite, Nevada. Rhyolite (located thirty miles from Death Valley National Park) served, at one point, as the proposed site for a planned queer community, Stonewall Park, envisioned by two men from Reno in the 1980s. Contextualized by the history of Rhyolite, Stonewall Park, and his own life, Larsen speculates pasts, presents, and futures for this desert locale.
In the words of the artist, “The desert is an environment defined by what it lacks, its bleakness an invitation to project possibilities for both what could have been and what might be on what is often perceived as empty.” Repurposing materials to create his layered collages and sculptures, Larsen's speculative practice also serves as a kind of “making do,” using what is at hand to give form to an invisible history or an unattainable future. Nick Larsen was raised in Northern Nevada and currently lives in Santa Fe, New Mexico. Listen in on April 20th at 9am on KWNK 97.7FM to explore how art and music intertwine with Nick Larsen.
How the DeltaV Edge Environment enables greater use of operational data scattered across various systems and software applications to help you improve overall performance in safety, efficiency, reliability, and sustainability.
Michael hosts Christian missionary Julie as she facilitates a conversation on Jesus Christ
Listen to this engaging interview with Cheryl Lawther, who talks about why the Research Handbook on Transitional Justice (Edward Elgar, 2023) is one of the most widely used books in the field of transitional justice. The second edition brings together scholarly experts to reconsider how societies deal with gross human rights violations, structural injustices and mass violence. Contextualized by historical developments, the Research Handbook covers a diverse range of concepts, actors and mechanisms of transitional justice, while shedding light on the new and emerging areas in the field, such as counter-terrorism, climate change, colonialism and non-paradigmatic transitions. As a co-editor, Cheryl engages with Lavinia, who wrote one chapter in each edition, revealing a personal view on this important reference tool. Lavinia Stan is a professor of political science at St. Francis Xavier University in Canada. Learn more about your ad choices. Visit podcastchoices.com/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Highlights from this week's conversation include:Amr's extensive background in data (3:23)The evolution of neural networks (9:21)The role of supervised learning in AI (11:17)Explaining Vectara (13:07)Papers that laid the foundation for AI (15:02)Contextualized translation and personalization (20:07)Ease of use and answer-based search (25:01)AI and potential liabilities (35:54)Minimizing difficulties in large language models (36:43)The process of extracting documents in multidimensional space (44:47)Summarization process (46:33)The danger of humans misusing technology (54:59)Final thoughts and takeaways (57:12)The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
JR Rife - Author, Rocker, Theologian, and Modern Viking - engages in a variety of topics, ranging from Biblical to Heavy Metal to anthropology in this eclectic podcast.
Mike, Seth, & Tommy dive into another amazing article by Brent Dykes that gets at the heart of why we visualize data in the first place: if we cannot provide context, then what are we really showing? Using comparative, historical, and other techniques, we can transform our reports. https://www.effectivedatastorytelling.com/post/contextualized-insights-six-ways-to-put-your-numbers-in-context Get in touch: Send in your questions or topics you want us to discuss by tweeting to @PowerBITips with the hashtag #empMailbag or submit on the PowerBI.tips Podcast Page. Visit PowerBI.tips: https://powerbi.tips/ Watch the episodes live every Tuesday and Thursday morning at 730am CST on YouTube: https://www.youtube.com/powerbitips Subscribe on Spotify: https://open.spotify.com/show/230fp78XmHHRXTiYICRLVv Subscribe on Apple: https://podcasts.apple.com/us/podcast/explicit-measures-podcast/id1568944083 Check Out Community Jam: https://jam.powerbi.tips Follow Mike: https://www.linkedin.com/in/michaelcarlo/ Follow Seth: https://www.linkedin.com/in/seth-bauer/ Follow Tommy: https://www.linkedin.com/in/tommypuglia/
On this episode of Pints and Perspectives, the guys welcome their friend and fellow theologian Andre Franklin while drinking Sisyphus from Real Ale Brewing. Grab a beer and join us for a conversation around folk spirituality, contextualized theology, and embodied faith. Happy listening! If you would like to partner with us financially, we would be honored; you can do so here: https://mywellhousechurch.churchcenter.com/giving
Our Socials:
WellHouse Church
Website: mywellhouse.church
Instagram: @mywellhouse.church
Facebook: @mywellhouse.church
Youtube: Wellhouse Church - https://www.youtube.com/channel/UC1Ls...
Pastor Cullen
Instagram: @Cullenjware
Adam Chaney
Instagram: @chaney_aj
Our Identity: WellHouse Church is a dream of half a dozen people who love Jesus and were hurt by the Church. We envision something different. We want to be something different. We want to be a network of house churches that spends money meeting needs rather than paying for a building. Join us on our journey at our website (which needs to be revamped, yikes.) to connect with us. You can find all of our content below!
Youtube: https://www.youtube.com/@wellhousechurch1081
Pints & Perspectives Podcast: https://feeds.captivate.fm/pints-perspecitves/
Practicing Presence Podcast: https://feeds.captivate.fm/practicing-presence/
Let's Talk Podcast: https://feeds.captivate.fm/lets-talk-podcast/
A Closer Look Podcast: https://feeds.captivate.fm/a-closer-look/
True Confessions with Lisa and Sarah has been on hiatus for a while, but we are excited to be back in the Confessional with the one and only Bill Bolden! We break down his January SLP Summit presentation, the mixed reaction to his live giveaway during his course, and the relationship between presenters who both speak and have products to sell. We also dive into how to use a cycles approach to teach grammar, so we're not spinning our wheels in therapy. This approach can be used for all ages, so join us for this must-listen podcast! Resources: Ukrainetz, T. A. (Ed.). Contextualized language intervention: Scaffolding preK-12 literacy achievement (pp. 145-194). Austin, TX: Pro-Ed, Inc. Cleave, P. L., & Fey, M. E. (1997). Two approaches to the facilitation of grammar in children with language impairments: Rationale and description. American Journal of Speech-Language Pathology, 6(1), 22-31. https://doi-org.proxy.library.kent.edu/10.1044/1058-0360.0601.22 Clip art mentioned: https://www.teacherspayteachers.com/Store/Kari-Bolt-Clip-Art Mycutegraphics.com Kari Bolt clipart Two Models of Grammar Facilitation in Children With Language Impairments SNAP - Strong Narrative Assessment Procedure
In today's sermon, Pastor Mark preaches from Acts 14:8-18 and explains how the Gospel, when taken out of context, becomes a gospel that really isn't the Gospel anymore.
While serving as the National Coach of the Dutch Federation, Laurent Meuwly has taken several sprint athletes and relay teams to international stardom. Prior to arriving at Papendal, Coach Meuwly worked with the Swiss Athletics federation, where he coached European champions like Lea Sprunger and Ajla Del Ponte (who he still works with today). In this episode, Coach Meuwly shared some of his non-negotiables when it comes to training and recovery, particularly for the longer sprint events. Follow Laurent: -https://www.instagram.com/laurentmeuwly -https://twitter.com/LaurentMeuwlyThis podcast is supported by Output Sports, use the promo code COLMBOURKE10 for 10% off: https://buy.stripe.com/6oE3ck2Ex7BB1UcdR7Support the show
Travis is back for another film review edition of the Drive Time Podcast. Today, we'll look at the Patriots game through the lens of the film, the key stats and snap counts played in the game. Plus, Mike McDaniel's Monday afternoon presser.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we'll perform the autopsy on the 26-20 loss to the Packers by looking at the film and key stats. Plus, injury updates and Mike McDaniel's Monday media availability.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we'll look at the loss in Buffalo from the key numbers and what the tape tells us. How Miami's defense bounced back after a slow start, how Tua and the offense found its footing and the encouraging aspects of this tape going forward. Plus, the key stats, snap counts and commentary from Mike McDaniel and his Monday afternoon presser.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we open up the film room and break down the loss on Sunday night going over the offensive and defensive tape. Coach called it a frustrating tape and it's easy to see why. Plus, the key stats, snap counts, and Mike McDaniel's Monday press conference highlights.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we go into the film room from Sunday's game in San Francisco. Offense and defense review of a frustrating tape. Plus, the key stats, snap counts, and Mike McDaniel's Monday press conference.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive, film review edition of the Drive Time Podcast. Today, we'll examine the Dolphins 30-15 win over the Houston Texans by taking a look at each play on tape, telling you what stands out – including Tua Tagovailoa's subtle nuance – and much more. Plus, key stats, snap counts and Mike McDaniel's Monday media availability highlights.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive film room edition of the Drive Time Podcast. Today, we'll break down the Dolphins win over the Browns by looking at each play on tape, the key stats, the season rankings and snap counts. Plus, Mike McDaniel's Monday media availability.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we're dissecting the film from the 35-32 win in Chicago, looking at the key stats from the game as well as the snap counts, and hearing from Head Coach Mike McDaniel and his Monday afternoon presser.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast to break down the Dolphins 31-27 win in Detroit. We'll break down the film and give you all the intricacies from the win, look at key stats, the league leaderboard and snap counts, and hear from Head Coach Mike McDaniel.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast. Today, we go under the hood of Miami's 16-10 win over Pittsburgh by examining the tape, the key stats, league leaderboards, snap counts and we hear from HC Mike McDaniel.See omnystudio.com/listener for privacy information.
Travis is back for another film review edition of the Drive Time Podcast. Today, we look at the loss to the Vikings from the perspective of the all-22, the key stats and snap counts. Plus, Mike McDaniel updates us on the latest with his Monday media availability.See omnystudio.com/listener for privacy information.
Travis is back for another edition of the Drive Time Podcast. Today, we go into the film room and discuss the aftermath of the Dolphins loss to the Jets. We'll evaluate the tape, look at the key numbers and snap counts and hear from Head Coach Mike McDaniel at his Monday news conference.See omnystudio.com/listener for privacy information.
Travis is back for another deep dive edition of the Drive Time Podcast as we look at the aftermath from the Week 4 loss in Cincinnati. We'll review the tape, the key stats, the snap counts as well as hear from Head Coach Mike McDaniel and a great message from QB Tua Tagovailoa.See omnystudio.com/listener for privacy information.
Travis is back for another edition of the Drive Time Podcast. Today, we break down the victory over the Bills with an extensive film re-watch, break down the key stats and snap counts and hear from Mike McDaniel on his quarterback's emergence and the veteran leadership of Terron Armstead and Xavien Howard.See omnystudio.com/listener for privacy information.
Travis is back for another edition of the Drive Time Podcast. Today, we'll break out the microscope and examine Sunday's thrilling win by breaking down the entire game tape, telling you about the key stats, advanced metrics and snap counts in the game, and we'll hear from Head Coach Mike McDaniel on the offensive production and the confidence instilled in him by quarterback Tua Tagovailoa and the entire offense.See omnystudio.com/listener for privacy information.
Travis is back for the first all-22 review of the season. Today, we'll take a look at the tape and break down the positives and opportunities from Week 1. Plus, we'll detail the snap counts and key stats. We'll hear from Coach Mike McDaniel and we'll scan the social.See omnystudio.com/listener for privacy information.
Dr. Nicole de Paula has been globally connecting policymakers and researchers for more than a decade to create a public understanding on key issues related to sustainability and public health. As a Planetary Health advocate, she champions the socioeconomic advancement of women through environmental conservation. She is the founder of the Women Leaders for Planetary Health and in 2019, she became the first awardee of the prestigious Klaus Töpfer Sustainability Fellowship at the Institute for Advanced Sustainability Studies in Potsdam, Germany. Nicole is the author of the book “Breaking the Silos for Planetary Health: A Roadmap for a Resilient Post-Pandemic World.” Learn More about Nicole. Learn more about The Passionistas Project. FULL TRANSCRIPT:

Passionistas: Hi, and welcome to the Passionistas Project podcast, where we talk with women who are following their passions to inspire you to do the same. We're Amy and Nancy Harrington. And today we're talking with Dr. Nicole de Paula, who has been globally connecting policymakers and researchers for more than a decade to create a public understanding on key issues related to sustainability and public health. As a planetary health advocate, she champions the socioeconomic advancement of women through environmental conservation. She's the founder of Women Leaders for Planetary Health and in 2019, she became the first awardee of the prestigious Klaus Töpfer Sustainability Fellowship at the Institute for Advanced Sustainability Studies (IASS) in Potsdam, Germany. Nicole is also the author of the book “Breaking the Silos for Planetary Health: A Roadmap for a Resilient Post-Pandemic World.” So please welcome to the show, Dr. Nicole de Paula.

Nicole: Hi, Nancy and Amy. Thank you for having me.

Passionistas: What's the one thing you're most passionate about?

Nicole: I think recently it's definitely planetary health.
Uh, we've been advocating so much, and at the beginning, the term "what is planetary health?" sounded like a horoscope thing, right? So it was a term that sounded a bit weird, and in some languages it doesn't translate well. I think in German, for example, it's hard to translate, in Portuguese as well. I'm from Brazil, so, uh, it was also a bit funny. But it is definitely the topic that we should be talking about, especially now, when we need to recover, hopefully, from this pandemic.

Passionistas: So tell us what planetary health means and how it relates to what you do for a living.

Nicole: Yeah. So maybe what I do, my background: I tend to say I'm a fake doctor, right? I have a PhD in international relations, so I'm not a medical doctor, but I've been talking a lot with public health experts. It's quite an interesting exercise. And so planetary health, uh, from my perspective, is a very interesting narrative of things that decision makers should be talking about or acting on. So it's basically everything. The planet is changing, right? We say that the planet is sick, with all the climate change impacts, biodiversity loss, pollution. You know, we don't know anymore what we have in our foods, so many chemicals there, processed food, you know, and crises. We used to have a big problem, of course, with hunger, and now half of the population is obese. So of course we're changing our lifestyles, and the way the planet is changing, the way that we are impacting our planet, these anthropogenic impacts, as we say, are impacting public health. So the question, normally, is: what is health at the end of the day, right? Is it everything that is inside our bodies, just this small system? Or should we talk about health connected to the health of our planet?
So planetary health is a scientific discipline, or, um, not quite a discipline; there is discussion, I think, about whether it started as a discipline, but let's say it's an approach, a new area of studies, call it that. I think many researchers were already discussing sustainability connected to human health. So again, it's very simple: it's just trying to connect sustainability to public health policies. And on the science side, scientists are trying to understand how exactly climate change impacts, you know, human health. We have heat waves that impact, you know, the most vulnerable in cities, uh, so we're trying to measure that. That's not exactly what I do; other people do the modeling. But in the end, we need to communicate and inform decision makers in this field and say, what do we do about it? And that's what I'm passionate about: how do we take the science and bring it to the people who can make these decisions? It's of course not an easy thing, especially these days, but we keep trying.

Passionistas: You mentioned COVID. Talk about the relationship between COVID and planetary health. How is it affecting the world and the planet?

Nicole: Yes. COVID, as is sometimes mentioned, and as I note in the book... of course, it's a very bad thing, but every crisis brings an opportunity; that's the sad reality. If we need change, we probably learn through love or pain, right? It's very hard to change behavior if you don't have a big crisis, and COVID is now, I think, stimulating this conversation about, okay, what exactly are the connections? Is it just a sanitary thing, the disease? But what we're learning now, and trying to communicate (actually, I think a lot of people have been trying to communicate this before), is that the way we are transforming our environment, deforestation for example, is, uh, increasing the chances of this contact with new viruses. So, for example, illegal wildlife
Trading, you know: you're bringing species to different places, and because the world is so connected, if you have a new disease, in three days the whole world is contaminated. So COVID is really showing that we need to connect the dots more between these issues. With biodiversity conservation, you know, there is a link with zoonotic diseases, when pathogens from animals jump to humans. We still don't have definitive answers about how exactly COVID emerged, but six out of 10 new diseases come from animals, you know, these zoonotic diseases. So we know that we are creating this possibility of increasing diseases. And climate change, for example, changes our natural ecosystems, so there are new mosquitoes where there wouldn't be, in Europe, for example, because of the climate. So we have a new ecology of these diseases that is important to understand and study; again, we have researchers doing that. So planetary health brings this conversation and links these points.

Passionistas: So let's take a step back. You talked about the fact that you're from Brazil. Tell us a bit about growing up there. When did you first become aware of these issues, and what inspired you to pursue this field?

Nicole: Of course. I mean, I think I always wanted to... I remember as a, let's say, teenager, at the time you need to decide about university, I was between two things. I think I love studying, I love learning, so it doesn't matter what it is. People say, oh, what's your favorite subject? I liked everything. Uh, in the end I turned out to be better at humanities than other things, but I was still, at some point, very good in chemistry, very good in math, some parts of physics. So I wish I had kept those talents; I find they would be great for calculating or modeling these
Days, which I don't feel they're very capable, but I enjoyed, uh, learning and, and, and I enjoyed traveling. So that was a big thing. So I think, you know, if you're uncomfortable in new places. So for example, from Brazil, I remember going to Portugal at early age and I didn't enjoy so much because it was so similar. To Brazil. And I think nowadays I would think, uh, differently because it's a fantastic city in LIBO, for example, it changed so much, but the traveling part was inspiring. And so I was trying to find things, you know, what is, what can I do that unite all this many disciplines that I enjoy and, and traveling. So I initially, um, I also was very good at debating, especially my family. If I wanted something I would debate until they were tired. So it was, uh, some people found that of course, very annoying, but they thought would be, I would be a good lawyer. Right. So I thought about it. And in the end I found this brochure, that's saying, oh, international relation. It was a new course at that point, you know, remember also globalization and all this. So that's something we have a very, of course at the university of Sao Paulo is let's say top university in Brazil, depending on the subject, but is very, uh, important center, but they didn't have international relations when I was applying for it. So there was another univers. The head leading that in Sao Paulo and from Sao Paulo. And so I joined that and started doing international relations, but at that point, nobody knew what do you do with international relations? Right? It just, and in the first year it was, it was actually the time when the United States. Was not ready to sign or, you know, was withdrawing from the Coda protocol, which is the whole, the initial agreement, uh, in the whole climate sphere. So as a student in political science, I was like, why, if it's such a good thing for the planet, why we have the biggest power saying that they don't wanna agree with this? 
You know, it's good for the planet. So that's how I entered the climate diplomacy conversation; I entered the sustainability sphere through the political perspective. From then on, I spent a lot of time understanding how countries negotiate treaties, first climate, then biodiversity. Quite quickly I was able to move to France, because my university had an agreement, and I started studying a lot from the perspective of the European Union, which is a whole other thing: an entire region negotiating internally to reach a global position. All of that is endless, and it was fascinating, but I tended to focus on the sustainable development aspects. And with Rio, Brazil is a very important country for sustainable development. The Amazon has always been on the agenda, we have immense natural resources, and we are among the most megadiverse countries, so Brazil has been very important in these negotiations. That's how I started my academic life. There was no specific moment; I had aha moments for other things later, but here I just really enjoyed the disciplines. And that connects to what we say about planetary health: it's really about multidisciplinarity. Whatever we do, we need to unite disciplines. International relations was always a collection of disciplines: you did economics, law, sociology, even theology and linguistics, and you had to make sense of all of it. So from an early age I was comfortable navigating multidisciplinary systems, which today is very useful, because you're not there to protect a discipline; you're free to have this dialogue, which is so, so important.

Passionistas: So tell us about some of the fellowships that you've done through the years, like the one at the International Institute for Sustainable Development.
Passionistas: What was your work like there?

Nicole: The International Institute for Sustainable Development is a think tank, and it was through that organization that I could actually be in the practice of sustainability, tracking sustainable development in real time. You go to all these negotiations at the UN and try to understand the countries' positions and why they hold them. It's a lot of intelligence work; the end product is reports that inform, in a very succinct way, what countries are doing. But you need the whole background, so most of the people there were doing their PhDs, or at least a master's, on one of the specific negotiations. We were part of a global team tracking this, usually also connected to our academic research. This was during my PhD years, and I got to know, I don't know, almost 60 countries. It gave me a lot of perspective on what people think, because a solution in Europe is not a solution in Africa, is not a solution in Latin America. That's why it's so slow and so difficult: of course we need global solutions, but you still need to contextualize them. So it was very challenging, but that's what I did there: tracking sustainability in practice at the UN level.

Passionistas: And as we mentioned in our intro, in 2019 you became the first awardee of the Klaus Töpfer Sustainability Fellowship. So tell us about that period and what that experience was like.

Nicole: That's a very recent experience, and it's one of my favorites, because it gave me a lot of freedom to follow my passion. As I usually say, it's rare to have time and money together: either you have time and no money, or money and no time. This fellowship is really dedicated to letting people do their projects and elevate them. Klaus Töpfer is the former environment minister of Germany and was also the head of the United Nations Environment Programme. So he was someone who was doing politics in Germany but also moved to Kenya as the head of a large organization, and he had to understand those compromises, how it works: Africa is not the same as Germany. He is a very influential public figure, and together with, I think, a few Nobel laureates, he founded this institute in Potsdam. I had a lot of intellectual freedom there, and I could develop the book "Breaking the Silos for Planetary Health"; if you don't have time to sit down and write, you never finish. I could also support Brazil in a large global planetary health event together with Harvard University, which was fantastic for expanding the field of planetary health in Latin America, because one of the things I try to say is that there's no point in having the planetary health conversation only in Australia, Europe, and North America; we need to bring it to the Global South. And I could found the social enterprise Women Leaders for Planetary Health, which has opened so many doors for my work today. So I really enjoyed that, with very supportive colleagues and directors; it was a really fun time in my career, and I'm very thankful for it. These things came at the right time, and I think I used the opportunity. Then COVID came, and professionally that was, in a way, good timing, because I was talking so much about health and sustainability. Unfortunately, you need a crisis to push these things; it's a sad reality, but from that perspective it was a good moment to talk about this.

Passionistas: Talk us through what you do. You connect policy makers and researchers. So what is that process? What's your day like?

Nicole: Well, that's funny, because I don't have a routine. I think it's only in the last two weeks that I'm having more of a routine in my life, and I'm almost 40, so I enjoy that. I worked hard to get a lot of flexibility in my work life, so I have absolutely no routine; every day is different, and with the pandemic it became a different world, where we could do so much virtually. But before, when I was tracking sustainable development negotiations for IISD, every meeting was in a different country. I would be in the desert one week and literally in the Arctic the next, packing for the north of Finland and for Dubai. It has been very hectic, but I enjoyed it. It's definitely not a common existence, especially for women: people expect you to have a traditional family life, and I always refused, in a way, and said no, it's really exciting not to have that routine; that's not what I want. And if you travel that much, you're also connecting with people around the planet, which facilitates the work. If you're gathering intelligence, you have to see what one country is thinking and what the others are thinking. For example, for my PhD, I spent five years doing research on the strategic partnership between Brazil and the EU on specific agreements, and things kept evolving, so I needed to track that. This connecting happens first through research, because you have to inform, publish, and build the knowledge. But once you're working with these organizations, you're also transferring that knowledge, or trying to; it's not just an academic exercise. If you're working with think tanks, you do round tables and other events, and it's more the networking part: exchanging, or the phrase I like here, cross-pollinating knowledge across disciplines and institutions. So that's a lot of what I do. It's not a clear-cut thing: you do your research like a political scientist, with a lot of interviews; the method is participant observation, you are in the process, not only reading, because what is published in the end is not necessarily what was actually happening. There's so much in politics that cannot be published. That's why these personal connections are so important: you need trust from these individuals to get the information. I think it's a very important talent, this personal diplomacy, trust building, networking in many countries. And I moved around: I lived in France, then Thailand, then Canada, then Washington, DC, and more time in Brazil, of course, and now I'm in Italy. At some point it gets tiring with the bureaucracy, the visa things, but apart from that it's fascinating, because you adapt, and I think that's what the world needs today, right?
We all had to adapt so fast, but honestly, when the lockdown came, I just felt that this was my regular life, and that everybody could finally understand that we can do so much online, so much virtually. A lot of the tracking of the negotiations we did virtually, and since 2012 I have worked like this, with Slack and all these chat functions, with people around the world whom I never met. So ten years later, the world figured out that it is possible; we don't need to fly across the world to have a one-on-one meeting. That's absolutely insane.

Passionistas: We're Amy and Nancy Harrington, and you are listening to The Passionistas Project Podcast and our interview with Dr. Nicole de Paula. To learn more about Women Leaders for Planetary Health's mission to empower women to lead planetary health solutions at the frontlines of development in the Global South, visit WLPH.org. We'd like to take a moment to invite you to the third annual Power of Passionistas summit, September 21st through 23rd, 2022. The three-day virtual event is focused on authentic conversations about diversity, equity, and inclusion. This unique gathering of intersectional storytellers and panelists harnesses the power of our rich community of passionate thought leaders and activists to pose solutions to the problems plaguing women and non-binary people. Early bird tickets are on sale now through August 21st for just $99 at ThePassionistasProject.com, so be sure to register before the special discount rate ends. We'd like to thank our sponsors: Melanie Childers Master Coach, Graceful Revolution, The OSSA Collective, Tea Drops, Aaron's Coffee Corner, Flourishing Over 50, Espinola Real Estate Team, Sarah Finns Coaching, Tara McCann Wellness, Aspira Public Affairs, and TrizCom Public Relations. Now here's more of our interview with Nicole. Did you miss traveling, though, as someone who likes to be on the go?
Nicole: Exactly, that's a very interesting question. The good thing is I traveled so much that I feel a bit satisfied with the places I've been, although it's never enough; if you like traveling, you can always go again, learn more, spend more time. At the beginning, the lockdown was fine, because you could still produce and write, and I used my time a lot for the writing. What I miss is the ease, the facility of just going. Now if you're in Italy and have to go back to Germany, it feels like you're going to another continent. And that's the sad part: if you have family abroad, there's the worry that you might need to travel fast, and not everyone has the same advantages or is treated equally; in the end, the most vulnerable will always suffer more, because they will not have support. So I miss the easy connections to exotic places.

Passionistas: In 2019 you co-founded the Planetary Health Research Group. So tell us about that and about the mission of that organization.

Nicole: This group is hosted by the University of São Paulo in Brazil, at its Institute for Advanced Studies, founded together with Professor Antonio Saraiva, who is an absolute partner in crime in Brazil, and an amazing group of interdisciplinary researchers. Professor Saraiva and I met at the first meeting of the Planetary Health Alliance in Boston, hosted by Harvard. We met at a dinner in a museum, natural history I think, with penguins around us, so it was a very fun dinner, and we just connected. For many years we were discussing and going to these meetings; it's an annual meeting, and eventually Brazil got the right to host it, the first time the Planetary Health Alliance gave a developing country the right to host the conference. We were natural partners, already working directly together, so we decided to create an official center at the University of São Paulo, in its most interdisciplinary institute. This is growing now; I'm an affiliated co-founder, and Professor Saraiva, a very senior professor there, is really leading it. It's fascinating because it's not something that belongs only to the University of São Paulo; it belongs to Brazil, because we have many partners, with people from all regions, and as you know, Brazil is a very, very big country, so it's really well distributed now. Even for me it's fascinating: when you go to meetings, you hear all the different accents from Brazil. If you stay in your bubble, you don't even listen to different voices, and if you're advocating for diversity in decision making, it starts there; we have to have people from different regions. So it's growing, and we successfully hosted the conference last year, in April I think. We had 5,000 people register, from 130 countries. It was the first time it would be held in Brazil, but with the pandemic it had to be online, and we really took the opportunity to make it inclusive: these conferences would usually gather around 400 people, and we could bring it into the homes of people in 130 countries. And that's why I also like to talk about the planetary health movement.
It's a scientific thing, but if you don't talk about it, people don't get excited and don't want to do things. The planetary health movement, as a social movement, is very important as well, and I think we've done quite well: there are now new programs of young ambassadors at different universities, and they're doing things. It's about inspiring others to get to know the field, to apply it to their own topic of research and discuss it. So yeah, I'm very proud of that one; that's how I could help my own country explore the theme.

Passionistas: And in 2020 you founded Women Leaders for Planetary Health. So what is the mission of that organization?

Nicole: Yes, it started at the United Nations climate conference, COP25, in December, with the support of the institute in Potsdam. I really wanted to do something because I was doing so much on planetary health, but the gender dimension was rarely mentioned; I wasn't hearing about it, it was just an unknown issue. The mission is that we want to empower women to lead planetary health solutions in the Global South, simple as that. Because sustainability is a field full of women, but how many women are really leading solutions, or receiving funding to do their own thing? That's the challenge we have. So I wanted to focus on that discussion: first, to understand, if we empower women, what's the difference for planetary health? We're doing research on that, and there are many indications that you can accelerate the impact of sustainable development policies if women are empowered and able to take the lead and make a change. In food systems, for example, you can invest in agriculture and bio-based practices, but if women don't have land, if they're legally discriminated against and cannot produce their own things or adopt better practices, it's kind of useless. So we need to pay attention to many kinds of inequality, not only in income but also in opportunities. And that's why I wanted, again, to bring the planetary health conversation to low- and middle-income countries; I was really targeting that. In the first round, we created a digital academy, which, with the pandemic, was great, because everything could be digital, and we had more than 30 countries participating, including, let's say, hard-to-reach countries: we had people in Palestine, people from Sudan, from Zimbabwe, from Brazil and elsewhere in Latin America. All these women share the same problems, but also the same passion and the same solutions; they are doers. The narrative is really not to say, oh, women are suffering, it's so difficult, they're discriminated against. The point is how we empower them to do what they want to do, with the right resources and leadership. So we focused on leadership training sessions, with our wonderful Angela Field, who also supported us on that, while I was mostly focusing on the research part of planetary health. We write papers and do the research as well: how do climate and biodiversity connect to gender? So it was good to also have that conversation at the UN; that's how it started. And now we are a social enterprise, a legal institution in Germany, and I'm very excited to see how this is growing: we have a team in Brazil, things growing in Africa, things in Southeast Asia. Yeah, very excited.
That's, I think, the passion in our jobs. If you work with policy makers, it's not always fun; politics enters in the middle, things get delayed, and it takes time to drive change. But this is really the fun part of my work, because you see the results, including at the personal level. I think we underestimate how much we can help people with simple things, just supporting them with a letter. Besides the digital academy, we were also pairing individuals with senior mentors, so we had a mentorship program targeting women in low- and middle-income countries. And I heard so many stories afterward. At the beginning I thought, well, this is not a big deal, we're just helping a little bit. But when you later hear what they tell you, the decisions they took or the courage they found to do their own things, you get surprised and say, wow, we did this. That's very rewarding.

Passionistas: Can you tell us about maybe a success story, something that you've seen come through the organization?

Nicole: Yeah, what I saw a lot were these positive stories: oh, when I joined the program, I was a bit lost, I didn't know what to do with my career. And they felt empowered to take the decisions they already knew they would take, but they felt validated somehow: oh, I can do this. I heard many stories like this, whether they wanted to start a new master's program, change careers, quit a toxic job, or change industries and do more work on sustainability.
I saw a lot of this. And maybe at the end I can tell another story; I won't keep it a secret.

Passionistas: So for women who aren't full-time activists in this field, what can we do on a day-to-day basis to have an impact on the planet?

Nicole: This is a common question that we get: everybody wants to know how they can make the world a better place. But I like to call attention to another point. Yes, you can recycle, you can reduce meat consumption, and in terms of impact, changing your diet is the easiest and biggest impact you can have. It's not so simple to do, and it depends on where you live and on your culture and habits, but that's what the research shows: the biggest impact you can have is changing your diet, with less meat and more plants. There is something called the planetary health diet. It doesn't say you can never eat meat, but definitely we have to shift the quantity and the proportion of the things we are eating; as we know, we're not so healthy these days. So I would invite our listeners to Google "planetary health diet"; that's an interesting exercise. But what I like to think about, which is also hard, is the systemic part. Nobody will change completely on their own. What I'm trying to do is address the root causes of these problems. I don't think our five-minute, three-minute, or 60-second showers will do that. When we put the solutions on the shoulders of individuals only, we're not addressing the problem, we're masking it and delaying action, because what you really need is drastic change: you need to change trade rules, you need to change supply chains. It's not only one company; that company has thousands of companies involved in its business. So how do we do that? I'm more interested now in truly transformative systems for sustainability. Of course we have the UN Sustainable Development Goals, a plan for development that addresses many important questions. But it's very hard to disconnect one goal from the others, yet many institutions say, oh, I do SDG 2, or 4, or 5, I do gender. What I like to say is: if you don't do a little bit of everything, if you don't understand the connections, you're not doing much. Which is difficult, because capacity is limited, time is limited, resources are limited. We need to prioritize, use our best skills, maybe focus on what we can do best, but we need partnerships; nobody will do this alone. So on the individual question: yes, you can start with your house, then maybe influence your family and your building and keep expanding, but also try to educate yourself about these connections, because I see a lot of inconsistencies. Maybe they are young activists, saying use this or consume that, but they're using nail polish full of chemicals because it's cheaper, from, I don't know, another country. Try to understand the whole picture; I think that's the way we can have a bigger impact. And on women, let me just address that, because women need to support women, simple as that. For too long we've been hearing this narrative that women are difficult. Since a lot of positions of power are occupied by men, if you're a woman, you're maybe used to working for men, or serving the ideas that men have. And then if women want to do things, they're considered difficult or challenging. This is so typical, it happens every day, and it's getting tiring. I think women need to stop that and help each other, instead of making things worse for ourselves, because we already have a lot of challenges in life. It's just not acceptable that we are also struggling with other women. More cohesion, support, and solidarity would make life so much easier for all of us.

Passionistas: Thanks for listening to The Passionistas Project Podcast and our interview with Dr. Nicole de Paula. To learn more about Women Leaders for Planetary Health's mission to empower women to lead planetary health solutions at the frontlines of development in the Global South, visit WLPH.org. Please visit ThePassionistasProject.com to learn more about our podcast and subscription box filled with products made by women-owned businesses and female artisans to inspire you to follow your passions. Double your first box when you sign up for a one-year subscription. And remember to get your tickets to the third annual virtual Power of Passionistas summit, September 21st through 23rd. Early bird tickets are on sale now through August 21st for just $99 at ThePassionistasProject.com, so be sure to register before this special discount rate ends. And subscribe to The Passionistas Project Podcast so you don't miss any of our upcoming inspiring guests. Until next time, stay well and stay passionate.