Cope, stay informed. Look, I'm going to try to explain to you, as schematically as possible, all the offal, the garbage, and the political stench currently overwhelming Spain. One: protagonist of the day. The president of Castilla-La Mancha, Emiliano García-Page. This morning he spoke loud and very clear with Herrera here on Cope. He says there is no dignified way out of the labyrinth, that what Sánchez fears most is what has yet to come out and has not been published, and, pay attention, he says there are ministers who have recorded conversations with Sánchez. Verbatim. I get the impression that ...
Software Engineering Radio - The Podcast for Professional Software Developers
Will McGugan, the CEO and founder of Textualize, speaks with host Gregory M. Kapfhammer about how to use packages such as Rich and Textual to build text-based user interfaces (TUIs) and command-line interfaces (CLIs) in Python. Along with discussing the design idioms that enable developers to create TUIs in Python, they consider practical strategies for efficiently rendering the components of a TUI. They also explore the subtle idiosyncrasies of implementing performant TUI frameworks like Textual and Rich and introduce the steps that developers would take to create their own CLI or TUI. This episode is sponsored by Fly.io.
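For a flavor of the Rich side of that stack, here is a minimal sketch of rendering styled text and a table to the terminal. It assumes only that the `rich` package is installed; the table contents are made up for illustration:

```python
# Minimal sketch of Rich console rendering (assumes `pip install rich`).
from rich.console import Console
from rich.table import Table

# record=True lets us capture what was rendered, handy for testing.
console = Console(record=True)

# Styled text uses a BBCode-like markup.
console.print("[bold green]Hello,[/bold green] terminal!")

# Tables are built up column by column and row by row.
table = Table(title="TUI toolkits")
table.add_column("Project")
table.add_column("Purpose")
table.add_row("Rich", "rendering")
table.add_row("Textual", "full TUIs")
console.print(table)

# Export the recorded output as plain text.
text = console.export_text()
```

Textual builds a full application framework (widgets, layout, events) on top of this kind of rendering.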
Topics covered in this episode:
- pre-commit: install with uv
- PEP 773: A Python Installation Manager for Windows (Accepted)
- Changes for Textual
- The Best Programmers I Know
- Extras
- Joke
Watch on YouTube
About the show
Sponsored by NordLayer: pythonbytes.fm/nordlayer
Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: pre-commit: install with uv
Adam Johnson. uv tool works great at keeping tools you use on lots of projects up to date quickly, so why not use it for pre-commit? The pre-commit-uv extension will use uv to create virtual environments and install packages for pre-commit. This speeds up initial pre-commit cache creation. However, Adam recommends this flavor of using pre-commit because it's just plain easier to install pre-commit and its dependencies than via the official pre-commit install guide. Win-win. Side note: No, Adam, I'm not going to pronounce uv "uhv"; I'll stick with "you vee", even if Astral tells me I'm wrong.

Michael #2: PEP 773: A Python Installation Manager for Windows (Accepted)
via pycoders newsletter. One manager to rule them all: PyManager. PEP 773 replaces all existing Windows installers (the "traditional" .exe bundle, per-version Windows Store apps, and the separate py.exe launcher) with a single MSIX app called Python Install Manager (nicknamed PyManager). PyManager should be mainstream by CPython 3.15, and the traditional installer disappears no earlier than 3.16 (≈ mid-2027). Simple, predictable commands.
python → launches "the best" runtime already present, or auto-installs the latest CPython if none is found.
py → same launcher as today plus management sub-commands: py install, py uninstall, py list, py exec, py help.
Optional python3 and python3.x aliases can be enabled by adding one extra PATH entry.

Michael #3: Changes for Textual
Bittersweet news: the business experiment ends, but the code lives on. Textual began as a hobby project layered on top of Rich, but it has grown into a mature, "makes-the-terminal-do-the-impossible" TUI framework with an active community and standout documentation. Despite Textual's technical success, the team couldn't pinpoint a single pain point big enough to sustain a business model, so the company will wind down in the coming weeks. The projects themselves aren't going anywhere: they're stable, battle-tested, and will continue under the stewardship of the original author and the broader community.

Brian #4: The Best Programmers I Know
Matthias Endler: "I have met a lot of developers in my life. Lately, I asked myself: What does it take to be one of the best? What do they all have in common?" The list:
- Read the reference
- Know your tools really well
- Read the error message
- Break down problems
- Don't be afraid to get your hands dirty
- Always help others
- Write
- Never stop learning
- Status doesn't matter
- Build a reputation
- Have patience
- Never blame the computer
- Don't be afraid to say "I don't know"
- Don't guess
- Keep it simple
Each topic has a short discussion, so don't just read the bullet points; check out the article.

Extras
Brian: I had a great time in Munich last week. I gave a talk at a company event, met with tons of people, and had a great time. The best part was connecting with people from different divisions working on similar problems. I love the idea of internal conferences to get people to self-organize by topic and meet people they wouldn't otherwise, to share ideas. Also got started working on a second book on the plane trip back.
Michael:
- Talk Python Clips (e.g. mullet)
- Embrace your cloud firewall (example).
- Python 3.14.0 beta 1 is here
- Congrats to the new PSF Fellows.
- Cancelled: faster CPython https://bsky.app/profile/snarky.ca/post/3lp5w5j5tws2i
Joke: How To Fix Your Computer
The Rebbe responds to an editorial suggestion, defending his deliberate use of a less common version referencing food purity in Egypt. He explains that uncommon variants often reflect deeper intent and clarity, not error, and that changing them risks distorting the message—especially in light of contemporary challenges to kashrus in Eretz Yisrael. https://www.torahrecordings.com/rebbe/igroskodesh/015/009/5433
FULL SHOW: Thursday, April 10th, 2025. Get your 2nd Date Update Merch For A Cause HERE! Curious if we look as bad as we sound? Follow us @BrookeandJeffrey: YouTube, Instagram, TikTok, BrookeandJeffrey.com
3-30-25 - Biblical Literacy. Mark Lanier began a new series on Romans. Today's lesson combined how to study an Epistle with an introduction to the book of Romans.
1. How to study Romans, an Epistle, in 8 basic steps: historical context, literary context, textual analysis, paragraph exegesis, theological analysis, application, mixing in others, and constant ongoing reflection.
2. The occasion of the letter - Mark explains the historical context that resulted in the church in Rome consisting of both Jewish and Gentile Christians, with an appeal for unity.
3. The opening of Romans - Learn how Romans differs from Paul's other epistles in its introduction, which includes the author, recipients, and a greeting.
Points for home: God works through history. The Gospel is amazing news. You are loved, called, and forgiven.
Join us on the Resurrection Church Podcast as we dive into the fascinating textual history of John 5:4. In this episode, we explore the debate surrounding this verse—why it's missing from many modern Bible translations and what the earliest manuscripts reveal. Was it an addition by a scribe, or part of the original Gospel? We'll discuss the evidence, its implications for biblical reliability, and how understanding textual history can deepen your faith and study of Scripture. Perfect for anyone curious about the Bible's journey through time!
Inflation can squeeze profit margins and shake up even the strongest businesses—but only if you're unprepared. In this episode, we're breaking down how to protect your business from rising costs, shifting markets, and economic uncertainty. From pricing strategies to smarter spending, we'll cover real, actionable ways to stay profitable and resilient no matter what the economy throws your way.
Who is the Jesus of History? Who is the Jesus of the Gospels? And are they one and the same? Dr. Andreas Köstenberger returns to the Bible and Theology Matters podcast to continue our discussion about these issues and more. Dr. Andreas J. Köstenberger is the founder of Biblical Foundations. He is the Theologian in Residence and Director of the Equipping Center at Fellowship Raleigh Church. He is a certified Christian leadership coach (CCLC), and he has authored, edited, and translated over sixty books on a variety of biblical topics. Dr. Köstenberger has also served on the faculty of Southeastern Baptist Theological Seminary and Midwestern Baptist Theological Seminary. Dr. Köstenberger and his wife Marny live in Wake Forest, North Carolina, and have four adult children and two grandchildren. He is the author of "The Jesus of the Gospels" and a recently released abridged version, "Introducing Jesus: The Fourfold Gospel." The contents of these books are the subject matter of this program.
Who is the Jesus of History? Who is the Jesus of the Gospels? And are they one and the same? Dr. Andreas Köstenberger will answer these questions and more. Dr. Andreas J. Köstenberger is the founder of Biblical Foundations. He is the Theologian in Residence and Director of the Equipping Center at Fellowship Raleigh Church. He is a certified Christian leadership coach (CCLC), and he has authored, edited, and translated over sixty books on a variety of biblical topics. Dr. Köstenberger has also served on the faculty of Southeastern Baptist Theological Seminary and Midwestern Baptist Theological Seminary. Dr. Köstenberger and his wife Marny live in Wake Forest, North Carolina, and have four adult children and two grandchildren. He is the author of "The Jesus of the Gospels" and a recently released abridged version, "Introducing Jesus: The Fourfold Gospel." The contents of these books are the subject matter of this program. On the Bible and Theology Matters podcast, we discuss all things Bible and theology, because it matters! What you believe determines how you behave.
Q: My Bible says that John 7:53-8:11 isn't found in the earliest manuscripts, so why are we including this in the Bible that we say is the inerrant word of God?
Takeaways:
- Inerrancy is a complex issue, focusing on original autographs.
- Manuscript evidence is abundant but requires careful analysis.
- Textual criticism helps us understand variations in biblical texts.
- The ending of Mark and the story of the adulterous woman are key examples.
- Historical accuracy of certain passages can still hold value.
- Translation and tradition play crucial roles in understanding scripture.
- The Bible's reliability is not diminished by textual criticism.
- Faith and critical thinking are essential in biblical study.
- Understanding the context of scripture enhances its interpretation.
If you've got a question for Dr. Easley, call or text us your question at 615-281-9694 or email question@michaelincontext.com.
Shabbat Teaching at Temple Beth Am, Los Angeles, March 1, 2025, with Rabbi Adam Kligfeld and Rabbi Hannah Jensen (IKAR). (Youtube/Zoom) Special Guest: Rabbi Hannah Jensen.
In this lecture, Bible teacher Dave Bigler (founder of Iron Sheep Ministries) gives a basic overview of the history of our New Testament text: from the spreading of the early Gospel by word of mouth, to the writing down of our New Testament text on papyrus, to the formation of the New Testament canon of scripture. All this and more is covered in this one-hour lecture.
Outline:
01:38 - What are the top arguments against the validity of the Bible?
03:06 - We live in a culture of doubt.
03:49 - Overview of part 1 and part 2 of this lecture series.
04:44 - Knowledge is our greatest strength amidst a culture of doubt. Jude 10; Rom 12.2; Prov 15.14; Prov 23.12; Prov 1:7. Own your knowledge; if you don't know, find out. Pray for a hunger for knowledge.
06:59 - The goal: provide a basic, foundational knowledge of how our New Testament text passed from the pen of its original human author to your hands today.
07:14 - Outline for the lecture
08:50 - What does inerrant mean? Define inerrant: without error. God, through the Holy Spirit, inspired the original human author who put pen to paper (quill to papyrus). THAT original, also known as the "autograph," was without error. We do not have any of the original "autographs." We have copies; that is where textual criticism comes in. But let me be clear from the start: in all my research, all my schooling, all my studies, as much as I can be sure of anything, I am sure that this is God's perfect word for us today. Mat 24:35 - Heaven and earth will pass away, but my word will never pass away.
10:45 - The spreading of the early gospel, an oral tradition. The gospel spread, and the narratives about Jesus' life and teachings were repeated hundreds of thousands of times by reliable eyewitnesses simply by word of mouth. Mat 28.18-20; Acts 1.8
14:34 - When, why, and how was the text written down? When was the New Testament written?
17:18 - Why was there a gap between when Jesus lived and when the New Testament was written?
18:10 - Why was the New Testament even written down?
19:17 - What is the principle of immanence in Christianity? Heb 1.2, Matt 24.36, Mark 13.32
20:44 - How was the New Testament written? Parchment, papyrus, manuscripts, etc.
22:28 - What is a scribe?
23:44 - The canon of scripture. Who decided what books would be in the Bible? What does the word canon mean in relation to the Bible? What is canonization?
25:36 - What is pseudepigrapha? What are the Testament of Hezekiah, the Vision of Isaiah, the Books of Enoch, the Book of Noah, the Testament of Abraham, the Acts of Paul, the Gospel of Thomas, the Epistles of Barnabas?
28:26 - Three key criteria for determining what books were in the New Testament canon: apostolicity, orthodoxy, catholicity
30:48 - What books were questioned?
33:14 - Why was the book of James questioned as being part of the New Testament?
35:56 - Textual criticism - the transmission of our text (copies of copies)
38:06 - What is a textual variant in the Bible?
47:39 - Is the ending of Mark a textual variant? Who wrote the ending to Mark? Mark 16.9-20
52:07 - Was the story of the woman caught in adultery in the original New Testament text? John 7.53-8.11
56:17 - How much confidence can we really have in our text today? A look at Greek and Roman historians, 484 BC-140 AD: Herodotus, Thucydides, Livy, Tacitus, Suetonius
01:01:53 - Can we be confident in our New Testament text?
01:04:04 - Where to learn more about textual criticism? Peter Gurry - interview on ApostleTalk.org; Co-Director, Text and Canon Institute, TextandCanon.org. Dig super deep with those that know: EvangelicalTextualCriticism.blogspot.com; Center for the Study of New Testament Manuscripts, csntm.org. Books: Reinventing Jesus (Daniel Wallace), How We Got the Bible (Neil Lightfoot), Scribes & Scripture (Peter Gurry), Pastor's Guide to the NT (David Bigler)
01:06:25 - What will be in Part 2?
01:07:05 - In conclusion: God is sovereign!
Ratatui is a Rust framework for building rich--and incredible--UIs in the terminal. Bryan and Adam were joined by Orhun Parmaksız, who leads the project, to discuss the glory--as well as the ubiquity and utility!--of TUIs. In addition to Bryan Cantrill and Adam Leventhal, our special guest was Orhun Parmaksız. We were also joined by slightly-less-special guests Andrew Stone, Rain Paharia, and Josh Clulow.
Some of the topics we hit on, in the order that we hit them:
- Ratatui
- Orhun's blog
- Orhun's FOSDEM 2025 talk (YT) or (fosdem.org) with slides link etc.
- Minitel
- Minitel rust stack
- ratatui on Minitel
- Spotify player tui
- Discord TUI
- Orhun: tui-rs to ratatui transition blog post
- OxF: Oxide's ratatui based configuration
- tui-rs
- OxF: Describing the Oxide management network
- Ratzilla
- Terminal Collective
- tui web bub / art
- ratatui testing with snapshots
- rizzup
- tui-realm
- Asterion (game)
If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!
Rebecca and Dr. Michael J. Kruger discuss the origins and reliability of the New Testament, the textual transmission from early Christianity, and the authenticity of gospel accounts, and address common skeptic arguments, including Bart Ehrman's views on manuscript variations. Dr. Kruger highlights the significance of Jesus' character and teachings while discussing the historical importance of early Christian texts.
Subscribe to Mike's blog: Miniature Codices in Early Christianity
Follow Mike Kruger: X, Facebook, and Website
The Story of Jesus is designed for churches to use during evangelism and outreach events to help readers understand who Jesus is so they may believe and have life in his name. Pick up a copy wherever books are sold or visit crossway.org/plus to learn how you can get 30 percent off with a Crossway+ account.
Sign up for weekly emails at RebeccaMcLaughlin.org/Subscribe
Follow Confronting Christianity: Instagram | X
Produced by The Good Podcast Co.
Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open! When we first started Latent Space, in the lightning round we'd always ask guests: "What's your favorite AI product?" The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation. In open source land, Stable Diffusion was congregating around AUTOMATIC1111 as the de-facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with a lot more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground. One of the people involved with that community was comfyanonymous, who was also part of the Stability team in 2023 and decided to build an alternative called ComfyUI, now one of the fastest-growing open source projects in generative images and the preferred partner for folks like Black Forest Labs's Flux Tools on Day 1. The idea behind it was simple: "Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use." Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows since you can break down every step of the process, and then chain many of them together rather than manually switching between tools.
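ComfyUI's node abstraction is essentially a dataflow graph: each node consumes the outputs of upstream nodes and feeds downstream ones, and already-computed results can be reused. A toy sketch of that execution model (all names here are illustrative, not ComfyUI's real API):

```python
# Toy dataflow-graph runner in the spirit of ComfyUI's node model.
# Hypothetical sketch: these classes/functions are not ComfyUI's API.

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name = name          # node label, e.g. "checkpoint_loader"
        self.fn = fn              # the work this node performs
        self.inputs = inputs      # upstream nodes whose outputs we consume

    def run(self, cache):
        # Reuse cached results so re-running a graph only recomputes
        # what changed -- the same idea that lets Comfy restart
        # execution halfway instead of from the beginning.
        if self.name not in cache:
            args = [n.run(cache) for n in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

# Chain a text-prompt node into a stand-in "sampler" node.
prompt = Node("prompt", lambda: "a fox on a mountain")
sampler = Node("sampler", lambda p: f"image generated from: {p}",
               inputs=(prompt,))

cache = {}
result = sampler.run(cache)
# → "image generated from: a fox on a mountain"
```

Exposing a workflow as an endpoint is then just a matter of calling `run` on the graph's final node for each request, which is why the post describes Comfy as a runtime rather than a UI.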
You can also re-start execution halfway instead of from the beginning, which can save a lot of time when using larger models. To give you an idea of some of the new use cases that this type of UI enables:
* Sketch something → generate an image with SD from the sketch → feed it into SD Video to animate
* Generate an image of an object → turn it into a 3D asset → feed it into interactive experiences
* Input audio → generate audio-reactive videos
Their Examples page also includes some of the more common use cases like AnimateDiff, etc. They recently launched the Comfy Registry, an online library of different nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 GitHub stars, and as the community grows, some of the projects that people build have gotten quite complex.
The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node. We're seeing the rise of AI Engineering applied to art.
Major Tom's ComfyUI Resources from the Latent Space Discord
Major shoutouts to Major Tom on the LS Discord, who is an image generation expert and offered these pointers:
* "best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance. It will be perfect tool when conflicting nodes will be resolved"
* AP Workflows from Alessandro Perilli are a nice example of an all-in-one train-evaluate-generate system built atop Comfy
* ComfyUI YouTubers to learn from: @sebastiankamph, @NerdyRodent, @OlivioSarikas, @sedetweiler, @pixaroma
* ComfyUI nodes to check out:
* https://github.com/kijai/ComfyUI-IC-Light
* https://github.com/MrForExample/ComfyUI-3D-Pack
* https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait
* https://github.com/pydn/ComfyUI-to-Python-Extension
* https://github.com/THtianhao/ComfyUI-Portrait-Maker
* https://github.com/ssitu/ComfyUI_NestedNodeBuilder
* https://github.com/longgui0318/comfyui-magic-clothing
* https://github.com/atmaranto/ComfyUI-SaveAsScript
* https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
* https://github.com/AIFSH/ComfyUI-FishSpeech
* https://github.com/coolzilj/ComfyUI-Photopea
* https://github.com/lks-ai/anynode
* Sarav: https://www.youtube.com/@mickmumpitz/videos (applied stuff)
* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)
* look for the comfyui node for https://github.com/magic-quill/MagicQuill
* "Comfy for Video" resources:
* Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDiff, LivePortrait etc.
* ComfyUI node support like LTX https://github.com/Lightricks/ComfyUI-LTXVideo , and HunyuanVideo
* FloraFauna AI
* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/
Full YouTube Episode
As usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)
Timestamps
* 00:00:04 Introduction of hosts and anonymous guest
* 00:00:35 Origins of Comfy UI and early Stable Diffusion landscape
* 00:02:58 Comfy's background and development of high-res fix
* 00:05:37 Area conditioning and compositing in image generation
* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)
* 00:11:10 Closed source model APIs and community discussions on SD versions
* 00:14:41
LoRAs and textual inversion in image generation
* 00:18:43 Evaluation methods in the Comfy community
* 00:20:05 CLIP models and text encoders in image generation
* 00:23:05 Prompt weighting and negative prompting
* 00:26:22 Comfy UI's unique features and design choices
* 00:31:00 Memory management in Comfy UI
* 00:33:50 GPU market share and compatibility issues
* 00:35:40 Node design and parameter settings in Comfy UI
* 00:38:44 Custom nodes and community contributions
* 00:41:40 Video generation models and capabilities
* 00:44:47 Comfy UI's development timeline and rise to popularity
* 00:48:13 Current state of Comfy UI team and future plans
* 00:50:11 Discussion on other Comfy startups and potential text generation support
Transcript
Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.
Comfy [00:00:19]: Hello.
swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?
Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.
Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.
swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this? I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was Automatic 1111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up and comer, and your main feature was the flowchart.
Can you just kind of rewind to that moment, that year and like, you know, how you looked at the landscape there and decided to start Comfy?
Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how Diffusion works. I didn't know how Diffusion models work, how any of this works, so.
swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?
Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.
swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?
Comfy [00:01:49]: No, I was doing basically nothing interesting. Crud, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.
swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.
Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the Node graph interface before I started Comfy UI. It was just, I just thought it was like, oh, like, what's the best way to represent the Diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the Node interface. So how I got started was, yeah, so basic October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.
Alessio [00:02:58]: As we all did. Yeah.
Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fix in auto, which was, for those that don't know, the high-res fix is just since the Diffusion models back then could only generate that low-resolution.
So what you would do, you would generate low-resolution image, then upscale, then refine it again. And that was kind of the hack to generate high-resolution images. I really liked generating. Like higher resolution images. So I was experimenting with that. And so I modified the code a bit. Okay. What happens if I, if I use different samplers on the second pass, I was edited the code of auto. So what happens if I use a different sampler? What happens if I use a different, like a different settings, different number of steps? And because back then the. The high-res fix was very basic, just, so. Yeah.
swyx [00:04:05]: Now there's a whole library of just, uh, the upsamplers.
Comfy [00:04:08]: I think, I think they added a bunch of, uh, of options to the high-res fix since, uh, since, since then. But before that was just so basic. So I wanted to go further. I wanted to try it. What happens if I use a different model for the second, the second pass? And then, well, then the auto code base was, wasn't good enough for. Like, it would have been, uh, harder to implement that in the auto interface than to create my own interface. So that's when I decided to create my own. And you were doing that mostly on your own when you started, or did you already have kind of like a subgroup of people? No, I was, uh, on my own because, because it was just me experimenting with stuff. So yeah, that was it. Then, so I started writing the code January 1st, 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.
Alessio [00:05:11]: And what's, what's the name? Comfy UI right away or? Yeah.
Comfy [00:05:14]: Comfy UI. The reason the name, my name is Comfy is people thought my pictures were comfy, so I just, uh, just named it, uh, uh, it's my Comfy UI. So yeah, that's, uh,
swyx [00:05:27]: Is there a particular segment of the community that you targeted as users?
Like more intensive workflow artists, you know, compared to the automatic crowd or, you know,
Comfy [00:05:37]: This was my way of like experimenting with, uh, with new things, like the high-res fix thing I mentioned, which was like in Comfy, the first thing you could easily do was just chain different models together. And then one of the first things, I think the first times it got a bit of popularity was when I started experimenting with the different, like applying. Prompts to different areas of the image. Yeah. I called it area conditioning, posted it on Reddit and it got a bunch of upvotes. So I think that's when, like, when people first learned of Comfy UI.
swyx [00:06:17]: Is that mostly like fixing hands?
Comfy [00:06:19]: Uh, no, no, no. That was just, uh, like, let's say, well, it was very, well, it still is kind of difficult to like, let's say you want a mountain, you have an image and then, okay. I'm like, okay. I want the mountain here and I want the, like a, a fox here.
swyx [00:06:37]: Yeah. So compositing the image. Yeah.
Comfy [00:06:40]: My way was very easy. It was just like, oh, when you run the diffusion process, you kind of generate, okay. You do pass one pass through the diffusion, every step you do one pass. Okay. This place of the image with this prompt, this space, place of the image with the other prompt. And then. The entire image with another prompt and then just average everything together, every step, and that was, uh, area composition, which I call it. And then, then a month later, there was a paper that came out called MultiDiffusion, which was the same thing, but yeah, that's, uh,
Alessio [00:07:20]: could you do area composition with different models or because you're averaging out, you kind of need the same model.
Comfy [00:07:26]: Could do it with, but yeah, I hadn't implemented it.
For different models, but, uh, you, you can do it with, uh, with different models if you want, as long as the models share the same latent space, like we, we're supposed to ring a bell every time someone says, yeah, like, for example, you couldn't use like SDXL and SD 1.5, because those have a different latent space, but like, uh, yeah, like SD 1.5 models, different ones. You could, you could do that.
swyx [00:07:59]: There's some models that try to work in pixel space, right?
Comfy [00:08:03]: Yeah. They're very slow. Of course. That's the problem. That that's the, the reason why Stable Diffusion actually became like popular, like, cause was because of the latent space.
swyx [00:08:14]: Small and yeah. Because it used to be latent diffusion models and then they trained it up.
Comfy [00:08:19]: Yeah. Cause a pixel pixel diffusion models are just too slow. So. Yeah.
swyx [00:08:25]: Have you ever tried to talk to like, like Stability, the latent diffusion guys, like, you know, Robin Rombach, that, that crew. Yeah.
Comfy [00:08:32]: Well, I used to work at Stability.
swyx [00:08:34]: Oh, I actually didn't know. Yeah.
Comfy [00:08:35]: I used to work at Stability. I got, uh, I got hired, uh, in June, 2023.
swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay. Yeah.
Comfy [00:08:46]: So the, the reason I was hired is because they were doing, uh, SDXL at the time and they were basically SDXL. I don't know if you remember it was a base model and then a refiner model. Basically they wanted to experiment, like chaining them together. And then, uh, they saw, oh, right. Oh, this, we can use this to do that. Well, let's hire that guy.
swyx [00:09:10]: But they didn't, they didn't pursue it for like SD3. What do you mean? Like the SDXL approach. Yeah.
Comfy [00:09:16]: The reason for that approach was because basically they had two models and then they wanted to publish both of them. So they, they trained one on. Lower time steps, which was the refiner model.
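The two-pass tricks discussed in this stretch of the conversation (the high-res fix, and SDXL's base-then-refiner chain) boil down to feeding one pass's output into a second pass, possibly with a different model or sampler. A schematic numpy sketch, with the actual diffusion passes stubbed out as assumptions:

```python
# Schematic sketch of the "high-res fix": generate low-res, upscale,
# then run a second refining pass. base_pass/refine_pass are stand-ins
# for real diffusion models, not a real pipeline.
import numpy as np

def base_pass(h, w, seed=0):
    # Stand-in for a low-resolution diffusion generation.
    rng = np.random.default_rng(seed)
    return rng.random((h, w, 3))

def upscale(img, factor):
    # Nearest-neighbor upscale; real pipelines use learned upscalers.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def refine_pass(img, strength=0.3):
    # Stand-in for an img2img refinement at the higher resolution;
    # strength controls how much the second pass reworks the input.
    noise = np.random.default_rng(1).random(img.shape)
    return (1 - strength) * img + strength * noise

low = base_pass(64, 64)      # cheap pass at low resolution
big = upscale(low, 8)        # 64x64 -> 512x512
final = refine_pass(big)     # second pass reworks the upscaled image
```

Comfy's point about latent spaces applies here too: the two passes can use different models only if their intermediate representation (the array being handed from one to the other) means the same thing to both.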
And the first one was trained normally. Then during their tests they realized: oh, if we string these models together, the quality increases. So let's publish that. It worked. But right now I don't think many people actually use the refiner anymore, even though it is a full diffusion model. You can use it on its own and it's going to generate images. People have mostly forgotten about it.
Alessio [00:10:05]: Can we talk about models a little bit? Stable Diffusion is obviously the most known. I know Flux has gotten a lot of traction. Are there any underrated models that people should use more, or what's the state of the union?
Comfy [00:10:17]: Well, the latest state of the art for images: there's Flux, and there's also SD3.5. SD3.5 is two models: there's a small one, 2.5B, and the bigger one, 8B. So it's smaller than Flux, and it's more creative in a way, but Flux is the best. People should give SD3.5 a try because it's different. I won't say it's better; well, it's better for some specific use cases. If you want to make something more creative, maybe SD3.5. If you want to make something more consistent, Flux is probably better.
swyx [00:11:06]: Do you ever consider supporting the closed source model APIs?
Comfy [00:11:10]: Well, we do support them as custom nodes. We actually have some official custom nodes from different providers. Ideogram.
swyx [00:11:20]: Yeah. I guess DALL-E would have one.
Comfy [00:11:23]: I'm just not the person that handles that.
swyx [00:11:28]: Sure. Quick question on SD. There's a lot of community discussion about the transition from SD1.5 to SD2 and then SD2 to SD3. Are people still very loyal to the previous generations of SDs?
Comfy [00:11:41]: Yeah.
SD1.5 still has a lot of users.
swyx [00:11:46]: The last based model.
Comfy [00:11:49]: Yeah. Then SD2 was mostly ignored. It wasn't a big enough improvement over the previous one.
swyx [00:11:58]: So SD1.5, SD3, Flux and whatever else. SDXL.
Comfy [00:12:03]: SDXL, that's the main one. Stable Cascade. That was a good model, but the problem with that one is that SD3 was announced one week after.
swyx [00:12:16]: It was a weird release. What was it like inside of Stability, actually? I mean, the statute of limitations has expired, management has moved on, so it's easier to talk about now.
Comfy [00:12:27]: Inside Stability, that model was actually ready about three months before, but it got stuck in red teaming. If that model had been released when the authors wanted, it would probably have gotten very popular, since it's a step up from SDXL. But it got all of its momentum stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it was a good model. It was mostly ignored for some reason.
swyx [00:13:07]: I think the naming as well matters. It seemed like a branch off of the main tree of development.
Comfy [00:13:15]: Well, it was different researchers that did it. A very good model. It's the Würstchen authors; I don't know if I'm pronouncing it correctly.
swyx [00:13:28]: I actually met them in Vienna.
Comfy [00:13:30]: They worked at Stability for a bit and they left right after the Cascade release.
swyx [00:13:35]: This is Dustin, right? No, Dustin's SD3.
Comfy [00:13:38]: Dustin is SD3 and SDXL. Cascade is Pablo and Dome, I think I'm pronouncing his name correctly.
That's very good.
swyx [00:13:51]: It seems like the community moves very quickly. When there's a new model out, they just drop whatever the current one is and all move over wholesale. They don't really stay to explore the full capabilities. If Stable Cascade was that good, they would have A/B tested a bit more. Instead they're like, okay, SD3 is out, let's go.
Comfy [00:14:11]: Well, I find the opposite, actually. The community only jumps on a new model when there's a significant improvement. If there's only an incremental improvement, which is what most of these models are going to have, especially if they stay at the same parameter count, you're not going to get a massive improvement unless something big changes.
swyx [00:14:41]: And how are they evaluating these improvements? Because there's a whole chain of Comfy workflows. How does one part of the chain actually affect the whole process?
Comfy [00:14:52]: Are you talking about the model side specifically?
swyx [00:14:54]: Model specific, right? But once you have your whole workflow based on a model, it's very hard to move.
Comfy [00:15:01]: Well, not really. It depends on the specific workflow.
swyx [00:15:09]: So I do a lot of text and image.
Comfy [00:15:12]: When you do change, most workflows are going to be compatible. It's just that you might have to completely change your prompt.
swyx [00:15:24]: Well, then maybe the question is really about evals. What does the Comfy community do for evals?
Comfy [00:15:31]: Well, they don't really do that. It's more like: oh, I think this image is nice.
swyx [00:15:38]: They just subscribe to Fofr AI and see what Fofr is doing.
Comfy [00:15:43]: They just generate. I don't see anyone really doing it, at least on the Comfy side. For Comfy users it's more like: generate images and see, oh, this one's nice. The more scientific checking happens more on the model side. But there is a lot of vibes too, because it is artistic. You can create a very good model that doesn't generate nice images, because most images on the internet are ugly. If you say: oh, I have the best model, it's gigantic, it's super smart, I trained it on all the images on the internet; the images are not going to look good.
Alessio [00:16:42]: Yeah.
Comfy [00:16:43]: They're going to be very consistent, but it's not going to be the look that people are expecting from a model.
swyx [00:16:54]: Can we talk about LoRAs? We talked about models, so the next step is probably LoRAs. Actually, I'm kind of curious how LoRAs entered the toolset of the image community, because the LoRA paper was 2021, and there were other methods like textual inversion that were popular at the early SD stage.
Comfy [00:17:13]: I can explain the difference. Textual inversion: in Stable Diffusion you have the diffusion model and you have the text encoder. What you're doing is training a vector that you're going to pass to the text encoder. Basically, you're training a new word.
swyx [00:17:37]: It's a little bit like representation engineering now.
Comfy [00:17:40]: Yeah, basically. If you know how the text encoder works: you take the words of your prompt, you convert those into tokens with the tokenizer, and those are converted into vectors. Each token represents a different vector. Depending on your words, that's the list of vectors that gets passed to the text encoder, which is just a stack of attention layers; it's very close to an LLM architecture. So what you're doing is training a new vector. You're saying: I have all these images, and I want to know which word represents them. You train this vector, and then when you use it, it hopefully generates something similar to your images.
swyx [00:18:43]: I would say it's surprisingly sample efficient in picking up the concept that you're trying to train it on.
Comfy [00:18:48]: Well, people have kind of stopped doing that, even though back when I was at Stability we actually did train some textual inversions internally on T5 XXL, and it worked pretty well. But for some reason people don't use them. And they might also transfer: this is something I'd have to test, but if you train a textual inversion on T5 XXL, it might also work with all the other models that use T5 XXL. Same thing with the textual inversions that were trained for SD 1.5: they also kind of work on SDXL, because SDXL has two text encoders, and one of them is the same as the SD 1.5 CLIP-L. They don't work as strongly, though, because they're only applied to one of the text encoders. And the same thing for SD3. SD3 has three text encoders. So it works.
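The mechanism he describes can be sketched at inference time. This is an illustrative data structure, not a real encoder API: wherever the placeholder token appears, the trained vector is substituted into the embedding sequence, as if it were a new word the model already knew.

```python
def apply_textual_inversion(token_embeddings, placeholder, learned_vector):
    """Splice a trained embedding into the token-embedding sequence.

    token_embeddings: list of (token, vector) pairs from the tokenizer's
    embedding lookup (hypothetical structure for illustration).
    placeholder: the made-up token string the user typed, e.g. "<my-cat>".
    learned_vector: the vector that was optimized against the images.
    """
    return [learned_vector if tok == placeholder else vec
            for tok, vec in token_embeddings]
```

Training then amounts to optimizing `learned_vector` (and nothing else) so that the frozen diffusion model reconstructs the target images.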
You can still use your SD 1.5 textual inversion on SD3, but it's just a lot weaker, because now there are three text encoders, so it gets even more diluted.
swyx [00:20:05]: Do people experiment a lot on the CLIP side? There's SigLIP, there's BLIP. Do people experiment a lot with those?
Comfy [00:20:12]: You can't really replace it.
swyx [00:20:14]: Because they're trained together, right?
Comfy [00:20:15]: They're trained together, so you can't. Well, what I've seen people experimenting with is Long-CLIP. Basically, someone fine-tuned the CLIP model to accept longer prompts.
swyx [00:20:27]: Oh, it's kind of like long context fine tuning.
Comfy [00:20:31]: It's actually supported in core Comfy.
swyx [00:20:35]: How long is long?
Comfy [00:20:36]: Regular CLIP is 77 tokens. Long CLIP is 256. But there's a hack: if you use Stable Diffusion 1.5, you've probably noticed it still works if you use prompts longer than 77 tokens. That's because the hack is to split your whole big prompt into chunks of 77. Say you give it a massive text, the Bible or something: it splits it up into chunks of 77, passes each one through CLIP, and then concatenates everything together at the end. It's not ideal, but it actually works.
swyx [00:21:26]: So the positioning of the words really matters then, right? This is why order matters in prompts.
Comfy [00:21:33]: Yeah, it works, but it's not ideal. It's what people expect, though: if someone gives a huge prompt, they expect at least some of the concepts at the end to be present in the image. But usually when they give long prompts, they don't expect detail, I think.
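The chunking hack he describes is simple to sketch. This is the idea only, with an assumed token-ID list rather than a real tokenizer: split the sequence into CLIP-sized pieces, encode each separately, and concatenate the embeddings.

```python
def split_into_clip_chunks(token_ids, chunk_size=77):
    """Split a long token sequence into CLIP-sized chunks (the SD 1.5
    long-prompt hack). In a real pipeline each chunk would be run through
    the CLIP text encoder on its own, and the resulting embedding
    sequences concatenated before being handed to the diffusion model."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]
```

Because each chunk is encoded in isolation, a concept that straddles a 77-token boundary loses its context, which is one reason word position matters in very long prompts.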
So that's why it works very well.
swyx [00:21:58]: And while we're on this topic: prompt weighting, negative prompting, all sort of similar parts of this layer of the stack.
Comfy [00:22:05]: The hack for that, which works on CLIP, basically just for SD 1.5: prompt weighting works well because CLIP-L is not a very deep model. So you have a very high correlation between the input token vector at some index and the output token at that index; the concepts are very closely linked. That means you can interpolate the vectors. The way ComfyUI does it: you have the CLIP output for the empty prompt, and then you have the one for your prompt, and it interpolates between them depending on your weights.
Comfy [00:23:07]: So that's how it does prompt weighting. But this stops working the deeper your text encoder is. On T5 XXL it doesn't work at all.
swyx [00:23:20]: Is that a problem for people? Because I'm used to just moving numbers up. Probably not?
Comfy [00:23:25]: Well...
swyx [00:23:26]: You just use words to describe it, right? Because it's a bigger language model.
Comfy [00:23:30]: Yeah. Honestly, it might be good to have, but I haven't seen many complaints on Flux that it's not working, so I guess people can sort of get around it with language.
swyx [00:23:46]: And then coming back to LoRAs. Now the popular way to customize models is LoRAs, and I saw you also support LoCon and LoHa, which I've never heard of before.
Comfy [00:23:56]: There's a bunch of them. What a LoRA essentially is: you have your model and you want to fine tune it.
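The interpolation he describes for prompt weighting reduces to one line per element. A minimal sketch, operating on flat vectors rather than real CLIP outputs: blend between the encoder's output for the empty prompt and its output for your prompt.

```python
def weight_prompt(prompt_out, empty_out, weight):
    """Prompt weighting via interpolation, per the description above:
    weight=1.0 leaves the prompt untouched, 0.0 collapses to the empty
    prompt, and values above 1.0 extrapolate past it. Shown on plain
    lists of floats standing in for encoder output vectors."""
    return [e + weight * (p - e) for p, e in zip(prompt_out, empty_out)]
```

This works when input and output positions stay tightly correlated (shallow encoders like CLIP-L); with a deep encoder like T5 XXL that correlation breaks down, which is why the trick fails there.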
Instead of fine tuning the entire thing, which is a bit heavy, what you can do to speed things up and make things less heavy is fine tune some smaller weights: basically two low-rank matrices that, when you multiply them together, represent the difference between the trained weights and your base weights. Training those two smaller matrices is a lot less heavy.
Alessio [00:24:45]: And they're portable, so you can share them. It's easier. And also smaller.
Comfy [00:24:49]: Yeah, that's how LoRAs work. And when inferencing, you can run them pretty efficiently. The way ComfyUI does it: when you use a LoRA, it just applies it straight onto the weights, so there's only a small delay before sampling while it applies the weights, and then it's the same speed as before. So for inference it's not that bad. And all the LoRA types, LoHa, LoCon, everything, those are just different ways of representing that. You can call it kind of like compression, even though it's not really compression; it's just different ways of representing it. Okay, I want to train the difference in the weights; what's the best way to represent that difference? There's the basic LoRA, which is just: multiply these two matrices together. And then there are all the other ones, which are all different algorithms.
Alessio [00:25:57]: So let's talk about what ComfyUI actually is. I think most people have heard of it. Some people might have seen screenshots. I think fewer people have built very complex workflows. When you started, Automatic1111 was like the super simple way. What were some of the choices that you made?
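The "apply it straight onto the weights" step he mentions can be sketched directly from the low-rank description. A plain-list illustration, not ComfyUI's implementation: the product of the two small matrices is the weight delta, added once before sampling.

```python
def apply_lora_to_weights(W, A, B, scale=1.0):
    """Patch base weights with a LoRA: B (out x r) times A (r x in)
    approximates the difference between fine-tuned and base weights,
    so the patched layer is W + scale * (B @ A). Because the patch is
    merged into W up front, inference afterwards runs at base speed."""
    rows, cols, r = len(W), len(W[0]), len(A)
    patched = [row[:] for row in W]
    for i in range(rows):
        for j in range(cols):
            delta = sum(B[i][k] * A[k][j] for k in range(r))
            patched[i][j] += scale * delta
    return patched
```

Variants like LoHa and LoCon change only how the delta matrix is factorized; the merge step stays the same.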
So the node workflow: is there anything else that stands out as a unique take on how to do image generation workflows?
Comfy [00:26:22]: Well, I feel like back then everyone was trying to make an easy to use interface.
swyx [00:26:32]: Let's make a hard to use interface.
Comfy [00:26:37]: I'm like: I don't need to do that, everyone else is doing it. So let me try something else: let me try to make a powerful interface that's not easy to use.
swyx [00:26:52]: So there's a sort of node execution engine. And it actually has this really good list of features of things you prioritized, right? Re-executing from any part of the workflow that was changed, an asynchronous queue system, smart memory management. All this seems like a lot of engineering.
Comfy [00:27:12]: There's a lot of engineering in the backend, because I was always focused on making things work locally very well, because I was using it locally. So there's a lot of thought and work in getting everything to run as well as possible. ComfyUI is actually more of a backend; well, now the frontend is getting a lot more development, but before, I was pretty much only focused on the backend.
swyx [00:27:50]: So v0.1 was only August this year.
Comfy [00:27:54]: With the new frontend. Before, there was no versioning.
swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?
Comfy [00:28:02]: Well, that's more on the frontend side. When I first wrote it, I said: okay, I can do web development, but I don't like doing it. What's the easiest way I can slap a node interface on this?
And then I found this JavaScript library.
swyx [00:28:26]: litegraph?
Comfy [00:28:27]: litegraph.
swyx [00:28:28]: Usually people will go for something like React Flow for a flow builder.
Comfy [00:28:31]: But that seemed too complicated. I didn't really want to spend time developing the frontend. So I'm like: oh, litegraph, this has the whole node interface. Okay, let me just plug that into my backend.
swyx [00:28:49]: I feel like if Streamlit or Gradio offered something like that, you would have used them, because it's Python.
Comfy [00:28:54]: Yeah.
Comfy [00:29:14]: It takes your frontend logic and your backend logic and just sticks them together.
swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main, you know. I'm a JS main, right? If you're a Python main, it's supposed to be easy.
Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.
swyx [00:29:30]: I see. So you're mixing concerns instead of separating concerns?
Comfy [00:29:34]: Frontend and backend should be well separated with a defined API. That's how you're supposed to do it. Smart people disagree. It just sticks everything together, which makes it easy to end up with a huge mess. And also there are a lot of issues with Gradio. It's very good if all you want to do is slap a quick interface on your ML project to show it off. That's what it's made for, and there's no problem using it for that: oh, I have my code, I just want a quick interface on it. That's perfect, use Gradio. But if you want to make real software that will last a long time and be easy to maintain, then I would avoid it.
swyx [00:30:32]: So your criticism is that Streamlit and Gradio are the same.
I mean, those are the same criticisms.
Comfy [00:30:37]: Yeah. Streamlit I haven't used as much; I just looked at it a bit.
swyx [00:30:43]: Similar philosophy.
Comfy [00:30:44]: Yeah, it's similar. It just seems to me like: for quick AI demos, it's perfect.
swyx [00:30:51]: Going back to the core tech: asynchronous queues, partial re-execution, smart memory management. Anything that you were very proud of, or that was very hard to figure out?
Comfy [00:31:00]: The thing that's the biggest pain in the ass is probably the memory management.
swyx [00:31:05]: Were you just paging models in and out?
Comfy [00:31:08]: Before, it was just: load the model, completely unload it. That works well when your models are small, but if your models are big, say someone has a 4090 and the model size is 10 gigabytes, it can take a few seconds to load and unload every time. So you want to try to keep things in GPU memory as much as possible. What ComfyUI does right now is it tries to estimate: okay, you're going to sample this model, it's going to take probably this amount of memory; let's remove just enough of the models already loaded on the GPU, and then execute. There's a fine line, because you try to remove the least amount of models that are already loaded. And one other problem is the NVIDIA driver on Windows. By default (there's an option to disable this feature), if you start loading and overflow your GPU memory, the driver is going to automatically start paging to RAM. The problem with that is it makes everything extremely slow.
So when you see people complaining: oh, this model works, but s**t, it starts slowing down a lot, that's probably what's happening. Basically you have to use as much memory as possible, but not too much, or things start slowing down or people run out of memory; you try to find that line where the driver on Windows starts paging. And the problem with PyTorch is that it's high level and doesn't give that much fine-grained control over specific memory stuff, so you kind of have to leave the memory freeing to Python and PyTorch, which can be annoying sometimes.
swyx [00:33:32]: So, as a maintainer of this project, you're designing for a very wide surface area of compute. You even support CPUs.
Comfy [00:33:42]: Yeah, well, PyTorch supports CPUs, so that's not hard to support.
swyx [00:33:50]: First of all, is there a market share estimate? Is it like 70% NVIDIA, 30% AMD, and then miscellaneous on Apple Silicon or whatever?
Comfy [00:33:59]: For Comfy? Yeah, I don't know the market share.
swyx [00:34:03]: Can you guess?
Comfy [00:34:04]: I think it's mostly NVIDIA. Because the problem is that AMD works horribly on Windows. On Linux, it works fine. It's slower than the price-equivalent NVIDIA GPU, but it works: you can use it, you can generate images, everything works. On Windows, you might have a hard time. That's the problem, and I think most people who bought AMD probably use Windows. They probably aren't going to switch to Linux.
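The "remove the least amount of models" idea from the memory discussion can be sketched as a small eviction helper. This is a sketch under stated assumptions, not ComfyUI's actual heuristics: model sizes are known, and the list is already ordered by how expendable each model is.

```python
def pick_models_to_unload(loaded, estimated_need, free_vram):
    """Free just enough VRAM for the next model to sample.

    loaded: list of (name, size) pairs, most-expendable first (for
    example, least recently used). Stops evicting as soon as the
    estimate fits, so the fewest already-loaded models are removed;
    staying under the real limit also avoids the Windows-driver
    paging slowdown described above."""
    evict = []
    for name, size in loaded:
        if free_vram >= estimated_need:
            break
        evict.append(name)
        free_vram += size
    return evict
```

The hard part in practice is `estimated_need`: guess too low and the driver starts paging; guess too high and models get unloaded needlessly.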
So until AMD actually ports ROCm to Windows properly, with actual PyTorch support (I think they're in the process of doing that), until they get a good PyTorch ROCm build that works on Windows, they're going to have a hard time.
Alessio [00:35:06]: We've got to get George on it. Well, he's trying to get Lisa Su to do it. Let's talk a bit about the node design. Unlike all the other text-to-image tools, you go very deep: you have a separate node for the CLIP text encode, a separate node for the KSampler, all these nodes. Going back to making it easy versus making it hard: how much do people actually play with all the settings? How do you guide people: hey, this is actually going to be very impactful, versus this is maybe less impactful but we still want to expose it to you?
Comfy [00:35:40]: Well, I try to expose everything. For the samplers, for example, there are four different sampler nodes, which go from easiest to most advanced. If you use the easy node, the regular sampler node, you have just the basic settings. But if you use the advanced ones, like the custom advanced node, you'll see you have different nodes.
Alessio [00:36:19]: I'm looking it up now. What are the most impactful parameters that you use? You can have more, but which ones really make a difference?
Comfy [00:36:30]: They all do. For example, steps: usually you want steps to be as low as possible.
If you're optimizing your workflow, you lower the steps until the images start deteriorating too much. That's the number of steps you're running the diffusion process, so if you want things to be faster, lower is better. Then CFG: you can kind of see that as the contrast of the image. If your image looks too burnt, you can lower the CFG. CFG is how strongly the negative versus positive prompt is applied, because when you sample a diffusion model, it's basically positive prediction minus negative prediction.
swyx [00:37:32]: Contrastive loss.
Comfy [00:37:34]: It's positive minus negative, and the CFG is the multiplier.
Alessio [00:37:41]: What are good resources to understand what the parameters do? I think most people start with Automatic1111 and then they move over, and it's like: steps, CFG, sampler name, scheduler, denoise.
Comfy [00:37:53]: Honestly, it's something you should try out yourself. You don't necessarily need to know how it works to know what it does. Even if you know CFG is positive minus negative prompt, the only thing that tells you is that if it's 1.0, the negative prompt isn't applied. It also means sampling is two times faster. But other than that, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.
Alessio [00:38:34]: Any other nodes or things you want to shout out? I know AnimateDiff and the IPAdapter are some of the most popular ones. What else comes to mind?
Comfy [00:38:44]: Not nodes, but what I like is when people make things that use ComfyUI as their backend.
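The CFG formula he states ("positive minus negative, and the CFG is the multiplier") is one line. A minimal sketch on plain lists standing in for noise predictions:

```python
def cfg_mix(positive, negative, cfg):
    """Classifier-free guidance as described above: the combined
    prediction is the negative prediction plus cfg times (positive
    minus negative). At cfg=1.0 the negative term cancels out, which
    is why the negative prompt has no effect there, and why a sampler
    can skip the negative pass entirely and run twice as fast."""
    return [n + cfg * (p - n) for p, n in zip(positive, negative)]
```

Raising `cfg` pushes the prediction further from the negative prompt, which is also where the "burnt", over-contrasted look at high values comes from.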
There's a plugin for Krita that uses ComfyUI as its backend, so you can use all the models that work in Comfy in Krita. I've only tried it once, but I know a lot of people use it, and it's probably really nice.
Alessio [00:39:15]: What's the craziest node that people have built, the most complicated?
Comfy [00:39:21]: Craziest node? I know some people have made video games in Comfy, stuff like that. I remember, I think it was last year, someone made a Wolfenstein 3D in Comfy. And one of the inputs was: you can generate a texture, and it changes the texture in the game. So you can plug the game into the workflow. If you look around, there are a lot of crazy things people do.
Alessio [00:39:59]: And now there's a node registry that people can use to download nodes.
Comfy [00:40:04]: Well, there's always been the ComfyUI Manager. But we're trying to make this more official with the node registry. Because before the node registry: how did your custom node get into ComfyUI Manager? The guy running it searched GitHub every day for new custom nodes and added them manually to his custom node list. So we're trying to make it less effort for him, basically.
Alessio [00:40:40]: But I was looking, and there's a YouTube download node. This is almost like a data pipeline more than an image generation thing at this point. You can get data in, apply filters to it, generate data out.
Comfy [00:40:54]: Yeah, you can do a lot of different things. I think what I did is I made it easy to make custom nodes, and I think that helped a lot.
I think that helped the ecosystem a lot, because it is very easy to just make a node. A bit too easy, sometimes: then we have the issue where there are a lot of custom node packs which share similar nodes. But that's something we're trying to solve, maybe by bringing some of that functionality into the core.
Alessio [00:41:36]: And then there's video. People can do video generation.
Comfy [00:41:40]: Video. Well, the first video model was Stable Video Diffusion, which was exactly last year, I think. One year ago. But that wasn't a true video model.
swyx [00:41:55]: It was like moving images?
Comfy [00:41:57]: It generated video, but what I mean is that it still used 2D latents. What they did is they took SD2, added some temporal attention to it, and then trained it on videos. It's the same idea as AnimateDiff, basically. Why I say it's not a true video model is that you still have the 2D latents. A true video model, like Mochi for example, has 3D latents.
Alessio [00:42:32]: Which means you can move through the space, basically. That's the difference. You're not just reorienting.
Comfy [00:42:39]: And it's also because you have a temporal VAE. Mochi has a temporal VAE that compresses in the temporal direction as well. That's something you don't have with AnimateDiff and Stable Video Diffusion: they only compress spatially, not temporally. That's why I call those true video models. There are actually a few of them, but the one I've implemented in Comfy is Mochi, because that seems to be the best one so far.
swyx [00:43:15]: We had AJ come and speak at the Stable Diffusion meetup.
The other open one I think I've seen is CogVideo.
Comfy [00:43:21]: CogVideo. Yeah, that one also seems decent. Chinese, so we don't use it? No, it's fine. It's just that it's not the only one; there are also a few others.
swyx [00:43:36]: The rest are closed source, right? Like Kling.
Comfy [00:43:39]: Closed source, there's a bunch of them. But in the open, I've seen a few. I can't remember their names, but there's CogVideo, the big one. Then there are also a few that released at the same time. There's one that released the same day as SD 3.5, which is why I don't remember the name.
swyx [00:44:02]: We should have a release schedule so we don't conflict on each of these things.
Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day, so everything else was completely drowned out. For some reason, lots of people picked that day to release their stuff.
Comfy [00:44:21]: Which is a shame for them. And I think OmniGen also released the same day, which also seems interesting.
Alessio [00:44:30]: What's Comfy? So you are Comfy, and then there's comfy.org. I know you do a lot of things with Nous Research, and those guys also have kind of a more open source thing going on. How do you work? You mentioned you mostly work on the core piece of it. And then what...
Comfy [00:44:47]: Maybe I should fill that in, because I feel like I only explained part of the story. Maybe I should explain the rest. So basically: January 16, 2023, that's when Comfy was first released to the public. Then I did a Reddit post about the area composition thing somewhere around, I don't remember exactly, maybe end of January, beginning of February.
And then someone, a YouTuber, made a video about it, like Olivio, he made a video about ComfyUI in March 2023. I think that's when there was a real burst of attention. And by that time, I was continuing to develop it and it was getting, people were starting to use it more, which unfortunately meant that I had first written it to do, like, experiments, but then my time to do experiments went down. It started going down, because people were actually starting to use it then. Like, I had to, and I said, well, yeah, time to add all these features and stuff. Yeah, and then I got hired by Stability in June 2023. Then I made, basically, yeah, they hired me because they wanted SDXL. So I got SDXL working very well with the UI, because they were experimenting with ComfyUI. Actually, how the SDXL release worked is they released, for some reason, like, they released the code first, but they didn't release the model checkpoint. So they released the code. And then, well, since the research was related to the code, I implemented it in ComfyUI too. And then the checkpoints were basically early access. People had to sign up and they only allowed people with edu emails. Like, if you had an edu email, like, they gave you access basically to SDXL 0.9. And, well, that leaked. Right. Of course, because of course it's going to leak if you do that. Well, the only way people could easily use it was with Comfy. So, yeah, people started using it. And then I fixed a few of the issues people had. So then the big 1.0 release happened. And, well, ComfyUI was the only way a lot of people could actually run it on their computers. Because, it just, like, Automatic1111 was so, like, inefficient and bad that most people couldn't actually, like, it just wouldn't work. Like, because he did a quick implementation. So people were forced.
To use ComfyUI, and that's how it became popular, because people had no choice.swyx [00:47:55]: The growth hack.Comfy [00:47:56]: Yeah.swyx [00:47:56]: Yeah.Comfy [00:47:57]: Like everywhere, like people who didn't have the 4090, who had, like, just regular GPUs, they didn't have a choice.Alessio [00:48:05]: So yeah, I got a 4070. So think of me. And so today, what's, is there like a core Comfy team or?Comfy [00:48:13]: Uh, yeah, well, right now, um, yeah, we are hiring. Okay. Actually, so right now the core, like, um, the core core itself, it's, it's me. Uh, but because, uh, the reason is, like, all the focus has been mostly on the front end right now, because that's the thing that's been neglected for a long time. So, uh, so most of the focus right now is, uh, all on the front end, but we are, uh, yeah, we will soon get, uh, more people to, like, help me with the actual backend stuff. Yeah. So, no, I'm not going to say a hundred percent, because, that's why, once the, once we have our V1 release, which is, because it'd be the packaged ComfyUI with the nice interface and easy to install on Windows and hopefully Mac. Uh, yeah. Yeah. Once we have that, uh, we're going to have, lots of stuff to do on the backend side and also the front end side, but, uh.Alessio [00:49:14]: What's the release that I'm on the wait list for? What's the timing?Comfy [00:49:18]: Uh, soon. Uh, soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. Yeah, and we're still going to continue doing the open source, making ComfyUI the best way to run stable diffusion models. At least on the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises.swyx [00:50:08]: I mean, a few questions on that.
How do you feel about the other comfy startups?Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use comfy than they use something else. Yeah, that's true. It's fine. We're going to try not to... We don't want to... We want people to use comfy. Like I said, it's better that people use comfy than something else. So as long as they use comfy, I think it helps the ecosystem. Because more people, even if they don't contribute directly, the fact that they are using comfy means that people are more likely to join the ecosystem. So, yeah.swyx [00:50:57]: And then would you ever do text?Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. So, yeah, it's something we like. Yeah, it's something I've wanted to eventually add to core, but it's not a very high priority. But because a lot of people use text for prompt enhancement and other things like that. So, yeah, it's just that my focus has always been on diffusion models. Yeah, unless some text diffusion model comes out.swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion.Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it since it fits with the whole...swyx [00:51:39]: Yeah, I mean, I imagine it's going to be closed source at Midjourney. Yeah.Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it.Alessio [00:51:54]: Cool, Comfy. Thanks so much for coming on. This was fun. Bye. Get full access to Latent Space at www.latent.space/subscribe
On today's show, Alex and Calvin continue their series on Discourse and Manipulation by examining the role of manipulative silence in various post-mortems of the 2024 Presidential Election. As a second term for President Donald Trump looms, many have been debating: what went wrong in the Democrats' campaign? What policy positions, rhetorical strategies and slip-ups, or other contextual factors led Kamala Harris and Tim Walz to be so soundly defeated? However, amidst all of the post-mortem analysis by institutional Democrats and their surrogates in the media, some salient concerns seem to be missing: namely, the various causes and effects of economic and political precarity that many communities in the US are actively experiencing, and the Democrats' seeming unwillingness to address these issues head-on. Instead, many are using this epideictic moment to blame scores of abstract, ill-defined terms for the election loss: “wokeness,” “inflation,” “misogyny,” “political headwinds,” and “anti-incumbent sentiment,” among others. When we apply a Critical Discourse Studies lens, we can see that all of these concepts share a common grammatical category: each one is a nominalization, or a noun that has been made out of a verb or adjective. These nominalizations serve the useful purpose of obscuring or silencing important information, such as who is responsible for an action (or who/what is being affected by it), as well as the scale of the issue. In this episode, we examine a series of texts that use manipulative nominalizations and other discourse structures to erase the specific ways that Democratic leaders, campaign staff, and consultancy firms have acted ineffectively and destructively both in this failed run and in the recent past (e.g. Biden's and Obama's presidencies and Clinton's losing bid in 2016).
Instead of taking real stock of this history, these texts are mainly platforms for powerful actors to attack broad, abstract concepts, or worse, to victim-blame the voters themselves. We conclude by reflecting upon how these manipulative silences betray the Democratic establishment's inability or unwillingness to reckon with how its own economic and material interests might be at odds with policies and platforms that could help uplift the most vulnerable in our society.
Texts Analyzed in this Episode:
Maureen Dowd - “Democrats and the Case of Mistaken Identity Politics”
National Organization for Women President Christian F. Nunes: “Racism, Sexism, Misogyny and Hate Won This Election, But We Won't Let Our Democracy Be Destroyed”
David Plouffe dialogue on Pod Save America podcast episode: “Exclusive: The Harris Campaign On What Went Wrong”
Works & Concepts Cited in this Episode:
Fairclough, N. (2003). Analysing discourse (Vol. 270). London: Routledge.
Huckin, T. (2002). Textual silence and the discourse of homelessness. Discourse & Society, 13(3), 347-372.
Van Dijk, T. A. (1998). Ideology: A multidisciplinary approach. London: Sage.
Cameron Mozafari's Twitter thread summarizing his work with Michael Israel on the changing meaning of “woke”
re:verb episode 71: re:pronouns
re:verb episode 14: re:blurb - Ideographs
An accessible transcript of this episode can be found here (via Descript)
In this episode: Alan has written Grummage for inspecting the output of Grype. Martin is hosting his own personal Fediverse instance with GoToSocial, and has written a backup tool. Mark is listening to Critical Role with Audiobookshelf. You can send your feedback via show@linuxmatters.sh or the Contact Form. If you’d like to hang out with other listeners and share your feedback with the community you can join: The Linux Matters Chatters on Telegram. The #linux-matters channel on the Late Night Linux Discord server. If you enjoy the show, please consider supporting us using Patreon or PayPal. For $5 a month on Patreon, you can enjoy an ad-free feed of Linux Matters, or for $10, get access to all the Late Night Linux family of podcasts ad-free.
R' Micha Golshevsky - Torah Beis Tinyana
00:00 Introduction
01:21 The Power of Crying and Emotions
03:17 Spiritual Realities
06:16 Chanukah and Its Deeper Meanings
08:09 The Importance of Gratitude
12:38 Connecting to Hashem Through Halachos
19:53 Conclusion: The Bliss of the Next World
Textual criticism is about offering a critique of what the best reading of the biblical text is, because before you can translate it, you need to know that you have the best reading possible. And to engage in textual criticism, you need to have a fascination with biblical languages and biblical culture and biblical history. If it sounds complicated, it sort of is. But we have a great guide to walk us through what textual criticism is and why it matters. Dr. Karl Kutz is a recently retired professor of biblical languages and Bible at Multnomah University, and he is a voracious explorer of the biblical text. Find out more about Karl Kutz HERE. Contact Cyndi Parker through Narrative of Place. Join Cyndi Parker's Patreon Team!
Welcome to this brand-new episode of Light ‘Em Up! Tell a friend who lives overseas about us! We are now actively being downloaded in 114 countries. On this investigative, educational and impactful edition of Light ‘Em Up, we expose and tell the truth in a world filled with confusion and misinformation. We delve into the facts, not the fiction, about protecting your Constitutional and civil rights, and we educate you on topics that we're betting you rarely hear or know much about but should know everything about. We drill deep on:
— Pretextual traffic stops (something we've given great attention to here on Light ‘Em Up)
— Section 1983 lawsuits
— and, as a case study, we examine Whren v. U.S., 517 U.S. 806 (1996), a landmark Supreme Court decision that relates directly to how law enforcement interacts with the public today.
With a fine-tooth comb, we examine the details as to what exactly is going on in Cleveland, Ohio, as the Division of Police, which is currently under DOJ (consent decree) oversight, continues to allegedly cheat, stretch, break and violate the law by intentionally stopping and searching Black drivers at much higher rates than white Clevelanders. We explore “when cops become robbers”: when law enforcement officers use their badge and authority to “game” the citizenry as they violate the spirit and letter of the law. We tell the story of the Tenaha, TX Police Department, whose officers were allegedly intentionally using their authority to “shake down” the good people passing through their city limits. The lead officer directly involved literally expressed the intention of using “the money that they get from thugs” from a newly created drug interdiction program to pay the town's bills.
A class action lawsuit asserted that their pretextual traffic enforcement scheme was bogus and was designed solely to enrich the city financially and the defendants personally. Especially for you, we have exclusive audio of the Rehnquist High Court as the attorney on behalf of the petitioner (Whren), Lisa Burget Wright, argues their position before the high court on April 17th, 1996. Whren v. U.S. holds that “a stop or search that is objectively reasonable is not diminished by the fact that the officer's real reason for making the stop or search has nothing to do with the validating reason” (meaning that it is acceptable for an officer to make up a reason to stop you). We explore and define a §1983 lawsuit, which provides an individual the right to sue state government employees and others acting under the color of state law for civil rights violations. For your education and empowerment, we explore §242 of Title 18, which makes it a crime for a person acting under color of any law to willfully deprive a person of a right or privilege protected by the Constitution or laws of the U.S., and we provide crucial insights as to what your rights are when you are stopped by law enforcement. And we delve into pretextual traffic stops: what they are, and how they are used against you. We also offer a “Know Before You Go” pre-travel safety checklist that will help limit your potential exposure and hopefully reduce the likelihood of being pulled over for any legitimate or pretextual moving traffic violations. Finally, we review an in-depth analysis of what more than 17,000 traffic stops in the City of Cleveland have exposed in and among the CDP (Cleveland Division of Police), as it initially appears that they disproportionately pull over Black drivers with much more frequency than white motorists. Follow our sponsors Newsly & Feedspot here: We want to hear from you!
We are excited to announce our next season talking all about death and dying and our favorite stories. Heavy, you say? You betcha, but it's also some of the best conversations we've had about the characters and stories we've grown up with. The new season drops November 2nd and we can't wait for you to enjoy it!
A discussion about how we hijack scripture
Welcome back to this brand-new, explosive, visionary, investigative edition of Light 'Em Up. We're currently being actively downloaded in 114 countries, globally. Thank you for your unwavering, constant support. Without fear or favor, we shine the antiseptic light of the truth on any topic that we undertake and report on. On this episode we focus on and explore emerging ways of addressing critical issues in the criminal justice system and policing, drilling down on and beginning a focused conversation about rethinking how law enforcement is deployed, and the all-too-frequent traffic stops by police. Public safety has long been treated as the near-exclusive province of law enforcement agencies. Police are tasked with countless challenging and many dangerous duties, including but not limited to:
— Responding to active crime scenes
— 911 calls for service (made all the more unpredictable by the prevalence of guns on our streets and gun ownership in the U.S.)
— People who are in the grips of a mental health crisis
— Domestic violence situations
The most common 911 calls include: business checks, disturbances, suspicious persons, and complaints. Simply put, the police are over-tasked and vastly undertrained to deal with the myriad of complex issues that 21st Century Policing encounters and demands, especially those that center around mental health. The risk of being killed while being approached or stopped by law enforcement in the community is 16 times higher for individuals with untreated serious mental illness than for other civilians. For the safety of the public and law enforcement officers equally, we have to begin to re-think, re-examine and re-engineer these concepts, whether it be the biased enforcement of traffic laws by police, which drives racial disparities in the criminal justice system, or topics like we've covered in the past such as “DWB” or Driving While Black in America.
— Police in the U.S.
conduct more than 20 million traffic stops per year
— Some 42% of African Americans say that police have stopped them just because of their race
— 59% of the U.S. public believes that this practice of racial profiling is widespread
— 81% disapprove of it, or at least say that they do
Civilian first responders dedicated to traffic and road safety can better serve communities by resolving traffic and safety issues without the potential for punitive law enforcement action. Racial profiling is a significant policing and social problem. We all witnessed how quickly the police incident with Miami Dolphins star wide receiver Tyreek Hill escalated, captured on the body-worn camera of the officer on scene. To mitigate the risk of harm to both the police and the public, many municipalities have tasked unarmed, non-law-enforcement responders with addressing nonviolent social and medical issues such as mental health crises, or have narrowed the scope of police discretion and duties in traffic enforcement. How many times have we seen a citizen pulled over for a minor traffic infraction, only for it to escalate into a deadly encounter?
— An expired registration
— A crack in a windshield
— Littering
It happens every day, don't be fooled! Click here to see the list of the top jurisdictions that have first responder programs across the U.S. Tune in for ALL the explosive details and follow our sponsors Newsly & Feedspot. We want to hear from you!
In this episode, Andrew Albin and Andrew Kraebel, the editors of Speculum's essay cluster on the textual cult of fourteenth-century mystic Richard Rolle, chat with MMA series producer and host Jonathan Correa-Reyes about Rolle's life, his works, and the contemplative life that he practiced. This episode is a collaboration with Speculum: A Journal of Medieval Studies.For more information about Richard, Andrew, and Andrew, visit www.multiculturalmiddleages.com.
Talk Python To Me - Python conversations for passionate developers
Do you have kids? Maybe nieces and nephews? Or maybe you work in a school environment? Maybe it's just friends who know you're a programmer and ask about how they should go about introducing programming concepts with them. Anna-Lena Popkes is back on the show to share her research on when and how to teach kids programming. We spend the second half of the episode talking about concrete apps and toys you might consider for each age group. Plus, some of these things are fun for adults too. ;) Episode sponsors WorkOS Talk Python Courses Links from the show Anna-Lena: alpopkes.com Magical universe repo: github.com Machine learning basics repo: github.com PyData recording "when and how to start coding with kids": youtube.com Robots and devices Bee Bot: terrapinlogo.com Cubelets: modrobotics.com BBC Microbit: microbit.org RaspberryPi: raspberrypi.com Adafruit Qualia ESP32 for CircuitPython: adafruit.com Zumi: robolink.com Board games Think Fun Robot Turtles Board Game: amazon.com Visual programming: Scratch Jr.: scratchjr.org Scratch: scratch.org Blockly: google.com Microbit's Make Code: microbit.org Code Club: codeclubworld.org Textual programming Code Combat: codecombat.com Hedy: hedycode.com Anvil: anvil.works Coding classes / summer camps (US) Portland Community College Summer Teen Program: pcc.edu Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Mastodon: talkpython Follow Michael on Mastodon: mkennedy
In this class, we will analyze four beautiful piyutim that we chant on Rosh Hashanah and Yom Kippur, uncovering some structural oddities along the way. As we raise questions and examine older prints and medieval manuscripts, we'll uncover a story of mysterious censorship. While we'll piece together what happened, understanding why it happened will prove to be more challenging. Links:
Machzor, Lublin 1551: https://www.nli.org.il/en/books/NNL_ALEPH990011638640205171/NLI
Bodleian Library, MS. Michael 619: https://digital.bodleian.ox.ac.uk/objects/9e481d9a-06e6-41e8-ae36-a58d445a1ffa/surfaces/2e65041c-333b-4c90-9b54-c3566fbf73d8/#
Bodleian Library, MS. Heb. e. 39 (Geniza): https://hebrew.bodleian.ox.ac.uk/fragments/full/MS_HEB_e_39_4b.jpg
Bibliothèque Nationale de France, Hébreu 631: https://gallica.bnf.fr/ark:/12148/btv1b105392732/f25.item
Bodleian Library, MS. Laud Or. 321: https://digital.bodleian.ox.ac.uk/objects/268d1688-4523-4aed-962a-75f24c8cbfd0/surfaces/0b534075-2679-4a1e-88ca-998acfad9899/#
Bodleian Library, MS. Michael 627: https://digital.bodleian.ox.ac.uk/objects/b3f2d1d5-ff07-4a6e-87ea-281c41957925/surfaces/ed999873-315d-46dc-a134-c50ca6f24362/#
Bavarian State Library, Cod. Hebr. 69: https://www.nli.org.il/en/discover/manuscripts/hebrew-manuscripts/viewerpage?vid=MANUSCRIPTS#d=[[PNX_MANUSCRIPTS990001265200205171-1,FL50147206]]
Basel University Library, MS. R II 2: https://www.nli.org.il/en/discover/manuscripts/hebrew-manuscripts/viewerpage?vid=MANUSCRIPTS#d=[[PNX_MANUSCRIPTS990001730950205171-1,FL61239499]]
Bibliothèque Nationale de France, Hébreu 621: https://gallica.bnf.fr/ark:/12148/btv1b105420719/f70.planchecontact
State Library Berlin, MS. OR 4200: https://www.nli.org.il/en/discover/manuscripts/hebrew-manuscripts/viewerpage?vid=MANUSCRIPTS#d=[[PNX_MANUSCRIPTS990000819800205171-1,FL55910250]]
The Vanishing Verses: Curious Cases of Textual Tampering in Our Machzor
Across the humanities and social sciences, scholars increasingly use quantitative methods to study textual data. Considered together, this research represents an extraordinary event in the long history of textuality. More or less all at once, the corpus has emerged as a major genre of cultural and scientific knowledge. In Literary Mathematics: Quantitative Theory for Textual Studies (Stanford UP, 2022), Michael Gavin grapples with this development, describing how quantitative methods for the study of textual data offer powerful tools for historical inquiry and sometimes unexpected perspectives on theoretical issues of concern to literary studies. Student-friendly and accessible, the book advances this argument through case studies drawn from the Early English Books Online corpus. Gavin shows how a copublication network of printers and authors reveals an uncannily accurate picture of historical periodization; that a vector-space semantic model parses historical concepts in incredibly fine detail; and that a geospatial analysis of early modern discourse offers a surprising panoramic glimpse into the period's notion of world geography. Across these case studies, Gavin challenges readers to consider why corpus-based methods work so effectively and asks whether the successes of formal modeling ought to inspire humanists to reconsider fundamental theoretical assumptions about textuality and meaning. As Gavin reveals, by embracing the expressive power of mathematics, scholars can add new dimensions to digital humanities research and find new connections with the social sciences. Michael Gavin is Associate Professor of English at the University of South Carolina and author of The Invention of English Criticism, 1650-1760 (2015). Morteza Hajizadeh is a Ph.D. graduate in English from the University of Auckland in New Zealand.
His research interests are Cultural Studies; Critical Theory; Environmental History; Medieval (Intellectual) History; Gothic Studies; 18th and 19th Century British Literature. YouTube channel. Twitter. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Expert Matchmaker Barbie Adler is back with more dating advice! Plus, STOP with the "Textual" relationships - no pen pals! See omnystudio.com/listener for privacy information.
The Textual Integrity of Ṣaḥīḥ al-Bukhārī: A Study on the Primary Recensions, Textual Variants, and Transmission of the Ṣaḥīḥ
https://www.amazon.co.uk/Textual-Integrity-%E1%B9%A2a%E1%B8%A5%C4%AB%E1%B8%A5-al-Bukh%C4%81r%C4%AB-Transmission/dp/B0CWYY2XBG
Mufti Muntasir Zaman: https://yaqeeninstitute.org/team/muntasir-zaman
Qalam Institute: https://www.qalam.institute/muntasir-zaman
Support this podcast at https://redcircle.com/blogging-theology/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Ella Houston's book Advertising Disability (Routledge, 2024) invites Cultural Disability Studies to consider how advertising, as one of the most ubiquitous forms of popular culture, shapes attitudes towards disability. The research presented in the book provides a much-needed examination of the ways in which disability and mental health issues are depicted in different types of advertising, including charity 'sadvertisements', direct-to-consumer pharmaceutical advertisements and 'pro-diversity' brand campaigns. Textual analyses of advertisements from the eighteenth century onwards reveal how advertising reinforces barriers facing disabled people, such as stigmatising attitudes, ableist beauty 'ideals', inclusionism and the unstable crutch of charity. As well as investigating how socio-cultural meanings associated with disability are influenced by multimodal forms of communication in advertising, insights from empirical research conducted with disabled women in the United Kingdom and the United States are provided. Moving beyond traditional textual approaches to analysing cultural representations, the book emphasises how disabled people and activists develop counternarratives informed by their personal experiences of disability, challenging ableist messages promoted by advertisements. From start to finish, activist concepts developed by the Disabled People's Movement and individuals' embodied knowledge surrounding disability, impairments and mental health issues inform critiques of advertisements. Its critically informed approach to analysing portrayals of disability is relevant to advertisers, scholars and students in advertising studies and media studies who are interested in portraying diversity in marketing and promotional materials as well as scholars and students of disability studies and sociology more broadly. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! 
https://newbooksnetwork.supportingcast.fm/new-books-network
Follow Him: A Come, Follow Me Podcast featuring Hank Smith & John Bytheway
Dr. Boren continues to examine the war chapters of Alma and the principles of righteous leadership, as well as the evidence of God's hand in preparing the Book of Mormon for modern-day application and testimony.

SHOW NOTES/TRANSCRIPTS
English: https://tinyurl.com/podcastBM33EN
French: https://tinyurl.com/podcastBM33FR
German: https://tinyurl.com/podcastBM33DE
Portuguese: https://tinyurl.com/podcastBM33PT
Spanish: https://tinyurl.com/podcastBM33ES

YOUTUBE
https://youtu.be/9FQtlm-rBio

ALL EPISODES/SHOW NOTES
followHIM website: https://www.followHIMpodcast.com

FREE PDF DOWNLOADS OF followHIM QUOTE BOOKS
New Testament: https://tinyurl.com/PodcastNTBook
Old Testament: https://tinyurl.com/PodcastOTBook

WEEKLY NEWSLETTER
https://tinyurl.com/followHIMnewsletter

SOCIAL MEDIA
Instagram: https://www.instagram.com/followHIMpodcast
Facebook: https://www.facebook.com/followhimpodcast

TIMECODE
00:00 Part II – Dr. David Boren
03:14 Alma 46:21 - The title of liberty and running together
08:00 Alma 46 - Joseph of Egypt and covenant reminders
11:02 Alma 47 - Amalickiah stirs up Lamanites to anger
12:44 Alma 47:12-18 - Lehonti was "fixed in his mind"
16:24 Don't come down from your mountain
20:53 The Amalickiahs of today
22:19 Alma 28:1-5 - Strengthening weak places
25:01 Alma 48:7-17 - Moroni's characteristics and preparations for war
28:20 Alma 48:19 - Helaman and others serve without recognition
32:44 Textual analysis of Amalickiah's 13 words
36:31 Alma 48:21-24 - How do Christians go to war?
38:40 Alma 49:10-14 - Great leaders make needed changes
44:21 Alma 49:26 - Lehi and the people of Morianton
45:18 Alma 49:36 - Being hurt by people close to us
49:41 Alma 51:2-9 - Contentions and the voice of the people
50:45 Alma 51:19-21, 33-34 - End of the king-men and Teancum
53:11 Alma 52:1 - Amalickiah killed on the first day of the year
54:36 The form of the Book of Alma
56:44 Alma 52 - Asking for help
57:36 Alma 52:19 - Church councils and leadership
1:02:57 Inviting the Lord into our councils
1:04:42 Dr. Boren's ideas on leadership and testimony of Jesus Christ and the Book of Mormon
1:08:25 End of Part II – Dr. David Boren

Thanks to the followHIM team:
Steve & Shannon Sorensen: Cofounder, Executive Producer, Sponsor
David & Verla Sorensen: Sponsors
Dr. Hank Smith: Co-host
John Bytheway: Co-host
David Perry: Producer
Kyle Nelson: Marketing, Sponsor
Lisa Spice: Client Relations, Editor, Show Notes
Jamie Neilson: Social Media, Graphic Design
Will Stoughton: Video Editor
Krystal Roberts: Translation Team, English & French Transcripts, Website
Ariel Cuadra: Spanish Transcripts

"Let Zion in Her Beauty Rise" by Marshall McDonald
https://www.marshallmcdonaldmusic.com
Let's talk ADHD and hypersexuality - raw dogging it without medication! ED brought on by performance anxiety, or is it a serious health concern... One wife has found her husband cheating via text... his lack of remorse is very telling. Tune in now! VIIA Hemp: Get 20% off your purchase when you use code "Housewife" at VIIAHemp.com (21+) Bathmate: Go to BathmateDirect.com/HornyHousewife to save 10% on your purchase. Epiphany: Get 20% off Epiphany Clit Arousal Serum with code HOUSEWIFE at tryepiphany.com Pop Star: Use code HOUSEWIFE to get 20% off your first order at https://www.popstarlabs.com/hornyhousewife. ASK ANON @ www.thehornyhousewifepodcast.com Follow me on IG @thehornyhousewifepodcast
In our final episode of our polar environmental humanities series, we have Penn State English professor Hester Blum on to discuss her environmental humanities research on polar ecomedia! Dr. Blum discusses the ephemeral texts and productions aboard Arctic and Antarctic voyages including newspapers. Newspapers on polar voyages? Yes, you heard that right. These texts have contemporary and global lessons to teach in that their production took place while in extreme environments. For more on Hester: Twitter: @hesterblum Email: hester.blum@psu.edu Website: hesterblum.com ASLE EcoCast: If you have an idea for an episode, please submit your proposal here: https://forms.gle/Y1S1eP9yXxcNkgWHA Twitter: @ASLE_EcoCast Lindsay Jolivette: @lin_jolivette If you're enjoying the show, please consider subscribing, sharing, and writing reviews on your favorite podcast platform(s)! Episode recorded May 22, 2024. CC BY-NC-ND 4.0
Biblical scholarship can be intimidating for many of us. Textual criticism, history, philosophy, original languages, you name it. Books on these subjects can be daunting and overwhelming to digest. Where to begin? This week and next on the Profile, we think we have a helpful recommendation. Author Dr. Benjamin Shaw, longtime research assistant to Dr. Gary Habermas, has written a brand-new and concise book on the historical reliability of the New Testament - Trustworthy - Thirteen Arguments for the Reliability of the New Testament (www.ivpress.com/Trustworthy). Whether you are familiar with the topic or not, Ben's book serves as a helpful reminder for those who already know the subject, and an insightful, easy-to-understand primer for those who are just curious or have recently become Christians. Dr. Ben Shaw has been working with Dr. Gary R. Habermas for over a decade doing philosophical, historical, and theological research while also publishing multiple works together. Additionally, part of his responsibilities was to minister to various people who had questions about Christianity: disciples, doubters, and skeptics. Dr. Shaw has authored or co-authored over two dozen publications (see below) and has given presentations at conferences (AAR, etc.) and universities (UVA, etc.). He has taught at Liberty University and Colorado Christian University. Ben has an MA in Religious Studies and a Ph.D. in Theology and Apologetics from Liberty University. Related Links: Free access to some related Watchman Profiles: Watchman Fellowship 4-page Profile on Atheism by Dr. Robert M. Bowman, Jr: www.watchman.org/Atheism Watchman Fellowship 4-page Profile on Agnosticism by W. 
Russell Crawford: www.watchman.org/Agnostic Watchman Fellowship 4-page Profile on Naturalism by Daniel Ray: www.watchman.org/Naturalism Additional Resources FREE: We are also offering a subscription to our 4-page bimonthly Profiles here: www.watchman.org/Free. SUPPORT: Help us create more content like this. Make a tax-deductible donation here: www.watchman.org/give. Apologetics Profile is a ministry of Watchman Fellowship. For more information, visit www.watchman.org. © Watchman Fellowship, Inc.
As the fallout from the momentous Supreme Court decision on presidential immunity continues to reverberate, MSNBC legal analysts Andrew Weissmann and Mary McCord offer some updates, then turn to another significant ruling from the High Court out on Friday: Fischer v. U.S. At issue was whether the charge of obstruction of an official proceeding could be applied to Capitol rioters in the wake of their actions on January 6th. Despite the ruling in favor of the defendant, their guest Ryan Goodman of Just Security confirms the limited impact this decision will have on those charged for their role in the chaos of January 6th, and on Donald Trump's election interference case in D.C. Further reading: Here is the analysis Ryan, Mary and Andrew wrote regarding the Fischer decision for Just Security: The Limited Effects of Fischer: DOJ Data Reveals Supreme Court's Narrowing of Jan. 6th Obstruction Charges Will Have Minimal Impact.
Welcome to Day 2381 of Wisdom-Trek, and thank you for joining me. This is Guthrie Chamberlain, Your Guide to Wisdom – Theology Thursday – Why Circumcision? – I Dare You Not To Bore Me With The Bible Wisdom-Trek Podcast Script - Day 2381 Welcome to Wisdom-Trek with Gramps! I am Guthrie Chamberlain, and we are on Day 2381 of our Trek. The Purpose of Wisdom-Trek is to create a legacy of wisdom, to seek out discernment and insights, and to boldly grow where few have chosen to grow before. Today is the fourth lesson in our segment, Theology Thursday. Utilizing excerpts from a book titled I Dare You Not To Bore Me With The Bible, written by Hebrew Bible scholar and professor Dr. Michael S. Heiser, we will invest a couple of years going through the entire Bible, exploring short Biblical lessons that you may not have received in Bible classes or Church. The Bible is a wonderful book. Its pages reveal the epic story of God's redemption of humankind and the long, bitter conflict against evil. Yet it's also a book that seems strange to us. While God's Word was written for us, it wasn't written to us. Today, our lesson is Why Circumcision? Circumcision is mentioned nearly 100 times in the Bible. It is a central focus for Old Testament and New Testament theology (Rom 4:9-12; Gal 2:1-12; 5:1-10). If we're honest, that just sounds absurd. Circumcision was the sign of God's covenant with Abraham (Gen 17:9-14), but it was also widely practiced in the ancient Near East (the method, though, wasn't always the same). Jeremiah 9:25-26 notes that Israel's neighbors were circumcised. Archaeologists have also found that it was practiced in Syria and Phoenicia. Textual remains indicate that circumcision in Egypt goes back to at least 2200 BC, centuries before the Israelites were enslaved. 
Israelite men may have even submitted to Egyptian circumcision while in Egypt, since Joshua commanded the men crossing into the promised land to be recircumcised in order to “roll away the reproach of Egypt” (Josh 5:2, 9). The evidence suggests that circumcision did not distinguish Israelite men from their foreign neighbors. When God told Abraham to be circumcised, he was past the age of bearing children, and his wife, Sarah, was incapable of having children (Gen 18:11). Nevertheless, it would be through Sarah's womb (Gen 17:21; 18:14) that God would fulfill His promise of innumerable offspring to Abraham (Gen 12:1-3). God's covenant with Abraham could only be realized by miraculous intervention. The miraculous nature of Isaac's birth is the key to understanding circumcision as the sign of the covenant. After God made His promise to Abraham, every male member of Abraham's household was required to be circumcised (Gen 17:15-27). Every male—and every woman, since the males were all incapacitated for a time—knew that circumcision was connected to God's promise. It probably didn't make any sense, though, until Sarah became pregnant. Everyone in Abraham's household witnessed the miracle of Isaac's birth. From that point on, every male understood why they had been circumcised: Their entire race—their very existence—began with a miraculous act of God. Every woman was reminded of this when she had sexual relations with her Israelite husband and when her sons were circumcised. Circumcision was a visible, continuous reminder that Israel owed its...
Early Jewish believers in Yeshua created a catechism for new disciples from among the nations. Lost for hundreds of years, this document, the Didache, reveals the expectations and standards the early Yeshua-believing community put in place for Gentile disciples of Yeshua. Our guest today, Dr. Daniel Nessim, has spent his academic career demonstrating that the first disciples of Yeshua considered the Torah to be the moral compass of all those who joined the community of disciples, whether Jewish or Gentile. – Episode Takeaways – The Didache is a discipleship manual created by early Jewish believers in Yeshua for personal Gentile discipleship and how to conduct themselves within the community. The Didache was written around 80 CE during a tumultuous time in history and reflects the beliefs and values of the early Jewish believers, who considered the Torah to be the moral compass for all disciples. The arc of the Didache progresses from directing individuals on the proper moral path to addressing issues of community life that were particularly applicable at the time the Didache was written. – Episode Chapters – 00:00 Welcome Dr. Daniel Nessim to Messiah Podcast! 02:41 The Didache: What it is and where it came from. 05:44 The purpose and significance of the Didache. 08:16 Relationship of the Didache to other early Christian writings. 09:49 The historical occasion for writing down the Didache. 12:37 The clues that betray the Jewishness of the Didache. 14:20 Rediscovery of the Didache. 15:52 Impact of the Dead Sea Scrolls on Didache research. 21:44 Thesis of the book “Torah for Gentiles” by Dr. Daniel Nessim. 28:16 Parallels with Greek moral instruction. 31:12 Modern interpretations of the Didache. 32:36 Textual criticism and the similar sayings of Yeshua in the Didache and the Gospels. 37:52 Hard lines and leniencies in the Didache's concession passages. 41:37 Is there a specific list of Gentile responsibilities to the Torah? 49:02 Judaism as a religion of grace. 
50:07 Why the Didache disappeared. 52:24 The end of Jewish Christianity and the historical context of the Didache. 58:27 The value of the Didache in the present era. – Episode Resources – Torah for Gentiles?: What the Jewish Authors of the Didache Had to Say, by Dr. Daniel Nessim https://www.amazon.com/Torah-Gentiles-Jewish-Authors-Didache-ebook/dp/B091G5C914 The Way of Life, The Rediscovered Teachings of the Twelve Jewish Apostles to the Gentiles, by Toby Janicki https://ffoz.store/products/the-way-of-life-book Messiah Podcast is a production of First Fruits of Zion (https://ffoz.org) in conjunction with Messiah Magazine. This publication is designed to provide rich substance, meaningful Jewish contexts, cultural understanding of the teaching of Jesus, and the background of modern faith from a Messianic Jewish perspective. Messiah Podcast theme music provided with permission by Joshua Aaron Music (http://JoshuaAaron.tv). “Cover the Sea” Copyright WorshipinIsrael.com songs 2020. All rights reserved.
Author and content creator Drew Afualo (The Comment Section Podcast) joins Whitney Cummings for a brand new episode of the Good For You Podcast to discuss proper text grammar, FaceTiming with famous friends, the worst male celebrity and more. Big Baby 2024 Tour Tickets Now On Sale: https://bit.ly/3PaFegF Thank you to our sponsors! SHOPIFY: https://www.shopify.com/whitney SHHTAPE: https://www.shh_tape.com code:WHITNEY50 FACTOR: https://www.factormeals.com/whitney50 code WHITNEY50 RITUAL: https://www.ritual.com/whitney 00:00 Welcome To The Show 10:04 That Is A Virgo 19:57 Come See Me Live! 24:00 Texting With Brittany Broski 33:43 You're On My Level 42:41 Fired From The NFL 56:42 Worst Male Celebrity 1:05:09 Do You Lotion? Drew Afualo is a content creator, women's rights advocate, podcast host and author, best known as TikTok's “Crusader for Women”. From her hilariously witty content to her no-BS approach to shoveling misogyny out of the TikTok app via viral takedowns, Drew takes female empowerment to new levels and has established herself as a preeminent feminist leader of her generation with an audience of over nine million on social media. She was named Adweek's 2022 Digital & Tech Creator of the Year, Meta's Creator of Tomorrow, one of Time magazine's Next Generation Leaders, and one of Forbes' Top Creators of 2023. Drew also served as the official red-carpet correspondent at the 2023 Academy Awards and hosts the Spotify-exclusive podcast The Comment Section. Her first book, LOUD: Accept Nothing Less Than the Life You Deserve, will publish July 30th, 2024 via AUWA Books and is available for pre-order now at drew-afualo.com. URL: https://www.drew-afualo.com Socials: TikTok: @drewafualo // IG: @drewafualo // Twitter: @drewafualo1247