In this episode, Marlow Warnicke, lead for the Slinky project, and Tim Wickberg, CTO of SchedMD, join us to discuss their work integrating the HPC scheduler Slurm with Kubernetes. They provide background on Slurm's origins, its open-source nature, and its evolution into Slinky to address Kubernetes's limitations in scheduling AI and HPC workloads. The discussion touches on the unique challenges in the MLOps space, the need for fine-grained resource control, and their collaborative efforts with various communities to enhance Kubernetes's efficiency. They also share the roadmap for Slinky and avenues for community collaboration and contribution.

00:00 Introduction and Guest Introductions
00:39 Overview of Slurm and Its Evolution
01:44 The Fusion of Slurm and Kubernetes: Slinky
04:14 Challenges in Kubernetes Scheduling
09:07 Unique Challenges in MLOps
12:58 Community Collaboration and Future Plans
16:41 Getting Involved and Final Thoughts
In this podcast episode, we talked with Andrey Cheptsov about the future of AI infrastructure.

About the speaker: Andrey Cheptsov is the founder and CEO of dstack, an open-source alternative to Kubernetes and Slurm built to simplify the orchestration of AI infrastructure. Before dstack, Andrey worked at JetBrains for over a decade, helping teams build best-in-class developer tools.

During the event, Andrey discussed the complexities of AI infrastructure. We explore topics like the challenges of using Kubernetes for AI workloads, the need to rethink container orchestration, and the future of hybrid and cloud-only infrastructures. Andrey also shares insights into the role of on-premise and bare-metal solutions, edge computing, and federated learning.

0:00 Andrey's Career Journey: From JetBrains to dstack
5:00 The Motivation Behind dstack
7:00 Challenges in Machine Learning Infrastructure
10:00 Transitioning from Cloud to On-Prem Solutions
14:30 Reflections on OpenAI's Evolution
17:30 Open Source vs. Proprietary Models: A Balanced Perspective
21:01 Monolithic vs. Decentralized AI Businesses
22:05 The Role of Privacy and Control in AI for Industries like Banking and Healthcare
30:00 Challenges in Training Large AI Models: GPUs and Distributed Systems
37:03 DeepSpeed's Efficient Training Approach vs. Brute-Force Methods
39:00 Challenges for Small and Medium Businesses: Hosting and Fine-Tuning Models
47:01 Managing Kubernetes Challenges for AI Teams
52:00 Hybrid vs. Cloud-Only Infrastructure
56:03 On-Premise vs. Bare-Metal Solutions
58:05 Exploring Edge Computing and Its Challenges
Season 6 Episode 9: Cure

So the gang go and visit a civilization that are DEF not Nazis! Weird as hell, tho. They got a super drink that cures diseases, and all they want in return is our rolodex of Goa'uld planets. The gang is cautious. Then Jonas and Teal'c break into a warehouse, and Here There Be Goa'uld, all swimmin' in the pools. The Slurm is from them? Eww. But where'd they come from? Will it be a super, canon-defying plot twist which makes a lot of previous episodes (especially Hathor) make no goddamn sense!!!??? . . . yes, yes it will.

00:00 - Intro
7:42 - 24 Seconds
9:10 - Episode Debrief
1:16:00 - Were We Comforted
1:18:18 - Yeh Neh or Meh
1:21:33 - Next Episode
1:22:48 - ComeTrya!
1:23:35 - Get To Know Your Hosts
1:27:25 - Outro
Leaving Lost Balance. They give you a hat, don't they? Shapes. Naga the Captain. The speed of the pain of the white light. Lotus McBotus vibes. Lights in sticks. Skygazing susurrusly. Blind Temple burn. Together4NEVA. Not fine, nothing is fine. Fail wheel. Cone! Fartin' Fire Piece. Helios? Poking of stuff. The guy in the shed. Solid Ramp. Jackelope razor blade. Peckening. For your wine blog. A little neighborhood I loved. Beth makes ch-ch-ch-ch-changes. Rex Rangers Esplanade. D.J. Slurm, Plumers 2024, & Crocs boots for fuck's sake. Happiness is a Covid in June. Going to hell (station). Old bike bye-bye. Appreciating listeners appreciating Accuracy Third. Never giving thanks again. Commisery. Incelcaminos. "Sunscreen" dispensing.

https://burningman.org/event/2024-art-installations/
https://www.facebook.com/reel/444493248620917

MUSIC: "Shout at a DJ" by Late Bus
JOIN OUR DISCORD: https://discord.gg/qXUb7hf6bd
FOLLOW US ON BLUESKY: @accuracy3rd.bsky.social
Patreon us on Patreon: https://www.patreon.com/A3rd
Good news everyone! It's time to open... The Scary Gate. We leap forward in time to the year 3000 and the fandom of Futurama. In this episode we got members of the Gateleapers Express cast together for a series of games and challenges set in everyone's second favorite Matt Groening fandom. Can Ben and Jon pilot their odd space team to victory? Or will Audra and Chris be the underdogs that wait years for us to return? Find out, in the world of tomorrow... Now in color!

Follow and support the Kickstarter for Space Oddities Issue #3!!
Listen to Geeksploration

We are an ad and listener supported podcast, but mainly listener supported! Consider supporting our production over at patreon.com/gateleapers. All supporters get full videos of each episode recording, bonus monthly gameshows and ad-free audio episodes.

Do you have a suggestion for a fandom we've not yet covered? Are you a podcaster, creative or performer who would like to be a guest on our show? Get in touch! gateleapers@gmail.com

In this episode players must know their IMDb trivia, play a game of audio Pictionary, find the correct date on the Futurama timeline, and know their celebrity guest stars.

Music: BoucheDag by Alexander Nakarada (serpentsoundstudios.com)
Licensed under Creative Commons BY Attribution 4.0 License
https://creativecommons.org/licenses/by/4.0/

Become a supporter of this podcast: https://www.spreaker.com/podcast/gateleapers-a-fandom-gameshow--5150861/support.
Join us at our first in-person conference on June 25, all about AI quality: https://www.aiqualityconference.com

Simon Karasik is a proactive and curious ML engineer with 5 years of experience. He has developed and deployed ML models at web and big-data scale for ads and tax applications.

Huge thank you to Nebius AI for sponsoring this episode. Nebius AI - https://nebius.ai/

MLOps podcast #228 with Simon Karasik, Machine Learning Engineer at Nebius AI, Handling Multi-Terabyte LLM Checkpoints.

// Abstract
The talk provides a gentle introduction to the topic of LLM checkpointing: why it is hard, and how big the checkpoints are. It covers various tips and tricks for saving and loading multi-terabyte checkpoints, as well as the selection of cloud storage options for checkpointing.

// Bio
Full-stack machine learning engineer, currently working on infrastructure for LLM training, with previous experience in ML for ads, speech, and tax.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Simon on LinkedIn: https://www.linkedin.com/in/simon-karasik/

Timestamps:
[00:00] Simon's preferred beverage
[01:23] Takeaways
[04:22] Simon's tech background
[08:42] Zombie models garbage collection
[10:52] The road to LLMs
[15:09] Trained models Simon worked on
[16:26] LLM Checkpoints
[20:36] Confidence in AI Training
[22:07] Different Checkpoints
[25:06] Checkpoint parts
[29:05] Slurm vs Kubernetes
[30:43] Storage choices lessons
[36:02] Paramount components for setup
[37:13] Argo workflows
[39:49] Kubernetes node troubleshooting
[42:35] Cloud virtual machines have pre-installed monitoring
[45:41] Fine-tuning
[48:16] Storage, networking, and complexity in network design
[50:56] Start simple before advanced; consider model needs
[53:58] Join us at our first in-person conference on June 25, all about AI quality
Join us in this episode of the Channels podcast to get a recap of some of the biggest features added to Nextflow in 2023, and take a look at some of the things coming in 2024. We tried to do this in Episode 27 but ended up spending nearly all the time discussing community and nf-core, so this episode is dedicated to just Nextflow features. We cover Phil's top hits:

2023
- Fusion support on Azure Batch, Google Batch, SLURM, LSF
- Spack integration
- Markdown docs, developer docs
- New `nextflow inspect` command
- Channel "topics"
- AWS Fargate for compute tasks

2024
- Job arrays
- Garbage collection (aka work directory cleanup)
- Command line interface v2
- Improvements to Nextflow packaging
- Workflow inputs and outputs schema
- Module configuration / config v2
Jonathan Frankle works as Chief Scientist (Neural Networks) at MosaicML (recently acquired by Databricks), a startup dedicated to making it easy and cost-effective for anyone to train large-scale, state-of-the-art neural networks. He leads the research team. MLOps podcast #205 with Jonathan Frankle, Chief Scientist (Neural Networks) at Databricks, The Myth of AI Breakthroughs, co-hosted by Denny Lee, brought to us by our Premium Brand Partner, Databricks. // Abstract Jonathan takes us behind the scenes of the rigorous work they undertake to test new knowledge in AI and to create effective and efficient model training tools. With a knack for cutting through the hype, Jonathan focuses on the realities and usefulness of AI and its application. We delve into issues such as face recognition systems, the 'lottery ticket hypothesis,' and robust decision-making protocols for training models. Our discussion extends into Jonathan's interesting move into the world of law as an adjunct professor, the need for healthy scientific discourse, his experience with GPUs, and the amusing claim of a revolutionary algorithm called Qstar. // Bio Jonathan Frankle is Chief Scientist (Neural Networks) at Databricks, where he leads the research team toward the goal of developing more efficient algorithms for training neural networks. He arrived via Databricks' $1.3B acquisition of MosaicML as part of the founding team. He recently completed his PhD at MIT, where he empirically studied deep learning with Prof. Michael Carbin, specifically the properties of sparse networks that allow them to train effectively (his "Lottery Ticket Hypothesis" - ICLR 2019 Best Paper). In addition to his technical work, he is actively involved in policymaking around challenges related to machine learning. He earned his BSE and MSE in computer science at Princeton and has previously spent time at Google Brain and Facebook AI Research as an intern and Georgetown Law as an Adjunct Professor of Law. 
// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: www.jfrankle.com
Facial recognition: perpetuallineup.org
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin, paper: https://arxiv.org/abs/1803.03635

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Denny on LinkedIn: https://linkedin.com/in/dennyglee
Connect with Jonathan on LinkedIn: https://www.linkedin.com/in/jfrankle/

Timestamps:
[00:00] Jonathan's preferred coffee
[01:16] Takeaways
[07:19] LM Avalanche Panel Surprise
[10:07] Adjunct Professor of Law
[12:59] Low facial recognition accuracy
[14:22] Automated decision making human in the loop argument
[16:09] Control vs. Outsourcing Concerns
[18:02] perpetuallineup.org
[23:41] Face Recognition Challenges
[26:18] The lottery ticket hypothesis
[29:20] Mosaic Role: Model Expertise
[31:40] Expertise Integration in Training
[38:19] SLURM opinions
[41:30] GPU Affinity
[45:04] Breakthroughs with QStar
[49:52] Deciphering the noise advice
[53:07] Real Conversations
[55:47] How to cut through the noise
[1:00:12] Research Iterations and Timelines
[1:02:30] User Interests, Model Limits
[1:06:18] Debugability
[1:08:00] Wrap up
We are running an end of year survey for our listeners! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here!

Before language models became all the rage in November 2022, image generation was the hottest space in AI (it was the subject of our first piece on Latent Space!). In our interview with Sharif Shameem from Lexica we talked through the launch of Stable Diffusion and the early days of that space. At the time, the toolkit was still pretty rudimentary: Lexica made it easy to search images, you had the AUTOMATIC1111 Web UI to generate locally, some HuggingFace spaces that offered inference, and eventually DALL-E 2 through OpenAI's platform, but not much beyond basic text-to-image workflows.

Today's guest, Suhail Doshi, is trying to solve this with Playground AI, an image editor reimagined with AI in mind. Some of the differences compared to traditional text-to-image workflows:

* Real-time preview rendering using consistency models: as you change your prompt, you can see changes in real time before doing a final rendering of it.
* Style filtering: rather than having to prompt exactly how you'd like an image to look, you can pick from a whole range of filters, both from Playground's model as well as Stable Diffusion (like RealVis, Starlight XL, etc.). We talk about this at 25:46 in the podcast.
* Expand prompt: similar to DALL-E 3, Playground will do some prompt tuning for you to get better results in generation. Unlike DALL-E 3, you can turn this off at any time if you are a prompting wizard.
* Image editing: after generation, you have tools like a magic eraser, inpainting pencil, etc. This makes it easier to do a full workflow in Playground rather than switching to another tool like Photoshop.

Outside of the product, they have also trained a new model from scratch, Playground v2, which is fully open source and open weights and allows for commercial usage.
They benchmarked the model against SDXL across 1,000 prompts and found that humans preferred the Playground generation 70% of the time. They had similar results on PartiPrompts.

They also created a new benchmark, MJHQ-30K, for "aesthetic quality":

> We introduce a new benchmark, MJHQ-30K, for automatic evaluation of a model's aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality. We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.

Suhail was pretty open in saying that Midjourney is currently the best product for image generation out there, and that's why they used it as the base for this benchmark:

> I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately care: how close are you getting to the thing that people mostly agree with? [00:23:47]

We also talked a lot about Suhail's founder journey, from starting Mixpanel in 2009, then going through YC again with Mighty, and eventually sunsetting that to pivot into Playground.
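The FID metric that MJHQ-30K is built on has a closed form over feature statistics: FID = ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). As a rough, NumPy-only sketch of that formula (the function names are ours, and feature extraction is omitted; real evaluations compute the means and covariances over Inception-network features of the two image sets):

```python
import numpy as np


def _sqrtm_psd(m):
    # Matrix square root of a symmetric positive semi-definite matrix via
    # eigendecomposition; tiny negative eigenvalues from floating-point
    # noise are clipped to zero.
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T


def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fit to two sets of feature rows."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)
    # Tr((Ca Cb)^1/2) equals Tr((Ca^1/2 Cb Ca^1/2)^1/2), and the latter
    # argument is symmetric PSD, so the PSD square root above applies.
    s = _sqrtm_psd(ca)
    covmean = _sqrtm_psd(s @ cb @ s)
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(ca + cb - 2.0 * covmean))
```

Identical feature sets give an FID of zero, and shifting one set's mean by 1 in each of d dimensions adds exactly d to the score, which is a handy sanity check before pointing the metric at real Inception features and a curated reference set like the Midjourney data described above.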
Enjoy!

Show Notes
* Suhail's Twitter
* "Starting my road to learn AI"
* Bill Gates book trip
* Playground
* Playground v2 Announcement
* $40M raise announcement
* "Running infra dev ops for 24 A100s"
* Mixpanel
* Mighty
* "I decided to stop working on Mighty"
* Fast.ai
* Civit

Timestamps
* [00:00:00] Intros
* [00:02:59] Being early in ML at Mixpanel
* [00:04:16] Pivoting from Mighty to Playground and focusing on generative AI
* [00:07:54] How DALL-E 2 inspired Mighty
* [00:09:19] Reimagining the graphics editor with AI
* [00:17:34] Training the Playground V2 model from scratch to advance generative graphics
* [00:21:11] Techniques used to improve Playground V2 like data filtering and model tuning
* [00:25:21] Releasing the MJHQ-30K benchmark to evaluate generative models
* [00:30:35] The limitations of current models for detailed image editing tasks
* [00:34:06] Using post-generation user feedback to create better benchmarks
* [00:38:28] Concerns over potential misuse of powerful generative models
* [00:41:54] Rethinking the graphics editor user experience in the AI era
* [00:45:44] Integrating consistency models into Playground using preview rendering
* [00:47:23] Interacting with the Stable Diffusion LoRAs community
* [00:51:35] Running DevOps on A100s
* [00:53:12] Startup ideas?

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:15]

Swyx: Hey, and today in the studio we have Suhail Doshi, welcome. [00:00:18]

Suhail: Yeah, thanks. Thanks for having me. [00:00:20]

Swyx: So among many things, you're a CEO and co-founder of Mixpanel, and I think about three years ago you left to start Mighty, and more recently, I think about a year ago, transitioned into Playground, and you've just announced your new round. How do you like to be introduced beyond that? [00:00:34]

Suhail: Just founder of Playground is fine, yeah, prior co-founder and CEO of Mixpanel.
[00:00:40]

Swyx: Yeah, awesome. I'd just like to touch on Mixpanel a little bit, because it's obviously one of the more successful analytics companies (we previously had Amplitude on), and I'm curious if you had any reflections on the interaction of that amount of data that people would want to use for AI. I don't know if there's still a part of you that stays in touch with that world. [00:00:59]

Suhail: Yeah, I mean, the short version is that maybe back in like 2015 or 2016, I don't really remember exactly, because it was a while ago, we had an ML team at Mixpanel, and I think this is when maybe deep learning or something really just started getting kind of exciting, and we were thinking that maybe given that we had such vast amounts of data, perhaps we could predict things. So we built two or three different features. I think we built a feature where we could predict whether users would churn from your product. We made a feature that could predict whether users would convert. We built a feature that could do anomaly detection, like if something occurred in your product that was just very surprising, maybe a spike in traffic in a particular region, can we tell you that that happened? Because it's really hard to know everything that's going on with your data; can we tell you something surprising about your data? And we tried all of these various features, and most of it boiled down to just, you know, using logistic regression, and it never quite seemed very groundbreaking in the end. And so I think, you know, we had a four or five person ML team, and I think we never expanded it from there. And I did all these fast.ai courses trying to learn about ML.

Swyx: That's the first time you did fast.ai?

Suhail: Yeah, that was the first time I did fast.ai. Yeah, I think I've done it now three times, maybe. [00:02:12]

Swyx: Oh, okay. [00:02:13]

Suhail: I didn't know it was the third. No, no, just me reviewing it, it's maybe three times, but yeah.
[00:02:16]

Swyx: You mentioned prediction, but honestly, like, it's also just about the feedback, right? The quality of feedback from users, I think, is useful for anyone building AI applications. [00:02:25]

Suhail: Yeah. Yeah, I think I haven't spent a lot of time thinking about Mixpanel because it's been a long time, but sometimes I'm like, oh, I wonder what we could do now. And then I kind of move on to whatever I'm working on, but things have changed significantly since. [00:02:39]

Swyx: And then maybe we'll touch on Mighty a little bit. Mighty was very, very bold. My framing of it was: you will run our browsers for us, because everyone has too many tabs open. I have too many tabs open, and they're slowing down my machine, and you can do it better for us in a centralized data center. [00:02:51]

Suhail: Yeah, we were first trying to make a browser that we would stream from a data center to your computer at extremely low latency, but the real objective wasn't trying to make a browser or anything like that. The real objective was to try to make a new kind of computer. And the thought was just that, like, you know, we have these computers in front of us today and we upgrade them, or they run out of RAM, or they don't have enough RAM, or not enough disk, or, you know, there's some limitation with our computers; perhaps like data locality is a problem. Why do I need to think about upgrading my computer ever? And so, you know, we just had to kind of observe that, like, well, actually it seems like a lot of applications are just now in the browser. You know, it's like, how many real desktop applications do we use relative to the number of applications we use in the browser? So it's just this realization that actually, like, you know, the browser was effectively becoming more or less our operating system over time. And so then that's why we kind of decided to go, hmm, maybe we can stream the browser.
Unfortunately, the idea did not work, for a couple of different reasons, but the objective was to try to make a new kind of computer. [00:03:50]

Swyx: Yeah, very, very bold. [00:03:51]

Alessio: Yeah, and I was there at YC Demo Day when you first announced it. It was, I think, the last or one of the last in-person ones, at Pier 34 in Mission Bay. How do you think about that now, when everybody wants to put some of these models in people's machines and some of them want to stream them in? Do you think there's maybe another wave of the same problem? Before, it was like browser apps were too slow; now it's like models are too slow to run on device. [00:04:16]

Suhail: Yeah. I mean, I've obviously pivoted away from Mighty, but a lot of what I somewhat believed at Mighty, maybe why I'm so excited about AI and what's happening, a lot of what Mighty was about was moving compute somewhere else, right? Right now, applications get limited quantities of memory, disk, networking, whatever your home network has, et cetera. You know, what if we could shift compute, and then these applications would have vastly more compute than they do today? Right now it's just like client-backend services, but, you know, what if we could change the shape of how applications could interact with things? And it's changed my thinking. In some ways, AI is like a bit of a continuation of my belief that perhaps we can really shift compute somewhere else.
One of the problems with Mighty was that JavaScript is single-threaded in the browser. And so once I realized that most of a web app is just going to be single-threaded JavaScript, then the only thing you could do, notwithstanding changing JavaScript, which is a fool's errand most likely, is make a better CPU, right? And there's like three CPU manufacturers, two of which sell, you know, big ones, AMD and Intel, and then of course Apple made the M1. And it's not like single-threaded, single-core CPU performance was increasing very fast; it's plateauing rapidly. And even these different companies were not doing as good of a job, you know, sort of with the continuation of Moore's law. But what happened in AI was that, if you think of the AI model as a computer program, like a compiled computer program, it is literally built and designed to do massive parallel computations. And so if you could take the universal approximation theorem to its kind of logical complete point, you know, you're like, wow, I can make computation happen really rapidly and in parallel somewhere else. So you end up with these really amazing models that can do anything. It just turned out that perhaps the new kind of computer would simply be shifted, you know, into these really amazing AI models in reality. Yeah. [00:06:30]

Swyx: I think Andrej Karpathy has been making a lot of analogies with the LLM OS. [00:06:34]

Suhail: I saw his video and I watched that, you know, maybe two weeks ago or something like that. I was like, oh man, I very much resonate with this idea. [00:06:41]

Swyx: Why didn't I see this three years ago? [00:06:43]

Suhail: Yeah. I think, I think there still will be, you know, local models, and then there'll be these very large models that have to be run in data centers. I think it just depends on kind of the right tool for the job, like any engineer would probably care about.
But I think that, you know, by and large, if the models continue to keep getting bigger, you're always going to be wondering whether you should use the big thing or the small, you know, the tiny little model. And it might just depend on, like, do you need 30 FPS or 60 FPS? Maybe that would be hard to do, you know, over a network. [00:07:13]

Swyx: You tackled a much harder problem, latency-wise, than the AI models actually require. Yeah. [00:07:18]

Suhail: Yeah. You can do quite well. You can do quite well. We definitely did 30 FPS video streaming, did very crazy things to make that work. So I'm actually quite bullish on the kinds of things you can do with networking. [00:07:30]

Swyx: Maybe someday you'll come back to that at some point. But so for those that don't know, you're very transparent on Twitter. Very good to follow you just to learn your insights. And you actually published a postmortem on Mighty that people can read up on. So there was a bit of an overlap. You started exploring the AI stuff in June 2022, which is when you started saying, like, I'm taking fast.ai again. Maybe, was there more context around that? [00:07:54]

Suhail: Yeah. I think I was kind of waiting for the team at Mighty to finish up, you know, something. And I was like, okay, well, what can I do? I guess I will make some kind of address bar predictor in the browser. So we had, you know, we had forked Chrome and Chromium. And I was like, you know, one thing that's kind of lame is that this browser should be a lot better at predicting what I might do, where I might want to go. It struck me as really odd that, you know, Chrome had very little AI or ML actually inside this browser. For a company like Google, you'd think there's a lot. The address bar code is actually just a bunch of if-then statements, more or less. So it seemed like a pretty big opportunity.
And that's also where a lot of people interact with the browser. So, you know, long story short, I was like, hmm, I wonder what I could build here. So I started to take some AI courses and review the material again and get back to figuring it out. But I think that was somewhat serendipitous, because right around April was, I think, a very big watershed moment in AI, because that's when DALL-E 2 came out. And I think that was the first truly big viral moment for generative AI. [00:08:59]

Swyx: Because of the avocado chair. [00:09:01]

Suhail: Yeah, exactly. [00:09:02]

Swyx: It wasn't as big for me as Stable Diffusion. [00:09:04]

Suhail: Really? [00:09:05]

Swyx: Yeah, I don't know. DALL-E was like, all right, that's cool. [00:09:07]

Suhail: I don't know. Yeah. [00:09:09]

Swyx: I mean, they had some flashy videos, but it didn't really register. [00:09:13]

Suhail: That moment of images was just such a viral, novel moment. I think it just blew people's minds. Yeah. [00:09:19]

Swyx: I mean, it's the first time I encountered Sam Altman, because they had this DALL-E 2 hackathon and they opened up the OpenAI office for developers to walk in, back when it wasn't as much of a security issue as it is today. I see. Maybe take us through the journey to decide to pivot into this, and also choosing images. Obviously you were inspired by DALL-E, but there could be any number of AI companies and businesses that you could start, and why this one, right? [00:09:45]

Suhail: Yeah. So I think at that time, during Mighty, OpenAI was not quite as popular as it is all of a sudden now these days, but back then they had a lot more bandwidth to kind of help anybody. And so we had been talking with the team there around trying to see if we could do really fast, low-latency address bar prediction with GPT-3 and 3.5 and that kind of thing. And so we were sort of figuring out how we could make that low latency.
I think that just being able to talk to them and kind of being involved gave me a bird's-eye view into a bunch of things that started to happen. First was the DALL-E 2 moment, but then Stable Diffusion came out, and that was a big moment for me as well. And I remember just kind of sitting up one night thinking, I was like, you know, what are the kinds of companies one could build? Like, what matters right now? One thing that I observed is that I find a lot of inspiration when I'm working in a field on something, and then I can identify a bunch of problems. Like for Mixpanel, I was an intern at a company and I just noticed that they were doing all this data analysis. And so I thought, hmm, I wonder if I could make a product, and then maybe they would use it. And in this case, you know, the same thing kind of occurred. It was like, okay, there are a bunch of infrastructure companies that put a model up and then you can use their API; Replicate is a really good example of that. There are a bunch of companies that are helping you with training and model optimization; Mosaic at the time, and probably still, you know, was doing stuff like that. So I just started listing out every category of every company that was doing something interesting. I started listing out, like, Weights & Biases. I was like, oh man, Weights & Biases is this great company. Do I want to compete with that company? I might be really good at competing with that company because of Mixpanel, because it's so much about analysis. But I was like, no, I don't want to do anything related to that. I think that would be too boring now at this point. So I started to list out all these ideas, and one thing I observed was that at OpenAI, they had like a playground for GPT-3, right? All it was is just a text box, more or less. And then there were some settings on the right, like temperature and whatever. [00:11:41]

Swyx: Top K.
[00:11:42]

Suhail: Yeah, top K. You know, what's your end-stop sequence? I mean, that was like their product before ChatGPT. You know, really difficult to use, but fun if you're an engineer. And I just noticed that their product kind of was evolving a little bit, where the interface kind of was getting a little bit more complex. They had like a way where you could generate something in the middle of a sentence and all those kinds of things. And I just thought to myself, I was like, everything is just this text box and you generate something, and that's about it. And Stable Diffusion had kind of come out, and it was all Hugging Face and code. Nobody was really building any UI. And so I had this kind of thing where I wrote "prompt" dash question mark in my notes, and I didn't know what was the product for that at the time. I mean, it seems kind of trite now, but I just wrote "prompt." What's the thing for that? Manager. Prompt manager. Do you organize them? Do you have a UI that can play with them? Yeah. Like a library. What would you make? And so then, of course, you thought about what the modalities would be, given that. How would you build a UI for each kind of modality? And so there were a couple of people working on some pretty cool things. And I basically chose graphics because it seemed like the most obvious place where you could build a really powerful, complex UI that's not just only typing in a box. It would very much evolve beyond that. Like, what would be the best thing for something that's visual? Probably something visual. Yeah. I think that progression just kind of happened, and it just seemed like there was a lot of effort going into language, but not a lot of effort going into graphics. And then maybe the very last thing was, I think I was talking to Aditya Ramesh, who was the co-creator of DALL-E 2, and Sam.
And I just kind of went to these guys and I was just like, hey, are you going to make like a UI for this thing? Like a true UI? Are you going to go for this? Are you going to make a product? For DALL-E. Yeah. For DALL-E. Yeah. Are you going to do anything here? Because if you are going to do it, just let me know and I will stop and I'll go do something else. But if you're not going to do anything, I'll just do it. And so we had a couple of conversations around what that would look like. And then I think ultimately they decided that they were going to focus on language primarily. And I just felt like it was going to be very underinvested in. Yes. [00:13:46]Swyx: There's that sort of underinvestment from OpenAI, but also it's a different type of customer than you're used to, presumably, you know, and Mixpanel is very good at selling to B2B and developers, who will figure it out or not. Yeah. Was that not a concern? [00:14:00]Suhail: Well, not so much because I think that, you know, right now I would say graphics is in this very nascent phase. Like most of the customers are just like hobbyists, right? Yeah. Like it's a little bit of like a novel toy as opposed to being this like very high utility thing. But I think ultimately, if you believe that you could make it very high utility, then probably the next customers will end up being B2B. It'll probably not be like a consumer. There will certainly be a variation of this idea that's in consumer. But if your quest is to kind of make like something that surpasses human ability for graphics, like ultimately it will end up being used for business. So I think it's maybe more of a progression. In fact, for me, it's maybe more like Mixpanel started out as SMB and then very much like ended up starting to grow up towards enterprise. So for me, I think it will be a very similar progression. But yeah, I mean, the reason why I was excited about it is because it was a creative tool. I make music and it's AI. 
It's like something that I know I could stay up till three o'clock in the morning doing. Those are kind of like very simple bars for me. [00:14:56]Alessio: So you mentioned DALL-E, Stable Diffusion. You just had Playground V2 come out two days ago. Yeah, two days ago. [00:15:02]Suhail: Two days ago. [00:15:03]Alessio: This is a model you train completely from scratch. So it's not a cheap fine tune on something. You open source everything, including the weights. Why did you decide to do it? I know you supported Stable Diffusion XL in Playground before, right? Yep. What made you want to come up with V2 and maybe some of the interesting, you know, technical research work you've done? [00:15:24]Suhail: Yeah. So I think that we continue to feel like graphics and these foundation models for anything really related to pixels, but also definitely images, continues to be very underinvested. It feels a little like graphics is in like this GPT-2 moment, right? Like even GPT-3, even when GPT-3 came out, it was exciting, but it was like, what are you going to use this for? Yeah, we'll do some text classification and some semantic analysis and maybe it'll sometimes like make a summary of something and it'll hallucinate. But no one really had like a very significant like business application for GPT-3. And in images, we're kind of stuck in the same place. We're kind of like, okay, I write this thing in a box and I get some cool piece of artwork and the hands are kind of messed up and sometimes the eyes are a little weird. Maybe I'll use it for a blog post, you know, that kind of thing. The utility feels so limited. And so, you know, and then we, you sort of look at Stable Diffusion and we definitely use that model in our product and our users like it and use it and love it and enjoy it, but it hasn't gone nearly far enough. So we were kind of faced with the choice of, you know, do we wait for progress to occur or do we make that progress happen? 
So yeah, we kind of embarked on a plan to just decide to go train these things from scratch. And I think the community has given us so much. The community for Stable Diffusion I think is one of the most vibrant communities on the internet. It's like amazing. It feels like, I hope this is what like the Homebrew Club felt like when computers like showed up, because it's like amazing what that community will do and it moves so fast. I've never seen anything in my life and heard other people's stories around this where an academic research paper comes out and then like two days later, someone has sample code for it. And then two days later, there's a model. And then two days later, it's like in nine products, you know, they're all competing with each other. It's incredible to see like math symbols on an academic paper go to well-designed features in a product. So I think the community has done so much. So I think we wanted to give back to the community kind of on our way. Certainly we would train a better model than what we gave out on Tuesday, but we definitely felt like there needs to be some kind of progress in these open source models. The last kind of milestone was in July when Stable Diffusion XL came out, but there hasn't been anything really since. Right. [00:17:34]Swyx: And there's SDXL Turbo now. [00:17:35]Suhail: Well, SDXL Turbo is like this distilled model, right? So it's like lower quality, but fast. You have to decide, you know, what your trade off is there. [00:17:42]Swyx: It's also a consistency model. [00:17:43]Suhail: I don't think it's a consistency model. It's like, they did like a different thing. Yeah. I think it's like, I don't want to get quoted for this, but it's like something called adversarial distillation or something. [00:17:52]Swyx: That's exactly right. [00:17:53]Suhail: I've read something about that. Maybe it's like closer to GANs or something, but I didn't really read the full paper. 
But yeah, there hasn't been quite enough progress in terms of, you know, there's no multitask image model. You know, the closest thing would be something called like Emu Edit, but there's no model for that. It's just a paper that's within Meta. So we did that and we also gave out pre-trained weights, which is very rare. Usually you just get the aligned model and then you have to like see if you can do anything with it. So we actually gave out, there's like a 256 pixel pre-trained stage and a 512. And we did that for academic research because we come across people all the time in academia, they have access to like one A100 or eight at best. And so if we can give them kind of like a 512 pre-trained model, our hope is that there'll be interesting novel research that occurs from that. [00:18:38]Swyx: What research do you want to happen? [00:18:39]Suhail: I would love to see more research around things that users care about. Those tend to be things like character consistency. [00:18:45]Swyx: Between frames? [00:18:46]Suhail: More like if you have like a face. Yeah, yeah. Basically between frames, but more just like, you know, you have your face and it's in one image and then you want it to be like in another. And users are very particular and sensitive to faces changing because we know we're trained on faces as humans. Not seeing a lot of innovation, enough innovation around multitask editing. You know, there are two things like InstructPix2Pix and then the Emu Edit paper that are maybe very interesting, but we certainly are not pushing the envelope on that in that regard. All kinds of things like around rotation, you know, being able to keep coherence across images, style transfer is still very limited. Just even reasoning around images, you know, what's going on in an image, that kind of thing. Things are still very, very underpowered, very nascent. So therefore the utility is very, very limited. 
[00:19:32]Alessio: On the 1K Prompt Benchmark, you are 2.5x preferred over Stable Diffusion XL. How do you get there? Is it better images in the training corpus? Can you maybe talk through the improvements in the model? [00:19:44]Suhail: I think we're still very early on in the recipe, but I think it's a lot of like little things and, you know, every now and then there are some big important things, like certainly your data quality is really, really important. So we spend a lot of time thinking about that. But I would say it's a lot of things that you kind of clean up along the way as you train your model. Everything from captions to the data that you align with after pre-train to how you're picking your data sets, how you filter your data sets. I feel like there's a lot of work in AI that doesn't really feel like AI. It just really feels like just data set filtering and systems engineering and just like, you know, and the recipe is all there, but it's like a lot of extra work to do that. I think we plan to do a Playground V2.1, maybe either by the end of the year or early next year. And we're just like watching what the community does with the model. And then we're just going to take a lot of the things that they're unhappy about and just like fix them. You know, so for example, like maybe the eyes of people in an image don't feel right. They feel like they're a little misshapen or they're kind of blurry feeling. That's something that we already know we want to fix. So I think in that case, it's going to be about data quality. Or maybe you want to improve the kind of the dynamic range of color. You know, we want to make sure that that's like got a good range in any image. So what technique can we use there? There's different things like offset noise, pyramid noise, zero terminal SNR, like there are all these various interesting things that you can do. So I think it's like a lot of just like tricks. Some are tricks, some are data, and some is just like cleaning. 
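Of the noise tricks mentioned here, offset noise is the simplest to illustrate: instead of pure per-pixel Gaussian noise, you add a small constant offset shared across each sample-and-channel plane, which lets the model learn to shift overall image brightness. This is a minimal numpy sketch; the function name and the `strength` value are our own choices, not Playground's actual training code.

```python
import numpy as np

def offset_noise(shape, strength=0.1, seed=None):
    """Standard Gaussian noise plus a constant offset shared across each
    (sample, channel) plane. `strength` around 0.05-0.15 is an assumption,
    not a published recipe."""
    rng = np.random.default_rng(seed)
    base = rng.standard_normal(shape)                 # ordinary per-pixel noise
    offset = rng.standard_normal(shape[:2] + (1, 1))  # one value per sample+channel
    return base + strength * offset                   # broadcasts over H and W

# e.g. a batch of 2 latents with 4 channels at 64x64
noise = offset_noise((2, 4, 64, 64), strength=0.1, seed=0)
```

Pyramid noise and zero terminal SNR are different fixes for the same symptom (limited dynamic range); in a real training loop this noise would replace the `randn_like` term in the diffusion forward process.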
[00:21:11]Swyx: Specifically for faces, it's very common to use a pipeline rather than just train the base model more. Do you have a strong belief either way on like, oh, they should be separated out to different stages for like improving the eyes, improving the face or enhance or whatever? Or do you think like it can all be done in one model? [00:21:28]Suhail: I think we will make a unified model. Yeah, I think it will. I think we'll certainly, in the end, ultimately make a unified model. There's not enough research about this. Maybe there is something out there that we haven't read. There are some bottlenecks, like for example, in the VAE, like the VAEs are ultimately like compressing these things. And so you don't know. And then you might have like a big information bottleneck. So maybe you would use a pixel based model, perhaps. I think we've talked to people, everyone from like Rombach to various people; Rombach trained Stable Diffusion. I think there's like a big question around the architecture of these things. It's still kind of unknown, right? Like we've got transformers and we've got like a GPT architecture model, but then there's this like weird thing that's also seemingly working with diffusion. And so, you know, are we going to use vision transformers? Are we going to move to pixel based models? Is there a different kind of architecture? We don't really, I don't think there have been enough experiments. Still? Oh my God. [00:22:21]Swyx: Yeah. [00:22:22]Suhail: That's surprising. I think it's very computationally expensive to do a pipeline model where you're like fixing the eyes and you're fixing the mouth and you're fixing the hands. [00:22:29]Swyx: That's what everyone does as far as I understand. [00:22:31]Suhail: I'm not exactly sure what you mean, but if you mean like you get an image and then you will like make another model specifically to fix a face, that's fairly computationally expensive. 
And I think it's like probably not the right way. Yeah. And it doesn't generalize very well. Now you have to pick all these different things. [00:22:45]Swyx: Yeah. You're just kind of glomming things on together. Yeah. Like when I look at AI artists, like that's what they do. [00:22:50]Suhail: Ah, yeah, yeah, yeah. They'll do things like, you know, I think a lot of AI artists will do ControlNet tiling to do kind of generative upscaling of all these different pieces of the image. Yeah. And I think these are all just like, they're all hacks ultimately in the end. I mean, to me, it's like, let's go back to where we were just three years, four years ago with where deep learning was at and where language was at, you know, it's the same thing. It's like we were like, okay, well, I'll just train these very narrow models to try to do these things and kind of ensemble them or pipeline them to try to get to a best in class result. And here we are with like where the models are gigantic and like very capable of solving huge amounts of tasks when given like lots of great data. [00:23:28]Alessio: You also released a new benchmark called MJHQ-30K for automatic evaluation of a model's aesthetic quality. I have one question. The data set that you use for the benchmark is from Midjourney. Yes. You have 10 categories. How do you think about the Playground model, Midjourney, like, are you competitors? [00:23:47]Suhail: There are a lot of people, a lot of people in research, they like to compare themselves to something they know they can beat, right? Maybe this is the best reason why it can be helpful to not be a researcher also sometimes. Like I'm not trained as a researcher, I don't have a PhD in anything AI related, for example. But I think if you care about products and you care about your users, then the most important thing that you want to figure out is like, everyone has to acknowledge that Midjourney is very good. They are the best at this thing. I'm happy to admit that. 
I have no problem admitting that. Just easy. It's very visual to tell. So I think it's incumbent on us to try to compare ourselves to the thing that's best, even if we lose, even if we're not the best. At some point, if we are able to surpass Midjourney, then we only have ourselves to compare ourselves to. But at first blush, I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately care how close you are getting to the thing that people mostly agree with. So we put out that benchmark for no other reason than to say, like, this seems like a worthy thing for us to at least try, for people to try to get to. And then if we surpass it, great, we'll come up with another one. [00:25:06]Alessio: Yeah, no, that's awesome. And you killed Stable Diffusion XL and everything. In the benchmark chart, it says Playground V2 1024 pixel dash aesthetic. Do you have kind of like, yeah, style fine tunes or like what's the dash aesthetic for? [00:25:21]Suhail: We debated this, maybe we named it wrong or something, but we were like, how do we help people realize the model that's aligned versus the models that weren't? Because we gave out pre-trained models, we didn't want people to like use those. So that's why they're called base. And then the aesthetic model, yeah, we wanted people to pick up the thing that makes things pretty. Who wouldn't want the thing that's aesthetic? But if there's a better name, we're definitely open to feedback. No, no, that's cool. [00:25:46]Alessio: I was using the product. You also have the style filter and you have all these different styles. And it seems like the styles are tied to the model. 
So there's some like SDXL styles, there's some Playground V2 styles. Can you maybe give listeners an overview of how that works? Because in language, there's not this idea of like style, right? Versus like in vision models, there is, and you cannot get certain styles in different models. [00:26:12]Alessio: So how do styles emerge and how do you categorize them and find them? [00:26:15]Suhail: Yeah, I mean, it's so fun having a community where people are just trying a model. Like it's only been two days for Playground V2. And we actually don't know what the model's capable of and not capable of. You know, we certainly see problems with it. But we have yet to see what emergent behavior is. I mean, we've just sort of discovered that it takes about like a week before you start to see like new things. I think like a lot of that style kind of emerges after that week, where you start to see, you know, there's some styles that are very like well known to us, like maybe like pixel art is a well known style. Photorealism is like another one that's like well known to us. But there are some styles that cannot be easily named. You know, it's not as simple as like, okay, that's an anime style. It's very visual. And in the end, you end up making up the name for what that style represents. And so the community kind of shapes itself around these different things. And so anyone that's into Stable Diffusion and into building anything with graphics and stuff with these models, you know, you might have heard of like ProtoVision or DreamShaper, some of these weird names, but they're just invented by these authors. But they have a sort of je ne sais quoi that, you know, appeals to users. [00:27:26]Swyx: Because it like roughly embeds to what you want. [00:27:29]Suhail: I guess so. I mean, it's like, you know, there's one of my favorite ones that's fine tuned. It's not made by us. It's called like Starlight XL. It's just this beautiful model. 
It's got really great color contrast and visual elements. And the users love it. I love it. And it's so hard. I think that's like a very big open question with graphics that I'm not totally sure how we'll solve. I don't know. It's, it's like an evolving situation too, because styles get boring, right? They get fatigued. Like it's like listening to the same style of pop song. I try to relate to graphics a little bit like with music, because I think it gives you a little bit of a different shape to things. Like it's not as if we just have pop music, rap music and country music; like, the EDM genre alone has like sub genres. And I think that's very true in graphics and painting and art and anything that we're doing. There's just these sub genres, even if we can't quite always name them. But I think they are emergent from the community, which is why we're so always happy to work with the community. [00:28:26]Swyx: That is a struggle. You know, coming back to this, like B2B versus B2C thing, B2C, you're going to have a huge amount of diversity and then it's going to reduce as you get towards more sort of B2B type use cases. I'm making this up here. So like you might be optimizing for a thing that you may eventually not need. [00:28:42]Suhail: Yeah, possibly. Yeah, possibly. I think like a simple thing with startups is that I worry sometimes that by trying to be overly ambitious and like really scrutinizing like what something is in its most nascent phase, you miss the most ambitious thing you could have done. Like just having like very basic curiosity with something very small can like kind of lead you to something amazing. Like Einstein definitely did that. And then he like, you know, he basically won all the prizes and got everything he wanted, and then basically kind of didn't, really. He kind of dismissed quantum and then just kind of was still searching, you know, for the unifying theory. And he like had this quest. 
I think that happens a lot with like Nobel Prize people. I think there's like a term for it that I forget. I actually wanted to go after a toy almost intentionally, so long as I could imagine that it would lead to something very, very large later. Like I said, it's very hobbyist, but you need to start somewhere. You need to start with something that has a big gravitational pull, even if these hobbyists aren't likely to be the people that, you know, have a way to monetize it or whatever, even if they're, but they're doing it for fun. So there's something, something there that I think is really important. But I agree with you that, you know, in time we will absolutely focus on more utilitarian things, like things that are more related to editing, feats that are much harder. And so I think like a very simple use case is just, you know, I'm not a graphics designer. It seems like very simple that like, if we could give you the ability to do really complex graphics without skill, wouldn't you want that? You know, like my wife the other day, you know, said, I wish Playground was better. When are you guys going to have a feature where like we could make my son, his name's Devin, smile when he was not smiling in the picture for the holiday card. Right. You know, just being able to highlight his, his mouth and just say like, make him smile. Like why can't we do that with like high fidelity and coherence, little things like that, all the way to putting you in completely different scenarios. [00:30:35]Swyx: Is that true? Can we not do that with inpainting? [00:30:37]Suhail: You can do it with inpainting, but the quality is just so bad. Yeah. It's just really terrible quality. You know, it's like you'll do it five times and it'll still like kind of look like crooked or just artifact. Part of it's like, you know, the lips on the face, there's such little information there. So small that the models really struggle with it. Yeah. 
[00:30:55]Swyx: Make the picture smaller and you don't see it. That's my trick. I don't know. [00:30:59]Suhail: Yeah. Yeah. That's true. Or, you know, you could take that region and make it really big and then like say it's a mouth and then like shrink it. It feels like you're wrestling with it more than it's doing something that kind of surprises you. [00:31:12]Swyx: Yeah. It feels like you are very much the internal tastemaker, like you carry in your head this vision for what a good art model should look like. Do you find it hard to like communicate it to like your team and other people? Just because it's obviously it's hard to put into words like we just said. [00:31:26]Suhail: Yeah. It's very hard to explain. Images have such high bitrate compared to just words and we don't have enough words to describe these things. It's not terribly difficult. I think everyone on the team, if they don't have good kind of like judgment taste or like an eye for some of these things, they're like steadily building it because they have no choice. Right. So in that realm, I don't worry too much, actually. Like everyone is kind of like learning to get the eye is what I would call it. But I also have, you know, my own narrow taste. Like I don't represent the whole population either. [00:31:59]Swyx: When you benchmark models, you know, like this benchmark we're talking about, we use FID. Yeah. Fréchet Inception Distance. OK. That's one measure. But like it doesn't capture anything you just said about smiles. [00:32:08]Suhail: Yeah. FID is generally a bad metric. It's good up to a point and then it kind of like is irrelevant. Yeah. [00:32:14]Swyx: And then so are there any other metrics that you like apart from vibes? I'm always looking for alternatives to vibes because vibes don't scale, you know. [00:32:22]Suhail: You know, it might be fun to kind of talk about this because it's actually kind of fresh. 
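For reference, the FID mentioned here compares the mean and covariance of Inception-network features from two image sets; it says nothing about localized details like smiles, which is the complaint. A minimal numpy-only sketch of the distance itself, run on synthetic features rather than a real Inception network:

```python
import numpy as np

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two feature sets (rows = images).
    FID = ||mu_a - mu_b||^2 + Tr(Ca) + Tr(Cb) - 2*Tr((Ca Cb)^(1/2)).
    The trace of the matrix square root is taken from the eigenvalues of
    Ca @ Cb, which are real and non-negative for covariance matrices."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    eig = np.linalg.eigvals(cov_a @ cov_b)  # tiny imaginary parts are numerical noise
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 8))  # stand-in for Inception activations
same = fid(feats, feats)               # identical sets: distance near 0
shifted = fid(feats, feats + 1.0)      # mean shift of 1 in 8 dims: near 8
```

Because two images can have identical feature statistics while differing in exactly the fine details users notice, a low FID does not imply a good model, which is the point being made in the conversation.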
So up till now, we haven't needed to do a ton of like benchmarking because we hadn't trained our own model, and now we have. So now what? What does that mean? How do we evaluate it? And, you know, we're kind of like living with the last 48, 72 hours of going, did the way that we benchmark actually succeed? [00:32:43]Swyx: Did it deliver? [00:32:44]Suhail: Right. You know, like I think Gemini just came out. They just put out a bunch of benchmarks. But all these benchmarks are just an approximation of how you think it's going to end up with real world performance. And I think that's like very fascinating to me. So if you fake that benchmark, you'll still end up in a really bad scenario at the end of the day. And so, you know, one of the benchmarks we did was we kind of curated like a thousand prompts. And I think that's kind of what we published in our blog post, you know, of all these tasks, a lot of which are curated by our team, where we know the models all suck at it. Like my favorite prompt that no model is really capable of is a horse riding an astronaut, the inverse one. And it's really, really hard to do. [00:33:22]Swyx: Not in data. [00:33:23]Suhail: You know, another one is like a giraffe underneath a microwave. How does that work? Right. There's so many of these little funny ones. We do. We have prompts that are just like misspellings of things. Yeah. We'll figure out if the models will figure it out. [00:33:36]Swyx: They should embed to the same space. [00:33:39]Suhail: Yeah. And just like all these very interesting weirdo things. And so we have so many of these and then we kind of like evaluate whether the models are any good at it. And the reality is that they're all bad at it. And so then you're just picking the most aesthetic image. 
We're still at the beginning of building like the best benchmark we can that aligns most with just user happiness, I think, because we're not like putting these in papers and trying to like win, you know, I don't know, awards at ICCV or something, if they have awards. [00:34:05]Swyx: You could. That's absolutely a valid strategy. [00:34:06]Suhail: Yeah, you could. But I don't think it would correlate necessarily with the impact we want to have on humanity. I think we're still evolving whatever our benchmarks are. So the first benchmark was just like very difficult tasks that we know the models are bad at. Can we come up with a thousand of these, whether they're hand rated and some of them are generated? And then can we ask the users, like, how do we do? And then we wanted to use a benchmark like PartiPrompts. We mostly did that so people in academia could measure their models against ours versus others. But yeah, I mean, FID is pretty bad. And I think in terms of vibes, it's like you put out the model and then you try to see like what users make. And I think my sense is that we're going to take all the things that we notice that the users kind of were failing at and try to find like new ways to measure that, whether that's like a smile or, you know, color contrast or lighting. One benefit of Playground is that we have users making millions of images every single day. And so we can just ask them for like post-generation feedback. Yeah, we can just ask them. We can just say, like, how good was the lighting here? How was the subject? How was the background? [00:35:06]Swyx: Like a proper form? [00:35:10]Suhail: It's just like you make it, you come to our site, you make an image and then we say, and then maybe randomly we just say, hey, you know, like, how was the color and contrast of this image? And if you say it was not very good, just tell us. 
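Feedback like this ultimately gets aggregated into headline numbers such as the "2.5x preferred" figure from the 1K Prompt Benchmark. One plausible way to reduce pairwise human votes to that kind of ratio is sketched below; the model names and vote counts are hypothetical, and this is our illustrative reading, not Playground's published methodology.

```python
from collections import Counter

def preference_ratio(votes):
    """Reduce pairwise votes to an 'X times preferred' figure.

    `votes` holds one entry per head-to-head comparison, naming the
    model the human rater picked. Returns the winning model and its
    win count divided by the runner-up's."""
    counts = Counter(votes)
    (winner, wins), (_, losses) = counts.most_common(2)
    return winner, wins / losses

# 350 hypothetical head-to-head votes between two models
votes = ["model_v2"] * 250 + ["sdxl_base"] * 100
winner, ratio = preference_ratio(votes)  # ratio of 250/100 = 2.5
```

A real evaluation would also track per-prompt and per-category breakdowns (the benchmark has 10 categories), but the headline ratio is just this kind of win-count division.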
So I think we can get like tens of thousands of these evaluations every single day to truly measure real world performance as opposed to just like benchmark performance. I would like to publish, hopefully next year. I think we will try to publish a benchmark that anyone could use, that we evaluate ourselves on and that other people can, that we think does a good job of approximating real world performance, because we've tried it and done it and noticed that it did. Yeah. I think we will do that. [00:35:45]Swyx: I personally have a few like categories that I consider special. You know, you know, you have like animals, art, fashion, food. There are some categories which I consider like a different tier of image. Top among them is text in images. How do you think about that? So one of the big wow moments for me, something I've been looking out for the entire year, is just the progress of text in images. Like, can you write in an image? Yeah. And Ideogram came out recently, which had decent but not perfect text in images. DALL-E 3 had improved some, and all they said in their paper was that they just included more text in the data set and it just worked. I was like, that's just lazy. But anyway, do you care about that? Because I don't see any of that in like your samples. Yeah, yeah. [00:36:27]Suhail: The V2 model was mostly focused on image quality versus like the feature of text synthesis. [00:36:33]Swyx: Well, as a business user, I care a lot about that. [00:36:35]Suhail: Yeah. Yeah. I'm very excited about text synthesis. And yeah, I think Ideogram has done a good job of it, maybe the best job. DALL-E has like a hit rate. Yes. You know, like sometimes it's Egyptian letters. Yeah. I'm very excited about text synthesis. You know, I don't have much to say on it just yet. You know, you don't want just text effects. I think where this has to go is it has to be like you could like write little tiny pieces of text, like on like a milk carton. 
That's maybe not even the focal point of a scene. I think that's like a very hard task that, you know, if you could do something like that, then there's a lot of other possibilities. Well, you don't have to zero shot it. [00:37:09]Swyx: You can just be like here and focus on this. [00:37:12]Suhail: Sure. Yeah, yeah. Definitely. Yeah. [00:37:16]Swyx: Yeah. So I think text synthesis would be very exciting. I'll also flag that Max Woolf, minimaxir, whose work you must have come across. He's done a lot of stuff about using like logo masks that then map onto food and vegetables. And it looks like text, which can be pretty fun. [00:37:29]Suhail: That's the wonderful thing about like the open source community, is that you get things like ControlNet and then you see all these people do these just amazing things with ControlNet. And then you wonder, I think from our point of view, we sort of go, that's really wonderful. But how do we end up with like a unified model that can do that? What are the bottlenecks? What are the issues? The community ultimately has very limited resources. And so they need these kinds of like workaround research ideas to get there. But yeah. [00:37:55]Swyx: Are techniques like ControlNet portable to your architecture? [00:37:58]Suhail: Definitely. Yeah. We kept the Playground V2 architecture exactly the same as SDXL. Not out of laziness, but just because we knew that the community already had tools. You know, all you have to do is maybe change a string in your code and then, you know, retrain a ControlNet for it. So it was very intentional to do that. We didn't want to fragment the community with different architectures. Yeah. [00:38:16]Swyx: So basically, I'm going to go over three more categories. One is UIs, like app UIs, like mock UIs. Third is not safe for work, and then copyrighted stuff. I don't know if you care to comment on any of those. [00:38:28]Suhail: I think the NSFW kind of like safety stuff is really important. 
I kind of think that one of the biggest risks kind of going into maybe the U.S. election year will probably be very interrelated with like graphics, audio, video. I think it's going to be very hard to explain, you know, to a family relative who's not kind of in our world. And our world is like sometimes very, you know, we think it's very big, but it's very tiny compared to the rest of the world. Like, there's still lots of humanity who have no idea what ChatGPT is. And I think it's going to be very hard to explain, you know, to your uncle, aunt, whoever, you know, hey, I saw President Biden say this thing on a video, you know, I can't believe, you know, he said that. I think that's going to be a very troubling thing going into the world next year, the year after. [00:39:12]Swyx: That's more like a risk thing, like deepfakes, faking, political faking. But there's a lot of studies on how for most businesses, you don't want to train on not safe for work images, except that it makes you really good at bodies. [00:39:24]Suhail: Personally, we filter out NSFW type of images in our data set so that, you know, our safety filter stuff doesn't have to work as hard. [00:39:32]Swyx: But you've heard this argument that not safe for work images are very good at human anatomy, which you do want to be good at. [00:39:38]Suhail: It's not like necessarily a bad thing to train on that data. It's more about like how you go and use it. That's why I was kind of talking about safety, you know, in part, because there are very terrible things that can happen in the world. If you have an extremely powerful graphics model, you know, suddenly like you can kind of imagine, you know, now if you can like generate nudes and then you could do very character consistent things with faces, like what does that lead to? Yeah. And so I tend to think more about what occurs after that, right? 
Even if you train on, let's say, you know, nude data, if it does something to kind of help, there's nothing wrong with the human anatomy, it's very valid for a model to learn that. But then it's kind of like, how does that get used? And, you know, I won't bring up all of the very, very unsavory, terrible things that we see on a daily basis on the site, but I think it's more about what occurs. And so we, you know, we just recently did like a big sprint on safety. It's very difficult with graphics and art, right? Because there is tasteful art that has nudity, right? They're all over in museums, like, you know, there's very valid situations for that. And then there's the things that are the gray line of that, you know, what I might not find tasteful, someone might be like, that is completely tasteful, right? And then there are things that are way over the line. And then there are things that maybe you or, you know, maybe I would be okay with, but society isn't, you know? So where does that kind of end up on the spectrum of things? I think it's really hard with art. Sometimes even if you have like things that are not nude, if a child goes to your site, scrolls down some images, you know, classrooms of kids, you know, using our product, it's a really difficult problem. And it stretches mostly culture, society, politics, everything. [00:41:14]Alessio: Another favorite topic of our listeners is UX and AI. And I think you're probably one of the best all-inclusive editors for these things. So you don't just have the prompt, images come out, you pray, and now you do it again. First, you let people pick a seed so they can kind of have semi-repeatable generation. You also have, yeah, you can pick how many images and then you leave all of them in the canvas. And then you have kind of like this box, the generation box, and you can even cross between them and outpaint. There's all these things. How did you get here? 
You know, most people are kind of like, give me text, I give you image. You know, you're like, these are all the tools for you. [00:41:54]Suhail: Even though we were trying to make a graphics foundation model, I think we think that we're also trying to like re-imagine like what a graphics editor might look like given the change in technology. So, you know, I don't think we're trying to build Photoshop, but it's the only thing that we could say that people are largely familiar with. Oh, okay, there's Photoshop. What would Photoshop compare itself to pre-computer? I don't know, right? It's like, or kind of like a canvas, but you know, there's these menu options and you can use your mouse. What's a mouse? So I think that we're trying to re-imagine what a graphics editor might look like, not just for the fun of it, but because we kind of have no choice. Like there's this idea in image generation where you can generate images. That's like a super weird thing. What is that in Photoshop, right? You have to wait right now for the time being, but the wait is worth it often for a lot of people because they can't make that with their own skills. So I think it goes back to, you know, how we started the company, which was kind of looking at GPT-3's Playground, that the reason why we're named Playground is a homage to that actually. And, you know, it's like, shouldn't these products be more visual? These prompt boxes are like a terminal window, right? We're kind of at this weird point where it's just like MS-DOS. I remember my mom using MS-DOS and I memorized the keywords, like DIR, LS, all those things, right? It feels a little like we're there, right? Prompt engineering, parentheses to say beautiful or whatever, weights the word token more in the model or whatever. That's like super strange. I think a large portion of humanity would agree that that's not user-friendly, right? So how do we think about the products to be more user-friendly? 
Well, sure, you know, sure, it would be nice if I wanted to get rid of, like, the headphones on my head, you know, it'd be nice to mask it and then say, you know, can you remove the headphones? You know, if I want to grow, expand the image, you know, how can we make that feel easier without typing lots of words and being really confused? I don't even think we've nailed the UI/UX yet. Part of that is because we're still experimenting. And part of that is because the model and the technology is going to get better. And whatever felt like the right UX six months ago is going to feel very broken now. So that's a little bit of how we got there is kind of saying, does everything have to be like a prompt in a box? Or can we do things that make it very intuitive for users? [00:44:03]Alessio: How do you decide what to give access to? So you have things like an expand prompt, which DALL-E 3 just does. It doesn't let you decide whether you should or not. [00:44:13]Swyx: As in, like, rewrites your prompts for you. [00:44:15]Suhail: Yeah, for that feature, I think once we get it to be cheaper, we'll probably just give it up. We'll probably just give it away. But we also decided something that might be a little bit different. We noticed that most of image generation is just, like, kind of casual. You know, it's in WhatsApp. It's, you know, it's in a Discord bot somewhere with Midjourney. It's in ChatGPT. One of the differentiators I think we provide, at the expense of just lots of mainstream consumer users necessarily, is that we provide as much, like, power and tweakability and configurability as possible. So the only reason why it's a toggle is because we know that users might want to use it and might not want to use it. There's some really powerful power user hobbyists that know what they're doing. And then there's a lot of people that just want something that looks cool, but they don't know how to prompt. 
And so I think a lot of Playground is more about going after that core user base that, like, knows, has a little bit more savviness and how to use these tools. You know, the average DALL-E user is probably not going to use ControlNet. They probably don't even know what that is. And so I think that, like, as the models get more powerful, as there's more tooling, hopefully you'll imagine a new sort of AI-first graphics editor that's just as, like, powerful and configurable as Photoshop. And you might have to master a new kind of tool. [00:45:28]Swyx: There's so many things I could go bounce off of. One, you mentioned about waiting. We have to kind of somewhat address the elephant in the room. Consistency models have been blowing up the past month. How do you think about integrating that? Obviously, there's a lot of other companies also trying to beat you to that space as well. [00:45:44]Suhail: I think we were the first company to integrate it. Ah, OK. [00:45:47]Swyx: Yeah. I didn't see your demo. [00:45:49]Suhail: Oops. Yeah, yeah. Well, we integrated it in a different way. OK. There are, like, 10 companies right now that have kind of tried to do, like, interactive editing, where you can, like, draw on the left side and then you get an image on the right side. We decided to kind of, like, wait and see whether there's, like, true utility on that. We have a different feature that's, like, unique in our product that is called preview rendering. And so you go to the product and you say, you know, we're like, what is the most common use case? The most common use case is you write a prompt and then you get an image. But what's the most annoying thing about that? The most annoying thing is, like, it feels like a slot machine, right? You're like, OK, I'm going to put it in and maybe I'll get something cool. So we did something that seemed a lot simpler, but a lot more relevant to how users already use these products, which is preview rendering. 
You toggle it on and it will show you a render of the image. And then graphics tools already have this. Like, if you use Cinema 4D or After Effects or something, it's called viewport rendering. And so we try to take something that exists in the real world that has familiarity and say, OK, you're going to get a rough sense of an early preview of this thing. And then when you're ready to generate, we're going to try to be as coherent about that image that you saw. That way, you're not spending so much time just like pulling down the slot machine lever. I think we were the first company to actually ship a quick LCM thing. Yeah, we were very excited about it. So we shipped it very quick. Yeah. [00:47:03]Swyx: Well, the demos I've been seeing, it's not like a preview necessarily. They're almost using it to animate their generations. Like, because you can kind of move shapes. [00:47:11]Suhail: Yeah, yeah, they're like doing it. They're animating it. But they're sort of showing, like, if I move a moon, you know, can I? [00:47:17]Swyx: I don't know. To me, it unlocks video in a way. [00:47:20]Suhail: Yeah. But the video models are already so much better than that. Yeah. [00:47:23]Swyx: There's another one, which I think is general ecosystem of Loras, right? Civit is obviously the most popular repository of Loras. How do you think about interacting with that ecosystem? [00:47:34]Suhail: The guy that did Lora, not the guy that invented Loras, but the person that brought Loras to Stable Diffusion actually works with us on some projects. His name is Simu. Shout out to Simu. And I think Loras are wonderful. Obviously, fine tuning all these Dreambooth models and such, it's just so heavy. And it's obvious in our conversation around styles and vibes, it's very hard to evaluate the artistry of these things. Loras give people this wonderful opportunity to create sub-genres of art. And I think they're amazing. 
Any graphics tool, any kind of thing that's expressing art has to provide some level of customization to its user base that goes beyond just typing Greg Rutkowski in a prompt. We have to give more than that. It's not like users want to type these real artist names. It's that they don't know how else to get an image that looks interesting. They truly want originality and uniqueness. And I think Loras provide that. And they provide it in a very nice, scalable way. I hope that we find something even better than Loras in the long term, because there are still weaknesses to Loras, but I think they do a good job for now. Yeah. [00:48:39]Swyx: And so you would never compete with Civit? You would just kind of let people import? [00:48:43]Suhail: Civit's a site where all these things get kind of hosted by the community, right? And so, yeah, we'll often pull down some of the best things there. I think when we have a significantly better model, we will certainly build something that gets closer to that. Again, I go back to saying just I still think this is very nascent. Things are very underpowered, right? Loras are not easy to train. They're easy for an engineer. It sure would be nicer if I could just pick five or six reference images, right? And they might even be five or six different reference images that are not... They're just very different. They communicate a style, but they're actually like... It's like a mood board, right? And you have to be kind of an engineer almost to train these Loras or go to some site and be technically savvy, at least. It seems like it'd be much better if I could say, I love this style. Here are five images and you tell the model, like, this is what I want. And the model gives you something that's very aligned with what your style is, what you're talking about. And it's a style you couldn't even communicate, right? There's n
Subscribe to listen to Techno music DJ Mix, Tech House music, Deep House, Acid Techno, and Minimal Techno.
Only one remembers and is not bent to my will. Seven seals, seven tribes, six reclaimed, one shall be mine. And ere the end of all your days the Crimson Khan shall ride... Content Warnings: N/A Transcript Patrons Trent Hulbert, Gene Mercer, Keagan Adams, Raymond Mercado, Yiva Silver, Matt Mastin, Slurm, Corey Wolter, Dalton Cave, Lazy Heirophant, Nulland Paranoid, Lord Toffee, BillyBobUsername, Rowan Hawthrope, ShadowJoker, and Budbud9399 Cast & Crew SCP Archives was created by Pacific S. Obadiah & Jon Grilz SCP-3838 was written by Tufto Script by Kevin Whitlock Narrator - Dustin Parsons Researcher - Chris Harris-Beechy Dr. Rizvi - Addison Peacock Envoy - Chadd Underwood Theme Song by Tom Rory Parsons Editor - Veronica California Showrunner - Kale Brown Producer - Pacific S. Obadiah Executive Producers - Tom Owen & Brad Miska Presented by Bloody FM www.Bloody-Disgusting.com www.SCParchives.com Patreon: https://www.patreon.com/scp_pod Twitter: https://twitter.com/scp_pod Facebook: https://www.facebook.com/scppod Discord: https://discord.gg/tJEeNUzeZX
Jeremy chats with Melbourne band Slurm about their latest release. You can take a listen to the single "Paper Skin" right here.
Episode Notes Kyle is back after Phil's fantastic solo episode! Today we watch an all-time classic: Futurama. Developed from the mind of Matt Groening and company of Simpsons fame, Futurama begins when a very dumb 20th century man freezes for 1000 years in a cryogenic pod on New Years Eve 1999. When he awakes from his cold, long slumber, he finds himself in the year 3000! Many gags and silly happenings logically follow. Slurm! Highly addictive green sludge that advertises itself as such. Please slurp down this definitely-not-worm-poop and enjoy an episode centered on the profit motive and silly other sh*t. Start your watch along experience at 07:58. The news and further discussion begins at 36:55. If you have any thoughts, suggestions, angry emotes, or whatever that you would like to send us, please reach out at UnsociablistPod@gmail.com or on twitter as @unsociablist. I recognize that posting this will bring us spam and I dare you to do it. https://free-palestine.carrd.co/#donations https://bailproject.org Find out more at https://the-unsociablists.pinecast.co This podcast is powered by Pinecast.
We done got out the house for another of whatever this is.
Hope you enjoy part TWO of our time at Tree Fort Music Fest 2021. There's a great post-rock-night vibe to this episode, featuring interviews with Blood Lemon, Sego, and Slurm Flirty Worm. That's one L.A.-based band for every two Boise-area bands. Is it too much Boise? Not enough? We wouldn't presume to speak for you. Music this week:"Around You" by Blackwater Holylight (2:54)"Burned" by Blood Lemon (23:01)"Down with Me" by Sego (43:57)"Week of Wednesdays" by Slurm Flirty Worm (64:37)"Adamant" by Mic Capes (68:28)
Good news, everyone! This issue of Pinch of Nerd is about Futurama! The normal crew is joined by special guest Wanda the Cheese Wizard (a for-real certified cheese expert) as they talk about favorite Futurama episodes, characters, and moments from the wonderful cartoon. Along the way they also talk about whether or not they'd eat Popplers, what they think Slurm tastes like, and why Velveeta is the best cheese. This episode's recipe from Chef Aaron is a delicious Jerk sauce, served to the gang on a bit of shredded chicken. Recipe for the Jerk sauce: http://pinchofnerd.com/jerk-sauce/
Treefort Music Fest 2020/2021, or Treefort number nine, was a culmination of years of planning in the face of a global pandemic. For event organizers, fans, and artists alike, it was a festival unlike any we could expect. Being mindful of social distance, wearing masks, we caught up with Idaho's own punk-creative/semi-nerd-rock Slurm Flirty […]
We watched it; we can't unwatch it! In this episode we review four episodes of Futurama, an animated series about a pizza delivery boy who accidentally gets cryogenically frozen and wakes up in the year 3000. What does the future have to do with the stone age? There's only one way to find out! So grab a can of Slurm and a bowl of Bachelor chow and settle in for this Anthology of Interest! In this episode: Greyfriar's Bobby: https://en.wikipedia.org/wiki/Greyfriars_Bobby Repatriation of the Kabwe skull: https://www.sapiens.org/biology/repatriation-kabwe-skull/ Comedian James Acaster on the absurdity of the British Empire: https://www.youtube.com/watch?v=x73PkUvArJY The Immortal Life of Henrietta Lacks by Rebecca Skloot: https://goodreads.com/book/show/6493208-the-immortal-life-of-henrietta-lacks The Piltdown Hoax: https://www.livescience.com/56327-piltdown-man-hoax.html How to pronounce “Neanderthal”: https://www.discovermagazine.com/planet-earth/is-it-neander-tal-or-neander-thal Frozen Fauna of the Mammoth Steppe by Dale Guthrie: https://press.uchicago.edu/ucp/books/book/chicago/F/bo3774765.html
WiteSand, a startup that aims to take enterprise networking to the cloud, has emerged from stealth mode, announcing its raise of $12.5 million in seed funding to date from institutions and angels. WiteSand, founded in 2019, consolidates on-premise networking tools into a unified cloud-delivered service and enables companies to monitor, secure and manage their enterprise network infrastructure. In a statement to the press, WiteSand outlined the complications that medium to large enterprises face in networking due to employee mobility and hybrid work environments. Managing networking solutions on-premise is also tedious and time-consuming, it added.

San Francisco-based Side, a real estate technology company helping independent brokerages turn into brands and businesses, has announced an additional $50 million fundraise on top of the $150 million Series D round it raised in March 2021. The additional funding values the company at $2.5 billion, more than a two-fold increase from the $1 billion valuation attained during the Series D.

Elo7, a Brazilian online marketplace, will be acquired by Etsy, an American eCommerce company, for $217 million. Elo7, one of Latin America's most popular e-commerce sites, would give Etsy a far wider market presence, with 1.9 million active customers, 56,000 active merchants, and around 8 million items for sale. With its existing management team located in Sao Paulo, Brazil, Elo7 will continue to function independently.

Dell has released Omnia, an open-source software suite aimed at making the deployment and administration of AI and compute-intensive jobs easier. The suite consists of an Ansible playbook for deploying converged workloads with Slurm and containers, plus library frameworks, services, and apps. 
Omnia may utilize Slurm or Kubernetes to build workload management clusters, and it attempts to reuse existing projects rather than starting from scratch.

Camions Logistics Solutions Private Limited ("GoBOLT"), a tech-based logistics startup, has announced that it has raised $20 million in a Series B funding round led by Paragon Partners Growth Fund II and existing investor Aavishkaar Capital. Small supplementary components, as well as debt lines from private banks, are included in the round.

Mastercard has made a strategic equity investment in Instamojo, a payments and online commerce platform. Sampad Swain, Akash Gehani, and Aditya Sengupta established Instamojo as a small business payment processor in 2012. According to the two firms, the investment would benefit MSMEs as well as gig economy workers such as small food and beverage operators, electricians, tutors, and others.

Fleet management solutions provider LocoNav has announced its Series B fundraise of $37 million, led by Quiet Capital, Anthemis Group and Sequoia Capital India, reports state. The proceeds would be used for expanding into the US and other emerging markets, building partnerships, and making strategic acquisitions.

Tapcart, a SaaS platform enabling e-commerce merchants to launch and manage mobile apps for their brands, has announced its Series B fundraise of $50 million in a round led by Left Lane Capital. Shopify, SignalFire, Greycroft, Act One Ventures and Amplify LA participated in the round. The company had raised $10 million in Series A funding led by SignalFire last year. With the current funding, the total amount raised by Tapcart crosses $65 million, as per Crunchbase.
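The Dell Omnia item above describes an Ansible playbook that stands up clusters with either Slurm or Kubernetes as the workload manager. As a hedged illustration of what such an Ansible-driven deployment looks like in practice, here is a sketch; the playbook name, inventory layout, and variable are assumptions for the example, not confirmed Omnia documentation:

```shell
# Hypothetical invocation of an Ansible playbook that deploys a cluster
# with a chosen workload manager. Names below are illustrative assumptions.
ansible-playbook omnia.yml \
  -i inventory \             # hosts grouped into manager and compute nodes
  -e "scheduler=slurm"       # or "scheduler=kubernetes" for the container path
```

The appeal of this style is that the same inventory of machines can be re-provisioned for batch HPC or container orchestration by changing one variable rather than maintaining two deployment stacks.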
Disappointed not to see Amazon take the opportunity to increase its executive diversity with its new CEO.

A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure.

This week's highlights: If Amazon was the royal family, this would be like Harry becoming King. Google found slugs in its lettuce and is not happy about it. Azure wants to shut The Cloud Pod up for good this time.

General News: Nothing Spicy. Sysdig is releasing unified cloud and container security with the launch of Unified Threat detection across AWS cloud and containers. Interesting that it uses Cloud Custodian.

Amazon Web Services: No Longer Hiring. Tableau CEO Adam Selipsky will return to Amazon Web Services as CEO. We did not see this coming. Introducing Amazon S3 Object Lambda. They listened to us!

Google Cloud Platform: Slurm It Up. Google Cloud caps sales commissions as losses mount. This will remove the motivation to go after smaller deals. Google announces a new method of obtaining Compute Engine instances for batch processing. We thought it was attacking our workloads but it actually wasn't — our bad. Google is announcing the preview of its Network Connectivity Center. No potatoes, thankfully. Announcing the newest set of features for Slurm running on Google Cloud. Worst name ever. Google announces A2 VMs are now generally available with the largest GPU cloud instances with NVIDIA A100 GPUs. Is this the computer version of scalping tickets? Google announces high-bandwidth network configurations for General Purpose N2 and Compute Optimized C2 Compute Engine VM families. We'd love to know what the technology is behind this.

Azure: Not Happy With The Cloud Pod. Azure announces plans to expand the Azure Availability Zones to more regions. We'll take credit for this one. 
TCP Lightning Round. After a large amount of debate about who should win, Jonathan takes this week's point, leaving scores at Justin (3), Ryan (3), Jonathan (5).

Other headlines mentioned: General availability: Enhanced Azure Dashboards experience for pinned Log Analytics parts; Azure Monitor SQL insights for Azure SQL in public preview; Announcing AWS Media Intelligence solutions; Amazon EC2 now supports UEFI boot when migrating virtual machines to EC2; Amazon EKS reduces cluster creation time by 40%; Amazon EC2 Auto Scaling Instance Refresh now supports phased deployments; Amazon RDS for MySQL now supports rollback protection for database major version upgrades; Amazon QLDB Increases Verification APIs Throughput by an Order of Magnitude; AWS announces Developer Preview release of opinionated deployment tool for .NET CLI; Leverage state of the art Natural Language Processing with Hugging Face and Amazon SageMaker; Amazon QuickSight launches Custom Tooltips, Updates to Anomaly Detection, and More; AWS Cost Categories now supports inherited and default values; AWS Glue Studio now supports transforms defined in SQL; Cloud Spanner launches point-in-time-recovery capability.

Things Coming Up: Microsoft Build — May 19–21 (Digital); Google Cloud Next — not announced yet (one site says Moscone is reserved June 28–30); Google Cloud Next 2021 — October 12–14, 2021; AWS re:Invent — November 29–December 3, Las Vegas; Oracle Open World.
In this week's episode we chat about cooking trends on TikTok, traumatic cheese experiences, and how Pokemon knowledge somehow still counts as general knowledge. Patrik rambles on a bit about Final Fantasy 7 and talks about a hot new rumor of a new Nintendo console. We also keep the quizzing going and take a deep dive into some of the most common character archetypes. This week's tip: Final Fantasy 7 Remake on PlayStation Plus. (Free in March)
Hoopy Froods Matt, Ian, Walter, and Josh grab a towel, some Slurm, and ponder the great mysteries of our universe – Are we alone? Will we ever find alien life? Is Josh actually a Vogon?
Welcome to the podcast Slurm, the highly addictive future soft drink secreted from a giant alien worm! This week we talk about more fallout from the Mike Clevinger/Zach Plesac situation, the new Cincinnati Reds COVID scare and the Miami Marlins recovery process, including whether or not they should consider bringing back the players who got infected or just roll with the replacements they have. In non-COVID news, Charlie Blackmon is chasing a .400 average for the season, even though it's severely shortened, and his teammate Daniel Bard is dealing after missing seven seasons due to mental health concerns. Somebody should get that dude a movie. In our Uni-Watch segment, we tip our caps to MLB's handling of Negro League Day and Lovecraft Country's faithful recreation of Jackie Robinson's uniform. Finally our Futurama Picks of the week: Attack of the Killer App (Ben), Murder on the Planet Express (Jacob) This Week in Blernsball is proud to be joining the AZSE Network of podcasts. Check out their full lineup at http://azsenetwork.com/ To learn how to make your own Slurm, check out How To Drink's YouTube at: https://www.youtube.com/watch?v=DLRLlXHdMlo
Blast off to the year 3000 to hear Chris, Sam and John share the horrifying truth about Slurm, and then […] The post ADtempt 003 – Slurm appeared first on ADtempted.
Changes at DC! X-Men! Lego Mario! Black Widow! DAVID HARBOUR! Go-Bots? What? We also talk about what's new for cosplayers at the 2020 edition of the Puerto Rico Comic Con. With Rikky Carrión, executive producer of the Puerto Rico Comic Con, Valería Montoya (@PunkyVal) and Pedro Valle Javier (@PeteValle). Join the conversation! Find us at facebook.com/prcomiccon, twitter.com/prcomiccon or instagram.com/prcomiccon! Send us a message! Take part in the Question of the Week here: https://bit.ly/2W6ojRH Enjoyed it? Give us ⭐️⭐️⭐️⭐️⭐️ in your favorite podcast app! Don't have your ticket for the Puerto Rico Comic Con? Get it here! bit.ly/2020TiX
In this interview with NERSC HPC Systems Engineer Chris Samuel, learn all about Slurm: the life cycle of a job submitted to the batch queue, how Slurm decides which jobs should run when, and tips and tricks on making Slurm work for you.
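As a rough companion to the job life cycle discussed in that interview, here is a minimal sketch of a Slurm batch script and the commands a user typically runs around it. The script name, job ID, and resource values are illustrative; partition and account are left to site defaults, so check your center's documentation:

```shell
#!/bin/bash
# hello.sbatch -- a minimal batch script sketch (file name is illustrative).
#SBATCH --job-name=hello        # name shown in the queue
#SBATCH --ntasks=1              # a single task
#SBATCH --time=00:05:00         # wall-clock limit
#SBATCH --output=hello_%j.out   # %j expands to the job ID

srun hostname                   # the job step that actually runs

# Typical life cycle from a login node:
#   sbatch hello.sbatch   # submit; the job enters the batch queue as PENDING
#   squeue -u $USER       # watch it move from PENDING to RUNNING
#   sacct -j <jobid>      # inspect the accounting record once it completes
#   scancel <jobid>       # or cancel it at any point
```

The scheduling decision the episode describes, which jobs run when, happens between the PENDING and RUNNING states, based on factors such as priority, fair-share, and backfill opportunities.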
Live from aisle four of the Weaver Street Market grocery store, it's the Raleigh Bitcoin Slurms
What's All Happening at SC19 in Denver?

Supercomputing '19 is coming to Denver this year and who better than Rich Brueckner to give us a sneak peek. Super excited to have Rich and his signature laugh on this show again. In this podcast, the Radio Free HPC team reviews the full list of ancillary events at SC19, and Henry gives us one more reason to stay offline. Oh, and a few predictions. There's a lot that happens before the exhibit floor opens on Monday night. Our old pal Rich Brueckner from insideHPC joins us to give us the full rundown.

SC19 Ancillary Events:

HP-CAST. HPE's user group meeting starts things off on Friday, Nov. 14 - Saturday, Nov. 15. This two-day event will be the first HP-CAST meeting with Cray in the fold, so we're looking forward to some great insight as to how the two companies will merge their product lines, partner networks, and HPC ecosystems.

Intel HPC Developer Conference. In this two-day event on Sunday, Nov. 15 - Monday, Nov. 16, Intel offers a robust program to connect the HPC community with Intel developers, Intel engineers, and industry experts. "We'll help you tackle your HPC challenges by offering a wide range of learning opportunities at SC19."

HPC Day with The Next Platform. Making its debut at SC19, HPC Day on Sunday, Nov. 17 is an in-depth day with thought leaders at the front of high performance innovation. In a series of on-stage interviews (no slides) with industry thought leaders, The Next Platform explores what's relevant to the future of supercomputing.

Arm HPC User Group. Now in its fifth year, this all-day event takes place on Monday, Nov. 18 at the Curtis Hotel in Denver. "This is not a Marketing event -- we have a full day agenda of strategic partners and end-users from all regions of the world sharing their experiences, best practices, plans, ecosystem advances, and results on Arm-based platforms for HPC applications."

Dell EMC HPC Community. Kicking off at 8:00am on Monday, Nov. 
18, the Dell HPC Community meeting will feature keynote presentations by HPC experts and a networking event to discuss best practices in the use of Dell EMC HPC Systems. Attendees will have the unique opportunity to receive updates on HPC strategy, product and solution plans from Dell executives, technical staff, and technology partners.

DDN User Group. Starting at 1:00pm on Monday, Nov. 18, the DDN User Group brings together the best and brightest scientists, researchers and technologists to share and learn how leading global HPC organizations are executing cutting-edge initiatives that are transforming the world. The goal of the event is to gather the community during SC to discover how HPC organizations are assessing and leveraging technology to raise the bar on HPC innovations and best practices.

NVIDIA 2019 Special Address. You're invited to attend the NVIDIA 2019 Special Address from founder and CEO Jensen Huang. The event takes place 3:00pm - 5:00pm on Monday, November 18. Last year's address featured spectacular cosmology visualizations computed on NVIDIA GPUs. What will be revealed about accelerated computing on stage this year? Don't miss it. You must RSVP to attend.

Beowulf Bash at SC19. After the SC19 show floor closes on Monday night, the Beowulf Bash is the party not to miss. "This year, we thought it would be great to do a Stranger Things theme party. There will be 80s-style entertainment, games, and the best 80s tribute band. Food, beverages, entertainment, and Eggo Waffles provided."

Hyperion Research HPC Market Briefing Breakfast. Starting at 7:00am on Tuesday, Nov. 19, this informative briefing from Hyperion Research is always standing-room only. Get there early!

Nimbix Lounge Party. On Tuesday night, Nimbix will host its 7th Annual Lounge Party in Denver. "We invite you along with our co-host Intel to enjoy an evening of entertainment, cocktails and delicious food at White Pie."

Lunch and Learn - Getting a Handle on HPC Cloud Costs. 
Starting at noon on Wednesday, Nov. 20, this lunch event will share the many advantages of using a cloud spend management platform and how to avoid expensive mistakes when migrating HPC workloads to the cloud. This event is recommended for anyone considering the use of cloud for HPC workloads and will be particularly useful for attendees running Slurm, Univa Grid Engine, or open-source Grid Engine. The session will focus on real-world deployment examples and provide technical demonstrations that show how hybrid clouds can be deployed efficiently and cost-effectively across multiple cloud providers.

Check out the insideHPC Events Calendar and send an email to news@insideHPC.com if your organization is sponsoring a function at SC19 and you'd like it listed.

Why No One Should Ever Be Online. Ever.

Henry tells us even internet "domain name registrars" are not immune, describing breaches at NetworkSolutions.com, Register.com and Web.com which eventually led them to ask customers to reset their passwords. They apparently discovered the hack in August 2019, in which customer account information was accessed.

Listen in to hear the full conversation. * Download the MP3 * Subscribe on iTunes * RSS Feed * Follow us on Twitter * Sign up for the insideHPC Newsletter
Fry wins a fabulous contest to tour the Slurm factory and party with Slurms McKenzie when Andy and Scott review Futurama, Season 2, Episode 4, "Fry and the Slurm Factory". Find more Why Not Futurama? through the official RF4RM social media channels: Web | Twitter | Facebook | Instagram Rate, review, & subscribe to Why Not Futurama? on: Apple Podcasts | Google Play | Stitcher Your feedback is appreciated. Send emails to podcast@rf4rm.com
In this week’s Conversations in the Cloud, we are joined by Esther Baldwin, Artificial Intelligence Solutions Architect at Intel. Esther talks about AI for Good and how she was inspired by her mother to make a difference in the world. She encourages young people entering the industry to have courage – to talk to experts and leaders, to learn and grow, and not to let themselves be held back. Esther recommends following fellow Intel star Riva Tez at https://twitter.com/rivatez for inspiration on this. A Forrester study noted that pre-configured and verified IT infrastructure was a key strategy for addressing solution complexity. Esther notes that our work isn’t complete unless solutions are consumable by customers at scale. On the topic of HPC & AI converged clusters, there’s a perception that if you want to do AI, you must stand up a separate cluster, which Esther notes is not true. Existing HPC customers can do AI on their existing infrastructure with solutions like HPC & AI converged clusters. However, running three workloads – HPC, AI, analytics – together can be tough, with the main problem being that they all have their own software stack and libraries that are optimized for specific applications. With a shared infrastructure, it can be challenging to run these all at the same time. The queuing system creates difficulties and even the underlying file system is incompatible. This is where Intel® Select Solutions comes in. Intel Select Solutions help people leverage experience with a pre-configured path to get a faster time to value. Intel® Select Solutions for HPC & AI Clusters offers users a quick start for those in the HPC environment wanting to run AI workloads. There are two options for Intel Select Solutions for HPC & AI Clusters – an open source version with Magpie and Slurm and a commercially available version with Univa Grid Engine. Intel offers a family of Intel Select Solutions for HPC and AI. 
Building on the foundation of Intel® Select Solutions for Simulation & Modeling, customers can also utilize solutions for simulation & visualization and genomics analytics, in addition to AI solutions like BigDL on Apache Spark and AI Inferencing. Esther notes that the HPC & AI convergence is already a trend, with AI becoming part of a wide variety of workloads. This solution will enable new HPC and AI use cases, in addition to seeing lower total cost of operations, better cluster management, and stronger workload performance. More information on Intel Select Solutions and Intel HPC solutions is available at www.intel.com/selectsolutions and www.intel.com/hpc.
Kids, how time flies...! Futurama launched twenty years ago! In Episode 2 we devote (more or less) two hours to the wild space cartoon series from Simpsons creator Matt Groening. Expect nothing less than a true fun-fact marathon, featuring among other things: love for @weldebier, plus ambivalent feelings about @cocacola_de and mobile gaming. Crack open a Slurm™ and have fun! Many thanks to our supporter, Mick Crisp (www.instgram.com/mickcrisp), who with @_nintendoh_ and @mariokemon has launched two very interesting and funny projects. By the way, we're still extremely happy about five-star ratings, comments, and of course subscriptions on iTunes, Spotify, and all other podcast platforms. Likes, follows, and recommendations on social media don't hurt either! ;-) Stay relaxed, stay Backloggers!
After last episode's jump into the world of conspiracy and aliens controlling our governments, I thought we’d bring it back down to earth and turn our attention to the world of animation; in particular, that of one Matt Groening. We’ll be discussing various things such as our personal favourite episodes and characters, the YouTube video “The Day The Simpsons Died”, the seemingly never-ending debate of The Simpsons vs Futurama, and how to properly pronounce his name. Is it “Graining”? Is it “Groaning”? And is he part of the global elite? So grab a Slurm, order a Krusty burger and sit back and listen to the four of us ramble on about cartoons like any of this matters.
It's All About Punk Show Episode 6. Hardcore punk, show in C-Squat in NY, russian and post-soviet punk and hardcore. Flower on the photo. If you have questions about this show or you want to play this podcast on your radio, contact me itsallaboutpunkshow@gmail.com. Ramones - 53rd & 3rd Lumpy and The Dumpers - I'm gonna move to NY Namatay sa ignay - Sakripisyo Exotica - Caminando SIGNAL - Rat Pink Eye Pobreza Mental - Falsa Vida Asylum - Riding High Flower - Distraction From Atrocity Tørsö - Eating Scraps Halshug - Indre Fængsel TARANTÜLA - Hunting at the Zoo URBANOIA - Kontroll LØVVE - You can't understand Humanity is a Curse - Aphotic Ca$halot - Izhevsk City Hardcore Людоед - Путешествие К Концу Degenerative Behavior - Веревки и кляп Rat's Eyes - Digital Priority МРАЗЬ - Таня НАЗАРБАЕВ ТЕРРОР МАШИН - Науқас IKNOW - Обвиняя жертву The Slurm - Рейтинг Minuala - Хаос во тьме
Sponsored by Slurm. An anachronistic tomb lay before us this episode. What's worse, the true prize lies beyond a very straightforward door. What skills could possibly be useful for opening it? It will probably be violence. We are running this game using a Powered by the Apocalypse system called Broken Worlds, written by Tom Parkinson-Morgan, the creator of Kill Six Billion Demons. The game can be found on his Patreon here: Broken Worlds
The boys are joined by actual couple Alex Frew and Kendra Vaughan! Will the impressionable Ariel fall for wandering pirate Jack Sparrow? Will Slug Love thrive with Jabba and the Slurm Queen? Plus, who will be the best match for The Good Place's Janet? Find out in our most heated matchmaker segment to date!
Call the show at 612-643-1108 or email transatheistpod@outlook.com Facebook: http://www.facebook.com/transatheistpod Twitter: http://www.twitter.com/transatheistpod Instagram: https://www.instagram.com/transatheistpod/ This podcast is proud to be a part of the Trans Podcaster Visibility Initiative: https://www.facebook.com/transvisiblepodcaster/ Main show page is http://www.thequeerlife.org/category/transatheistpod/ The Trans Lifeline is http://www.translifeline.org/ US: (877) 565-8860 and Canada: (877) 330-6366 Quick links 00:20 Christianity is Oppressed 07:31 Intro to Slurm Flirty Worm 47:53 "Dr. Oolongs Couples Therapy" by Slurm Flirty Worm This episode features an interview with a really fun punk band from the Pacific Northwest, Slurm Flirty Worm. Slurm's drummer and backup vocalist (Abbi) is trans, and a listener local to Slurm helped connect Abbi and me. The whole band, Abbi, Michael, & Zakk are awesome, and I had a lot of fun chatting with them about how they got their name, what kind of music they play, and of course which hot pockets are the best hot pockets. Links: Slurm Flirty Worm Bandcamp: https://slurmflirtyworm.bandcamp.com/ Slurm Flirty Worm Facebook: https://www.facebook.com/slurmflirtyworm/ Slurm Flirty Worm Twitter: https://twitter.com/SlurmFlirtyWorm Obsidian Shell: http://www.obsidianshell.com/ Jokes How can you tell a timberwolf from a grey wolf? From its bark! What do you call an everyday potato? A commentator Why is that new cell phone wearing glasses? Because it's lost all its contacts! Thanks again for downloading and listening!
Listen in to find out why NERSC switched to Slurm as its batch system and job scheduler, how Slurm schedules jobs, and how you can get jobs through the queue faster in this interview with NERSC Computational Systems Group lead Doug Jacobsen.
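As a companion to the interview, here is a minimal sketch of a Slurm batch script illustrating the levers Jacobsen's advice turns on: jobs that request only the resources and walltime they actually need are easier for the backfill scheduler to slot into idle gaps, so they clear the queue sooner. The partition name and application path below are placeholders, not NERSC's actual configuration.

```shell
#!/bin/bash
# Request only what the job needs: tighter requests, and especially an
# accurate walltime, make a job a better candidate for backfill.
#SBATCH --job-name=demo          # name shown by squeue
#SBATCH --nodes=2                # number of nodes
#SBATCH --ntasks-per-node=32     # tasks (e.g. MPI ranks) per node
#SBATCH --time=00:30:00          # walltime limit; shorter helps backfill
#SBATCH --partition=debug        # placeholder partition name
#SBATCH --output=slurm-%j.out    # %j expands to the job ID

srun ./my_app                    # launch tasks across the allocated nodes
```

Submit with `sbatch job.sh`, watch the queue with `squeue -u $USER`, and inspect job priorities with `sprio`.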
It takes more than 12 parsecs to fly through Solo: A Star Wars Story. Chewie is the human rancor; L3 is stuck in the Ghostbusters' mainframe; there's battlebots bars, Lost Boys, Dr. Zaius, and more. A new award is given, and predictions are graded. Sit back with a can of Slurm and enjoy.
FINALLY these nerds talk about Batman! Who gives a shit about Green Lantern (thanks Pat)!? Gimme that dark, brawny boy who lurks in the night...as a kid...3 seasons in... We're talking about Gotham with the final Slurm-er Tommy Roulette! Come hear the ups and downs of the (sorta) Batman series! Don't forget, if you want to be on the show or have something you want to tell us, email us at secondshotpod@gmail.com or call and leave a voicemail at 216-309-0942 #Podcast #Comedy #Batman #Gotham #Fox #ZeroYear #SexAndTheCity #Penguin #Joker #Solo #StarWars #Slurmcast #Futurama #Cleveland #ohio
This week on There’s No Place Like Terra: Jonas and Teal’c team up to investigate the Slurm factory, the Tok’ra deal with some double standards, and Nixie & Grace reminisce about PBS children’s shows. You can find us at: @terrapodcast on twitter facebook.com/theresnoplaceliketerra patreon.com/theresnoplaceliketerra theresnoplaceliketerra@gmail.com
Nenad Stojanovski and Andrew Hoying join Mark and Melanie this week to discuss Forseti - open source tools for Google Cloud Platform security. Nenad Stojanovski is a Staff Security Engineer at Spotify. Andrew Hoying is a Senior Security Engineer at Google. His goal is to ensure all services built by Google and running on Google Cloud Platform have the same, or better, security assurances as services running in any other environment. He is also a top contributor to the Forseti Security open-source project, helping enterprises monitor and secure their GCP environments. Cool things of the week Shopify's Infrastructure Collaboration with Google blog Kubernetes Engine Private Clusters now available in beta blog Easy HPC clusters on GCP with Slurm blog Understand your spending at a glance with Google Cloud Billing reports beta blog Interview Forseti Security site docs github Google Cloud Shell site docs Forseti Security Question of the week How do I automatically scan the Docker images in my Google Container Registry for known vulnerabilities? Scanning Vulnerabilities in Docker images blog Container Registry Vulnerability Scanning docs Where can you find us next? Melanie will be speaking about AI at Techtonica on April 11th, and April 14th will be participating in a panel on Diversity and Inclusion at the Harker Research Symposium
Eddie Fetts, galacto-opera soprano, hates getting up in the morning. Giving up the warmth, stepping out into the cold of outer space, squeezing onto the train or the bus (modern-day dragons), responsibility. Every morning a trauma. He, like you no doubt, would much rather stay under the duvet, in a near-uterine ecstasy, five more minutes: nice and warm. Which is why it is strange to find him among the wreckage of a ship crashed on an unexplored planet in the Horsehead Nebula, in the company of the only other survivor: his mater. ~ Spoiler-free guide ~ [0:08:30] We learn Eddie's marital history [0:18:00] The panrads appear. [0:31:00] Change of scene [0:38:00] Small slimy creatures [0:46:30] A long explanation of the stones' way of life [1:07:00] Communication between humans [1:12:30] The mother recovers from the scare [1:22:00] Not that! Not that!... Well, yes. Apologies for the total lack of energy in my voice that day; I was going to redo and improve it, but just like last year a total whirlwind is bearing down on me that will take me out of commission because I need to do other things, so rather than leave it shelved I decided to upload it as is. In any case I will get to number 137 someday and the karma will be repaid; for now I leave it at another prime number that is also cool (113). [There may be big issues with the volume of the sounds or other annoyances; I will give it a review, and if anyone has an opinion it is appreciated. I will remove this note later.] [semi-spoilers] "Mother" was published in 1953 (!) in the magazine Thrilling Wonder Stories. It is divided into eight acts, but I refrained from saying "part two", "part three", and so on between them. Never reissued in Spanish on its own, the story was collected in 1960 in "Strange Relations" (tr. "Relaciones Extrañas"). All the stories in that collection involve some kind of inter-species romance, and "Mother" has a sequel of equal quality: "Daughter". It is great to bring a story by a namesake of Dick's, Philip José Farmer, to the Other Authors list.
His stories certainly stray from the conventional view of the genre; he treats carnal and spiritual relationships in a visceral way, without taboos or squeamishness, and they must have caused controversy in their day. Just as well that the "serious people" did not read this "science fiction" nonsense, gnah... Another thing that characterizes him is the detailed creation of alien life forms (and their societies), working out everything from their biology to the ways they organize themselves, reproduce, and so on. He became famous in 1963 for the novel "The Lovers" ("Los amantes"), which had originally been rejected by John Campbell, editor since 1938 of Astounding (Science Fiction), whom I imagine we have talked about before. Incidentally, as a writer we owe Campbell the story that inspired "The Thing" (John Carpenter's film is called just that, but the original is "The Thing from Another World", recommended for when you are in the mood for fifties science fiction films; what a treat. [Available online here, in Spanish: https://drive.google.com/file/d/0Bz9jWNeBbZrFa1hPaDJHVVVfVms/view ]) Getting back on track: Campbell called the novel "nauseating". About "The Lovers" I quote literally from this obituary in El País, the newspaper that fell off the horse: "In the story, Hal Yarrow, a member of a puritanical human community and at the time a married man, is sent to the planet Ozagen, homeland of the insectoid 'wogglebugs', where he lives out a romance fou (and how fou!) with an enigmatic woman named Jeannette, in reality a mimetic parasite, the lalitha, who takes human form in order to mate and becomes pregnant through a strange system of photographic orgasm. 'You have fathered larvae!' a colleague snaps at the protagonist. 'Monsters of an unholy union! Insect children!'"... [ https://elpais.com/diario/2009/02/27/necrologicas/1235689201_850215.html ] Several things about this story appeal to me: symbolism as a narrative device, taking psychological ideas and giving them flesh and blood, the visceral, unconventional atmosphere with a touch of horror, handling soap-opera themes without ceasing to be science fiction, and the effort to make the alien species described credible. Farmer's prose is not the most fluid, but it is very deep. Behind each word there is much more than its plain semantic meaning: the sensations it evokes, its connections, and above all its symbolic meanings. That is fitting, since the story's breeding ground is psychoanalytic theory. How about bringing into science fiction the idea that our early experiences shape how we relate to the world and how we build our ego and self-esteem? The story does not live on the Oedipus complex alone (man's problems arise from repressing the drive that makes his mother his object of desire), although the protagonist literally finds satisfaction for that longing (that of bedding his mother). Besides Freud we must bring in Jung for the handling of symbols (the teat-like spout of the whisky bottle, the stew pot, the uterine cavity of the rock, and so on). At the start of the story the doctor mentions that there are different kinds of "symbolic" therapies: A, B, C... (but none of them works; only she can properly care for her baby). Other psychologies appear too: we could say Eddie Fetts is a good representation of the "Child" in Transactional Analysis. But above all there is attachment theory. Attachment is the bond formed between mother and child. Hard-line behaviorists believe it is a complex form of "imprinting" (imprinting is the innate mechanism that leads birds to follow the first thing they see on hatching. Most likely that will be their mother, but if you put a shoe in front of them they will follow the shoe. There is a great example in Konrad Lorenz's unmissable, brilliant little book "Hablé con las bestias, los peces y los pájaros", published in English as "King Solomon's Ring"). Fortunately, the cognitivists of the sixties realized the story was more complex, and that the types of attachment developed in childhood condition the adult's style: how they relate to the world, how they handle frustration, how they identify their desires, and so on. There are some very cool experiments (in contrast with Harlow's behaviorist experiments, in which he deprived macaques of maternal love) in which researchers observed the behavior of small children with their mother and a stranger in the same room, and then when the mother left the room (Ainsworth's Strange Situation experiment). At that moment some children kicked and fussed or withdrew, while others went on exploring. The latter is what was called "secure attachment". It seems that an over-protective mother who does not give her child enough independence (and in the story several lines suggest that Dr. Fetts really does not want to give up her role of protective Mother: "he would end up telling her voluntarily", "she is the only one who knows how to care for him", and so on) produces insecure children. In any case the protagonist has stayed stuck in the oral stage, like me, still a smoker. On the other hand, although excessive attachment is bad, having no attachment figure at all is bad too. From what we read we can suppose Eddie already came "damaged" from home (we do not know whether that is down to him or to an excessively controlling, protective mother), but the critical moment when he falls into depression comes after a romantic disappointment. It turns out a normal adult can transfer the object of desire from the mother to himself, or from the mother to another person.
When we take a very hard knock and the connection with that attachment figure is severed (a bad breakup, for example), the self can be left alone and shattered, with zero confidence (secure attachment) and above all incapable of being happy, which is to say: depressed; and that is what Eddie has. Not even hauling the mutilated bodies of his crewmates from one room to another truly moves him (nor does his broken plate); he does not express himself emotionally, and so on. Supposedly a healthy adult would not project onto his partner the figure of the good mother who makes the child feel secure enough to explore; but how do you reconcile that adult independence without losing the lovely pleasure of loving one another as if we were sparrows (which never abandon their mates)? Ah, my boy! Not only does the story handle themes more typical of melodrama in a space setting, those themes become material characters in the story (à la Cronenberg). And Farmer does not build a womb-character gratuitously; he works out, in operational terms, what that new organism would be like, what its life cycle is, and so on. That, it seems to me, is something that separates good science fiction from bad. For example: Avatar was incredible in its day, with those images, the Na'vi, their people, their village and the couple of animals that appeared, but the fun lies in imagining how that alien ecosystem would actually work. It is very cool that they reproduce with iridescent, glowing, awesome colored tails... but why? Did evolution make them that way? Stanislaw Lem is another great builder and describer of alien intelligences (I still recommend the stories collected in "Máscaras", edited by XXX), but for him communication between species is almost always impossible. Not here, far from it; it is just that the new beings communicate by radio waves. Why not? It seems useful. Not only is there a love-sex relationship between a human and an alien, the communication works without a hitch from the very first moment.
In part five of the story he explains how the organism is first a snail (Sluggo) and, when it reaches the size of a pig, is expelled. It looks for a free hill or, better, the lifeless body of an old being on which to settle. How it extends its tentacle-roots and extracts the molecules it needs to build anything, provided it has had contact with that object and been able to encode its pattern and store it in its "cerebellum" (curious that he uses the word cerebellum rather than brain or hippocampus). In this sense the Mothers are like factories. Better: like organic 3D printers, one of the first times I have seen this concept, though I doubt it is the earliest. He also invents how they reproduce and how that system can allow "errors", that is, mutations, Darwinian adaptability. Wild concepts I enjoyed: the sliminess omnipresent through the entire second half of the story, its "mucilaginousness": the sluggos, the tentacles, the cavity that spits out food as stew, and so on. It is appreciated that the cavity through which food is spat out is called an "iris" and not a "sphincter". Likewise the hole that exudes smells and inside which a stew boils. This reminded me again of the anus-monsters in the film "Naked Lunch". I cannot find the aria "Marinero Antiguo" by Giannelli, though I have tried hard. Just as the name Sluggo reminds me of Futurama's Slurm and the slug in "Faith of Our Fathers", I like the name Eddie gives the Mother: Polyphema. I love the story of the cyclops Polyphemus; it is great for telling children, and whenever I corner a nephew I tell it ("no one has blinded me, no oooone"). In this case it refers to her capacity to transmute, to change shape. As always I have to bring in a Red Dwarf episode as a reference, here "Polymorph". "So good."
It is a cool detail for the plot that, logically, when the organism dies its sphincter/iris opens to release any offspring there may be; that way the author makes Dr. Fetts's idea of planting a carcinoma (never, ever search for the word carcinoma in Google Images) in his Mother less of a deus ex machina (when something happens in a plot only so the plot can be resolved, without being justified by the overall cosmos of the story). At the end, when Eddie tells his Mother that the doctor is his real mother, she goes into shock; it is a trauma. It is anathema. Mobiles are male. It is not conceivable within her world, but above all it strikes at her self-concept, at her role in transactional psychology: she is the mother, the protector, the caregiver, the one who gives. So she develops a neurosis as a defense mechanism. The story also touches other psychological concepts: the defense mechanisms that guard our identity ("it covers the memories of the event with a sign that says 'Do Not Touch'"), or post-traumatic stress. The moment when Eddie "satisfies his Oedipus complex" and, afterwards, is nearly ground up and dumped into her stomach. When the newborn sluggos drop and, on hitting the ground, radio emissions begin to be heard, it is like the slap the doctor gives us on the back so that we start to breathe: the primal scream. The first trauma: birth. And a trauma every morning, climbing out of the bed (womb) and plunging into the industrial grind of the metro and commuter trains. It is perfect when Eddie is resigned to, even delighted with, staying inside while the sluggos keep being born and thrown out, and he curls into a ball and only wants to stay snuggled: an image as pathetic as it is tender. And time running backwards at the beginning and end of the story as a metaphor, which is quite radical for something written in the fifties.
Details such as Eddie needing to "name" things (he gives his piano a name) so as not to drift like a lost sailor who cannot follow his landmarks along the coast should make us think of animism (endowing objects with spirit). This is characteristic of "magical thinking". According to anthropologists, "magical thinking" is what tribes exhibit, identified by animism and by magical associations between objects or events based on proximity or contingency. Children have it too. Clearly Eddie has not developed into a full-fledged adult, and one can draw an analogy, as sociologists do, between the thinking of earlier tribes and that of today, and between a child's thinking and an adult's (a comparison that is, for that matter, open to criticism). And of course the panrads, which we are told early on are a kind of space-age Swiss Army knife, including a radio transmitter-receiver and a battery that can keep it running for years: without this detail there would be no story. Likewise the ultra-fine thread (thirty meters long when unwound) that holds three thousand operas, songs, and the whole library of Trantor. A good way to pass the time inside the belly of the whale (I mean, of the stone): listening to audiobooks. Illustration: Ñuke Mapu (Mother Earth in Mapuche). Closing music: "Teardrop" (Massive Attack). Tags: audio story, science fiction, alien, extraterrestrial, planet, slime, womb, tentacle, terror, fear, horror, radio, message, communication, psychology, attachment, mother, child, children, father, security, anxiety, depression, childhood, trauma, mystery, paranormal, monster, life, stone, plant, animal, literature, history, story, tale, stories, audiobook, voice, human, Philip, Philip J. Farmer, Farmer, narration, ambience, reading, book.
Eddie Fetts, soprano de galacto-opera, odia levantarse por las mañanas. Renunciar al calor, salir al frío espacio exterior, meterse en el tren o el bús (dragones modernos), la responsabilidad. Cada mañana un trauma. A él, como a ti sin duda, le gustaría más quedarse debajo del edredón, en un éxtasis casi uterino, cinco minutos más: ca-len-ti-to. Por eso es raro que le encontremos entre los restos de una nave estrellada contra un planeta inexplorado de la Nebulosa Cabeza de Caballo en compañía del otro superviviente: su mater. ~ Guía sin spoilers ~ [0:08:30] Conocemos la historia conyugal de Eddie [0:18:00] Aparecen las panradios. [0:31:00] Cambio de escenario [0:38:00] Pequeñas criaturas babosas [0:46:30] Explicación larga sobre la forma de vida delas piedras [1:07:00] Comunicación entre humanos [1:12:30] La madre se recupera del susto [1:22:00] Eso no! Eso no! .. Pues sí. Lo lamento por la falta total de energía en la voz ese día, iba a repetirlo/mejorarlo pero igual que el año pasado se viene encima una vorágine total que me va a dejar fuera de juego porque necesito hacer otras cosas así que para no dejarlo aparcado decidí subirlo así. De todos modos llegaré al 137 algún día y el karma habrá sido retribuido, por ahora lo dejo en otro número primo que también mola (113). [Puede haber enormes dramas con el volumen de los sonidos o cosas molestas, le daré una revisión y si alguien opina algo se agradece, luego quitaré esta nota] [semi-spoilers] “Mother” se publico en 1953 (¡) en la revista Thrilling Wonder Stories. Está dividido en ocho actos pero me abstuve de decir “parte segunda”, “tercera”, etc entre cada una. Sin haberse vuelto a editar en castellano este relato se integró en 1960 en la colección "Stange Relations" (t.Relaciones Extrañas). Todos los relatos recogidos cuentan algún tipo de romance inter-especies y “Madre” tiene una secuela de igual calidad: “Hija”. Es genial traer un cuento de un tocayo de Dick, Philip Jose Farmer, a la lista de Otros Autores. 
Sus relatos desde luego se alejan de la visión convencional del género, trata las relaciones carnales y espirituales de una forma visceral, sin tapujos ni remilgos y tuvieron que causar polémica en su época, menos mal que la “gente seria” no leía estas patochadas de la “ciencia ficción” ,gnah… otra cosa que le caracteriza es la creación detallada de formas de vida alienígena (y sus sociedades) operativizando desde su biología hasta sus maneras de organizarse, reproducirse, etc. Se hizo famoso en 1963 por la novela “Los amantes” que había sido rechazada en su momento por John Campbell, editor desde 1938 de Astounding (Science Fiction) udel queimagino que ya hemos hablado alguna vez. Por cierto como escritor le debemos el cuento en que se inspiró "The thing" (la peli de Cronenberg se llama así pero la auténtica se llama "The thing from another world", recomendable para cuando se tiene antojo de pelis cincuenteras de ciencia ficción, jo, que disfrutonas. [aquí online pero en castellano https://drive.google.com/file/d/0Bz9jWNeBbZrFa1hPaDJHVVVfVms/view ] ) Retomando: Campbell la calificó de “nauseabunda”. Sobre esta novela ("Los amantes") cito literal de este obituario del País, el periódico que se calló del caballo: “En la historia, Hal Yarrow, miembro de una comunidad humana puritana y a la sazón casado, es enviado al planeta Ozagen, patria de los insectoides “wogglebugs”, donde vive un romance fou (¡y tan fou!) con una enigmática mujer llamada Jeannette, en realidad un parásito mimético, la lalitha, que toma forma humana para aparearse y queda preñada por un extraño sistema de orgasmo fotográfico. "¡Has engendrado larvas!", le espeta un colega al protagonista. "¡Monstruos de una unión impía! 
“Insect children!”… [ https://elpais.com/diario/2009/02/27/necrologicas/1235689201_850215.html ] There are several things I like about this story: the symbolism as a narrative device, the way it takes psychological ideas and gives them flesh and blood, the visceral, unconventional atmosphere with a touch of horror, the way it handles soap-opera themes without ever ceasing to be science fiction, and the effort to make the alien species it describes believable. Farmer's prose is not the most fluid, but it runs very deep. Behind every word there is much more than its plain semantic meaning: the sensations it evokes, its connections, and above all its symbolic meanings. That fits, since the story's breeding ground is psychoanalytic theory. What do you make of bringing into science fiction the idea that our early experiences shape how we relate to the world and how we build our ego and self-esteem? The story does not live on the Oedipus complex alone (man's problems arise from repressing the drive to make his mother his object of desire), although the protagonist literally finds satisfaction for that longing, the desire to sleep with his mother. Besides Freud we have to bring in Jung for the handling of symbols (the nipple-spout of the whisky bottle, the stew pot, the uterine cavity in the rock, and so on). Early in the story the doctor mentions that there are different kinds of “symbolic” therapies: A, B, C… and so forth (but none of them works; only she is capable of looking after her baby properly). Other schools of psychology appear too: we could say Eddie Fetts is a fine embodiment of the “Child” of Transactional Analysis. But above all there is attachment theory. Attachment is the bond that forms between mother and child. The strict behaviorists believe it is a complex form of “imprinting” (imprinting being the innate mechanism that makes birds follow the first thing they see on hatching — most likely their mother, but if you show them a shoe they will follow the shoe; there is a wonderful example in Konrad Lorenz's unmissable, brilliant little book “Hablé con las bestias, los peces y los pájaros”). Fortunately, in the sixties the cognitive psychologists realized the story was more complex, and that the attachment styles developed in infancy condition the adult individual: how they relate to the world, how they cope with frustration, how they identify their desires, and so on. There are some great experiments (in contrast to Harlow's behaviorist experiments, in which he deprived macaques of maternal love) that observed the behavior of small children when they were in a room with their mother and another adult, and then when she left the room (Ainsworth's Strange Situation experiment). At that moment some children kicked and withdrew into themselves, while others went on exploring. The latter is what was called “secure attachment”. It seems that an overprotective mother who does not give her child enough independence (and in the story several lines suggest that Dr. Fetts really does not want to give up her role as Protective Mother: “he would end up telling her of his own accord”, “she is the only one who knows how to care for him”, etc.) produces insecure children. In any case the protagonist is stuck in the oral stage — like me, since I'm still a smoker. On the other hand, while excessive attachment is bad, having no attachment figure at all is bad as well. From what we read we can suppose Eddie already came “damaged” from home (we don't know whether that was his own doing or the work of an excessively controlling, protective mother), but the critical moment when he falls into depression comes after a romantic disappointment. A normal adult, it turns out, can transfer the object of desire from the mother to himself, or from the mother to another person. When we take a truly hard blow and the connection with that attachment figure is cut (a painful breakup, for instance), the self can be left alone and destroyed, with zero confidence (the secure attachment gone) and above all incapable of being happy — in other words, depressed; which is what is wrong with Eddie. Not even carrying the mutilated bodies of his crewmates from one room to another truly moves him (nor does his broken plate); he shows no emotion, and so on. Supposedly a healthy adult would not project onto his partner the figure of the good mother who makes the child feel safe enough to explore — but how do we reconcile that adult independence with the lovely pleasure of loving each other like sparrows (which never abandon their mate)? Oh, my friend! Not only does the story handle themes better suited to a melodrama in a space setting, those themes become material characters of the story (à la Cronenberg). And Farmer does not build a womb-character gratuitously: he works to invent, in operative detail, what that new organism would be like, what its life cycle is, and so on. That, to me, is something that separates good science fiction from bad. For example: Avatar was incredible in its day, with those images, the Na'vi, their people, the village and the couple of animals that appeared — but the real fun lies in imagining how that alien ecosystem would work. It's very cool that they reproduce with iridescent, glowing, colored tails… but why? Is that how evolution made them? Stanislaw Lem is also a great builder-describer of alien intelligences (I still recommend the stories collected in “Mascaras”, edited by XXX), but for him communication between species is almost always impossible. Not here, far from it: the new beings simply communicate by radio waves. Why not? It seems useful. Not only is there a love-sex relationship between a human and an alien, but communication works without a hitch from the very first moment.
In part five of the story he explains how the organism is first a slug (Sluggo) and, when it reaches the size of a pig, is expelled. It looks for an unoccupied hill — or better, the lifeless body of an old being — on which to settle. How it extends its root-tentacles and extracts the molecules needed to build anything at all, provided it has had contact with that object and been able to encode its pattern and store it in its “cerebellum” (curious that he uses the word cerebellum rather than brain or hippocampus). In this sense the Mothers are like factories. Better: like organic 3D printers — one of the first times I've seen this concept, though I doubt it's the earliest. He also invents how they reproduce, and how that system allows for “errors” — mutations — Darwinian adaptability. Wild concepts I enjoyed: the viscosity that pervades the whole second half of the story, its sliminess: the sluggos, the tentacles, the cavity that spits out stewed food, and so on. It's appreciated that the cavity through which she throws out food is called an “iris” and not a “sphincter”. Likewise the hole that exudes smells and inside which a stew boils. It brought back to mind the anus-monsters of the film “Naked Lunch”. I cannot find the “Ancient Mariner” aria by Giannelli, though I have tried hard. Just as the name Sluggo reminds me of the Slurm of Futurama and the slug of “Faith of Our Fathers”, I like the name Eddie gives the Mother: Polyphema. I love the story of the cyclops Polyphemus — it's great for telling children, and whenever I can corner a nephew I tell it to him (“nobody has blinded me, nobodyyy”). In this case the name refers to her capacity to transmute, to change shape. As always I have to bring in a Red Dwarf episode as a reference — the shape-shifter one: “Polymorph”. Brilliant. It's a nice detail for the plot that, logically, when the organism dies its sphincter/iris opens to free any offspring there may be; that way the author makes Dr. Fetts's idea of planting a carcinoma (never, ever search for “carcinoma” in Google Images) in her Mother less of a deus ex machina (when something happens in a plot only so the plot can be resolved, without being justified by the story's overall cosmos). At the end of the story, when Eddie tells his Mother that the doctor is his real mother, she goes into shock; it is a trauma, an anathema. Mobiles are male. It is inconceivable within her world, but above all it strikes at her self-concept, her role in transactional psychology: she is the mother, the protector, the caregiver, the one who gives. So she develops a neurosis as a defense mechanism. The story references still other psychological concepts: the defense mechanisms that protect our identity (“she covers the memories of the event with a sign that says ‘Do Not Touch’”), or post-traumatic stress. The moment when Eddie “satisfies his Oedipus complex” — and, after sleeping with her, is nearly ground up and thrown into her stomach. When the newborn sluggos fall and, on hitting the ground, begin to emit radio transmissions, it is like the slap the doctor gives us on the back so we start breathing, the primal scream. The first trauma: birth. And a trauma every morning, climbing out of bed (the womb) and into the industrial grind of the metro and commuter trains. It's perfect when Eddie is resigned to — even delighted with — staying in there, while the sluggos keep being born and thrown outside and he curls into a ball and only wants to stay huddled up: an image at once pathetic and tender. Time running backwards at the beginning and end of the story, as a metaphor. Which is very radical for something written in the fifties. Details like Eddie needing to “name” things (he names his piano) so as not to drift like a lost sailor who cannot follow his landmarks along the coast should make us think of animism (endowing objects with spirit). That belongs to “magical thinking”. According to anthropologists, “magical thinking” is what tribes have, and it is marked by animism and by magical associations between objects/events through proximity or contingency. Children have it too. Clearly Eddie has never developed into a full-fledged adult, and one can draw an analogy, as sociologists do, between the thought of earlier tribes and present-day thought, and between a child's thought and an adult's (a comparison that is, for that matter, open to criticism). And of course the panrads, which we are told early on are a kind of space-age Swiss Army knife, including a radio transmitter-receiver and a battery that can keep it running for years: without this detail there would be no story. Also the ultra-fine wire (thirty metres long unrolled) that holds three thousand operas, songs, and the whole library of Trantor. A good way to pass the time inside the whale's belly (I mean, the rock): listening to audiobooks. Illustration: Ñuke Mapu (“Mother Earth” in Mapuche). Closing music: “Teardrop” (Massive Attack).
In this episode, recorded live at Gary Sohmer's South Coast Comic Con in a long-closed JC Penney at the Hanover Mall, Ken welcomes Pete & Pete: Danny Tamberelli and Michael C. Maronna. Ken, Danny and Mike discuss Pete and Pete reunions, nostalgia, The Beach Boys, US patents, All That, Figure It Out, Nickelodeon, Tool & Die, Halloweenie, Mr. Slurm, Hal Hartley, Bacon Neck, Steve Buscemi, the Tarantino crossover, Inspector 34, Janitors, how Hunter S. Thompson was NOT on The Adventures of Pete and Pete, Columbo, Home Alone, the lost Adventures of Pete & Pete Season 3 DVD set, setting fires, exploding honey bears, Slackers, Puppetry of the Penis, Suck/Jerk/Weed, Werewolf, the perils of music rights, Snow Day, and the wonders of Undressed. Ken then welcomes Billy West to the stage to discuss Nick in the 90s along with Danny and Mike. They discuss Love and Rockets, the burping room, Toby Huss, MTV promos, the origins of Ren & Stimpy, Moxie, Malort, rejected scenes, censored scenes, sharing an office with the State, cosplaying, and getting into the voice-over business.
This episode, the CO goons talk about food, foodie culture, food science, weird food-related hobbies, and more. It's a regular comida of errors! Links to some of the stuff mentioned in this episode: http://www.nytimes.com/2009/09/14/business/energy-environment/14borlaug.html https://en.wikipedia.org/wiki/Golden_rice https://www.theguardian.com/science/blog/2010/sep/14/chilli-hot-food http://www.bbc.co.uk/worldservice/sci_tech/highlights/010730_chillies.shtml http://sitn.hms.harvard.edu/flash/2012/issue131b/ https://www.youtube.com/channel/UC2I6Et1JkidnnbWgJFiMeHA
Ever wondered what happens when you put four crazy-tired nerds in a room and give them Slurm? Listen to this episode and try to find out. It may take you a few tries... Anyway, this week is all about Futurama's first season and original run. Stay tuned to hear this week's homework question. And if you haven't liked, commented, subscribed, and rated... why not?? Come on! Really, guys? Don't you love us?
Finally, after three years, a new episode! Thanks for sticking with us during the unplanned hiatus. During the recording we forgot about our "lost" episode 20 that we totally did record, so that's where the Episode 20 references are coming from. In this episode we watch the (as yet) series finale "Meanwhile", and also cook up some Slurm! Plus we talk about the upcoming Futurama mobile game (which since recording is now available for pre-registration on Google Play!), and other fun stuff. The Slurm video is coming soon - listen for a brief clip at the end of the show! Download the podcast from Archive.org or your favourite podcatcher! Available on iTunes, Stitcher, and Google Play. Some links we mentioned: A Trip to the Moon, and Fan-O-Rama - the Futurama fan film.
Today's episode is brought to you by Slurm! It's highly addictive! But just what is the secret ingredient, exactly? Talking points include worms, remembering to tell you when things are worms, secret ingredients, and of course, telling Grunka Lunkas that you hate them. Ben puts his own hand down his throat (and out his compartment...?). Mike reveals his not-so-secret soda obsession.
OpenShift Ansible Tower CloudForms Red Hat OpenStack Platform SC16 2016 Formula 1 Petronas Malaysia Grand Prix Singularity Dmac! Slurm Jeremy Eder on D&G! Shadowbox We Give Thanks Jamie Duncan for being our special guest star! Special Guest: Jamie Duncan.
It's been a tumultuous week here in the United States. The election has certainly got a lot of people in a state of not-calmness, so we thought we'd avoid talking politics altogether and give everyone a safe space full of breathing room while we talk about food. Throughout the science fiction and fantasy genres, there have been moments where culinary concoctions make it into our bubble of awareness. From blue milk to Romulan ale, Klingon gagh to Slurm, there are probably more dishes out there than you might think. Because oftentimes it's just part of the scenery, something in the background. Pay it no mind. Except this week, we pay it a great deal of mind. Because talking politics right now is likely to get you a side-eye.
Jacking on with Special Guest Dan Holahan (@superdan042) ________________________________________ FOLLOW SLURMCAST ONLINE! www.facebook.com/slurmcast Twitter: @slurmcastpod Instagram: @slurmcastpod Call or text us: 1-216-438-1077 (message and data rates may apply) http://slurmcast.libsyn.com/ slurmcastpod@gmail.com Produced and edited by HateCat Inc © 2016 HateCat Inc
A giant, smelly piece of garbage. With Special Guests Sarah Pivovar & Rachel Keaton.
Fry becomes rich and buys a lot of stuff. Most importantly, he buys the last known can of anchovies. With Special Guest: Sebastian King from Thundecougarfalconhawk (https://tcfh.bandcamp.com).
Slurm? Who the heck is Slurm?? Saltwater fish, Sammie the dog, and some homemade music...
Joshua, aka Slurm, has been playing since 2009. He is one of the resident DJs of Lost Beach in Montañita. He is also part of Global Unity Movement, one of the most experienced production houses in Ecuador. He currently maintains his label "Savia Park", a project started in late 2013 featuring the best producers of his country and supported by international remixers.
In this adventure Big Pete is forced to take the deadly metal and woodshop class. He isn’t happy about having to work with the “socketheads” and even less thrilled to be under the dictatorship of Mr. Slurm. Everyone in the … Continue reading → The post WBA 005: Tool and Die | Adventures of Pete & Pete appeared first on Welcome Back, Artie..
Highly random and innovative, the SB podcast is back to talk about a mixed bag of subjects: the underground world of Pac-Man and his ghostly adventures, the manga about the guy who defeats his enemies with a single punch, and — so nobody thinks reality shows are just BBB and bizarre stuff — King of Nerds. Participants: Thiago Sepúlvida, José Manoel and Lucas Sepúlvida. Download here: SB 21 - Pacman, One Punch-man e King of Nerds. Our feed: http://feeds.feedburner.com/SBcast iTunes: https://itunes.apple.com/br/podcast/sbcast/id664373593 We don't ask for it, but if you like you can get in touch with us: Twitter Facebook Email: sbcastpodcast@gmail.com
Hello! It's not often I have fought a giant demon baby, but let's face the facts... sometimes you are going to have to deal with a giant demon baby. I discuss: DMC: Devil May Dry
Good news everyone, it's time to take a trip to the 31st century with Damon Shaw, Mike Ortiz, Pete Lucas, Yossi Bloch, and Ben McCullough as they take a look at the fine folks of the Planet Express delivery service, in the BEST OF FUTURAMA. Get yourself powered up on booze (or Slurm) and prepare to kiss my shiny metal ass. ALL GLORY TO THE HYPNOTOAD And of course, the brackets: PDF and Excel format.
Happy Thanksgiving! This week it's "Futurama," the Matt Groening cartoon about all the crazy goings-on in the 30th century. We also play Name That Toon and close out the show with a song from TonyBear. Enjoy! Song: Vegan Hearts by TonyBear Follow Sketchy: SketchyPodcast.com facebook.com/SketchyPodcast twitter.com/SketchyPodcast SketchyPodcast@gmail.com
In this week's episode, Mike and Steve discuss the 13th episode of Season 1, "Fry and the Slurm Factory", and try to determine whether or not it's the grossest episode in Futurama history. They also discuss some of Futurama's other parodies, and cover some Futurama news at the top of the show. Download now for the thrilling discussion - one that even forgets to mention that the Slurm contest has Fry looking for a bottle cap in a can. Download from Archive.org. Screen capture of "3D Scrabble". Music Credits: Nelly Furtado - Big Hoops (Bigger the Better) (Spirit Indestructible)
This time the saga decided to open our very own corner store, where we'll bring in a bit of everything from extraordinary and not-so-surprising universes — the products we've always wanted to try, from lembas bread to every-flavour beans.