Part 2 picks up where we left off in Part 1. Spike shares details of his West Coast road trip, the one where he shopped for a city to move to and possibly lay down roots. It was 1993 and, of all those West Coast cities, San Francisco won. "The energy, the feeling that you belonged, the creative draw" all contributed to Spike's decision to move to The City. "This is where I wanted to be," he says. He had $600 to his name, which was possible back then. He rented a basement room and got a job as a caddie at SF Golf Club. Spike saw a newspaper ad for a creative assistant at an advertising agency, and he got an interview. The other candidates came prepared with portfolios. They were all design-school grads. Not Spike. He brought in painted golf balls and comics. John McDaniels (famous for the "Pardon me, do you have any Grey Poupon?" ads) ran the agency and hired Spike. They bonded over comics, of all things. They became friends over the two years Spike worked for John, and enjoyed (I mean, really enjoyed) lunch together every Friday. Then, in 1995, a New York agency bought the firm and hoped to force John into retirement. They took Spike to lunch and offered him more money and a promotion. But Spike saw how they thought of his mentor, and decided to bail. He took a buyout and went to Paris for a year, where he drew comics and took language classes. He tried to get his comic, Man vs. Woman, syndicated in newspapers. That didn't work out, but it was a learning experience. And so Spike came back to his 4,000-square-foot loft in South of Market, kept the comics going, and bartended at spots all over SF. One of the places he sent his single-panel comics to was The New Yorker. He'd included a bottle of wine in one of his shipments, and that helped him stand out. Spike got an invitation to the magazine's office the next time he was in NYC. Folks at the table that day told him to go experience life, but keep doing comics.
One of the things they told him to do was paint. And so, upon his return to The City, Spike picked up a paintbrush. Eventually, he enrolled in a master's program in painting at the San Francisco Art Institute (RIP), but never graduated. He made important connections at the school, though, and picked up skills along the way. He kept bartending while going to SFAI. When he stopped going to grad school, he realized that his life had two streams—bars on the one hand, and art on the other. In 1997, his buddy Alex had the idea to take over Jack's, a bar/venue at the corner of Fillmore and Geary. Alex asked Spike to help open the new spot—newly dubbed The Boom Boom Room—and Spike agreed. They started with the gutted shell of a space. They aimed to create a classic Fillmore-style juke joint, a throwback to the neighborhood's incredible legacy. Folks from the hood brought in photos of old spots, and Alex and Spike did their best to recreate that look and feel. While opening The Boom Boom Room with Alex, Spike got to know many musicians, some of whom play at Madrone to this day. After Boom Boom opened, though, Spike went on to bartend at other spots around town, places like Tunnel Top, Tony Nik's, and Paragon. His first kid was on the way, and he was trying to figure out a way to make more money. Managing a place could mean more money, but he didn't want to manage for anyone else. He wanted to be his own boss. For the next five years, Spike developed a vision of what it could mean to have his own place. Along the way, he'd sometimes stop in at The Owl Tree and chat with Bobby, the owner. He thought, "I could do a place like this." He mentioned buying the place, but Bobby wasn't ready. Then Bobby told Spike, "OK, when I'm ready, I'll sell it to you. But I'm not done!" Bobby died a month after that, and so it never happened. Then the spot that would become Madrone became available.
Madrone Lounge opened in 2004. Spike would come to the hood a lot and liked the place. He knew the original owner, Layla, from their time at SFAI. Spike and I sidetrack just a bit to talk about the history of the building and the space. Built in 1886, it was formerly a pharmacy. That shut down after the 1989 earthquake, and Burger King, seeking a 30-year lease, wanted to take over. But folks in the immediate area opposed that plan. It was then that Layla got a liquor license and opened Madrone Lounge. Layla ran the place for the first four years, until the day-in, day-out grind took its toll. And so she began to think about selling the place, but not to just anybody. She wanted the new owner to share a similar vision of what the place could be. Needless to say, that person was none other than Spike Krouse. But it didn't happen overnight. Spike wasn't able to get the money together, but they had talked about the place enough that Layla came to realize how right it would be for him to take over. Shortly after Spike's dad passed away, Spike got the call on his first cellphone. Layla told him that she was about to list the place, but would sell to him if he was interested. He didn't have enough money for a name change or a temporary closure, so Spike just took the reins and went with it. He started reaching out for mentors and investors, one of whom ended up being the then-owner of Tunnel Top, who came through in a big way. Spike wasn't going to change the place itself, but he wanted to run things a little differently, and he knew there would be folks who wouldn't stick around. To get things going, Spike put himself in the role of every employee, which also gave him an idea of what it was like to visit the place. He made the changes he felt needed to be made, in whatever time it took. It was 2008, and when Obama was elected in November, the street party was off the hook. At this point, Spike knew he was in the right place.
Some employees from back then are still with Madrone today. Some kids of those employees are around, even. That says so much. At this point in the recording, I go off to Spike, gushing about how much I love Madrone and how I'm sorry that I only really discovered it about five or six years ago. About the New Orleans vibe of Madrone, Spike said he had never been there when he started putting that aesthetic together. That's amazing, but you'll have to just see for yourself. Speaking of seeing for yourself, I hereby invite you all to the Storied: SF Season 6 Wrap Party Happy Hour, happening tomorrow night (Wednesday, Aug. 21) from 6 to 9 p.m. There'll be free Brenda's Meat and Three (while supplies last), free music, drinks, and just good vibes all around. I really hope you can make it! We end this podcast and Season 6 with Spike's take on our theme this season—we're all in it. See you tomorrow or in October, when we come back with the first episode of Season 7! We recorded this podcast at Madrone Art Bar in May 2024. Photography by Jeff Hunt
In this episode, we meet the humans behind the artistic and cultural project that is the TNT Traysikel. We start, in random order, with Mike Arceaga. Mike was born in the Philippines and moved to LA with his family when he was 10. He says that the transition from his homeland to LA was difficult. The family first landed in Highland Park, which Mike points out wasn't hip then. That's where he got started doing graffiti art. In the mid-to-late Eighties, they moved, first to LA's Eagle Rock neighborhood, then to Pomona, by which time he'd become a full-fledged graffiti artist. He says it's what got him into art. In high school, Mike learned technical drawing. He went to junior college with art school on his mind. He was in a hip-hop crew, tagged ramps, and was friends with skaters, but never skated himself. He also breakdanced, but says it never took. After high school, he just wanted to get out of his parents' house, and so he signed up to join the Army. But when Mike's dad found out about that, he cried and urged him to go to school instead. And so Mike visited San Francisco to attend a summer program at the Academy of Art University. And he fell in love with The City almost immediately. He shares the moment of coming up the escalator at Powell BART and seeing the scene on the street as the moment SF got his heart. He loved walking around the hills before art class, where he was starting to meet artists from all over. And slowly, he discovered the rest of The City by hopping on Academy shuttles. Soon after this summer program, Mike came back to visit the Art Institute. When he and a friend saw the view from the roof at SFAI, he decided to try to get into school there. Next, we meet TNT Traysikel's Paolo Asuncion. Paolo came to the US from the Philippines when he was 14. Before that migration, he had found his first girlfriend as well as a friend group that didn't bully him. The move abroad disrupted that progress.
Paolo's family first came to Ontario, California, just outside of LA and not far from where Mike and his family were. His mom had met a family in church, and she and her three kids lived with them, a family of four crammed into a single bedroom. He went to high school all over LA, first in Echo Park (before it was hip), then in the Rampart District, and at Torrance High (think Fast Times at Ridgemont High). Then Paolo's mom put him in Marshall High in Los Feliz (think Grease). Paolo's dad was a fairly famous actor back in the Philippines. But when he moved to the US to be with family, he ended up managing the apartment building where they lived and doing door-to-door sales. His parents soon got divorced, and his dad went back to his home country. Paolo went to Diamond Bar High School his senior year (which he says was very Breakfast Club-ish). He started playing guitar, which he says got him in with the cool kids. He even formed a band, but after high school, he went back to the Philippines, where he got his girlfriend pregnant. Then Paolo moved back to Glendale in Southern California. He was still on a tourist visa and tried to get jobs that would sponsor his work visa, which was difficult. One day, his uncle in LA offered to help him move to SF. They left Glendale at 10 at night, drove up I-5 to 580, then crossed the Bay Bridge at sunrise. Looking out the windshield at the scene in front of him, Paolo thought, WHAT IS THIS PLACE? He spent a week here on that trip, during which time he had the same Powell escalator experience as Mike. He loved it so much that he decided to move here. A friend of his uncle's got him a graphic design job, and in 1996, he moved here. Last but not least, we meet Rachel Lastimosa. Rachel was born and raised in San Diego, the kid of a Navy sailor, which is how her dad got his U.S. citizenship. Members of Rachel's family have been in SF since the Forties, and when she was a kid, they visited here a lot from San Diego.
Rachel's first memories of San Francisco involve mostly touristy things. From a young age, 12 or so, she knew she wanted to live here. Rachel says she loved the culture here and felt a friendliness from strangers unlike what she experienced back home in San Diego. She grew up in a strict house and, because of that, was into extracurricular activities. Her parents expected her to cook and do laundry, but she escaped into music—playing, writing, and performing. Rachel wrote her first song when she was in first grade. Today, she plays piano, keyboards, and bass, sings, and writes and produces music. Rachel says she always wanted to build community. She helped put together the first culture night at her high school. But as soon as she could, after graduation, she came to San Francisco. In fact, SF State was the only school she applied to. Once here, she joined a band and majored in electronic music. This was the early 2000s, and she's been here ever since. She writes scores for theater and film and has been in a few bands. A collaboration she did with the Filipino Center made her realize how art can bring communities together. Check back next week for Part 2 with Rachel, Paolo, and Mike. In it, they'll share the origin story for TNT Traysikel—the part motorcycle/sidecar, part karaoke machine, part mobile Filipino cultural pride project. We recorded this podcast at TNT HQ in South San Francisco in March 2024. Photography by Jeff Hunt
We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy. Send any questions you have about Speaker CFPs and Sponsor Guides!

Alessio is now hiring engineers for a new startup he is incubating at Decibel. Ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc.). Reach out to him for more!

Thanks for all the love on the Four Wars episode! We're excited to develop this new "swyx & Alessio rapid-fire thru a bunch of things" format with you, and feedback is welcome.

Jan 2024 Recap

The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024.

Feb 2024 Recap

The second half catches you up on everything that was topical in Feb, including:

* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan
* Google Gemini Pro 1.5 - 1m Long Context, Video Understanding
* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math)
* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)
* Grimes' poetic take: Art for no one, by no one
* F*** you, show me the prompt

Latent Space Anniversary

Please also read Alessio's longform reflections on One Year of Latent Space!

We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information.

Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!)
LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs.

The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake!

We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:

* Balázs Némethi
* Sylvia Tong
* RJ Honicky
* Jan Zheng

Our birthday wishes for the super loyal fans reading this: tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend! As always, feedback is welcome.

Timestamps

* [00:03:02] Top Five LLM Directions
* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)
* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)
* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)
* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)
* [00:23:33] Wildcards: Text Diffusion, RALM/Retro
* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)
* [00:28:26] Wildcard: Model Merging (mergekit)
* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)
* [00:33:18] OpenAI Sora and why everyone underestimated videogen
* [00:36:18] Does Sora have a World Model?
Yann LeCun vs Jim Fan
* [00:42:33] Groq Math
* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars
* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take
* [00:58:39] F*** you, show me the prompt
* [01:02:43] Send us your suggestions pls
* [01:04:50] Latent Space Anniversary
* [01:04:50] Lindy.ai - Agent Platform
* [01:06:40] RWKV - Beyond Transformers
* [01:15:00] Pixee - Automated Security
* [01:19:30] Julius AI - Competing with Code Interpreter
* [01:25:03] Latent Space Listeners
* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club)
* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)
* [01:31:23] Listener 3 - RJ (Developers building Community & Content)
* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)

Transcript

[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co-host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.

[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.

[00:00:55] AI Charlie: Watch out and take care.

[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and we're back with a monthly recap with my co-host

[00:01:06] swyx: Swyx.
The reception was very positive for the first one. I think people have requested this, and no surprise that I think they want to hear us opining on issues and maybe drop some alpha along the way. I'm not sure how much alpha we have to drop. This month, February, was a very, very heavy month. We also did not do one specifically for January, so I think we're just going to do a two-in-one, because we're recording this on the first of March.

[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the Four Wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state-of-the-art LLMs. Four, five,

[00:01:42] swyx: and now we have to do six, right? Yeah.

[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do

[00:01:52] swyx: one each.

[00:01:53] swyx: So the context to this stuff is one, I noticed that just the test of time concept from NeurIPS, and just in general as a life philosophy, I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, everyone's excited about this thing yesterday, and now nobody's talking about it.

[00:02:13] swyx: So, yeah. It's more important, or a better use of time, to spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the Four Wars: like, what are the themes that keep coming back, because they are limited resources that everybody's fighting over.

[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization.
Telling you, like, this one's more important than the other one, apart from, you know, Hacker News votes and Twitter likes and whatever.

[00:02:51] swyx: And obviously you want to get in a little bit earlier than something where, you know, the test of time is counted by sort of reference citations.

[00:02:59] The Five Research Directions

[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.

[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.

[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list, in the sense that I am not the guy saying that Mamba is like the future, and so maybe that's controversial.

[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)

[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter, in discussing the thesis that, you know, Code Interpreter is GPT-4.5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.

[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of stuffing everything in a prompt, you do sort of multi-turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a LangChain is supposed to be.

[00:04:15] swyx: I do think that maybe SGLang from LMSYS is a better name. It's probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one-liner; it's very, very clean code. I highly recommend people look at that.
I'm surprised it hasn't caught on more, but I think it will. It's weird that something like a DSPy is more hyped than an SGLang.

[00:04:36] swyx: Because it, you know, maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain-y and long inference type approaches. But basically, the fundamental insight is that there are only a few dimensions we can scale LLMs. So, let's say in like 2020, no, let's say in like 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.

[00:05:03] swyx: And we scaled that up to 175 billion parameters for GPT-3. And we did some work on scaling laws, which we also talked about in our Datasets 101 episode, where we're like, okay, we think the right number is 300 billion tokens to train 175 billion parameters. And then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, we think the optimal

[00:05:28] swyx: compute-optimal ratio is 20 tokens per parameter. And now, of course, with LLaMA and the sort of super-LLaMA scaling laws, we have 200 times and often 2,000 times tokens to parameters. So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?

[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into, because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.

[00:06:20] swyx: Like, what else is left? Like, what's the low-hanging fruit? And it's, like, blindingly obvious that the remaining low-hanging fruit is inference time.
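The scaling-law figures quoted here can be sketched as quick arithmetic (an editorial illustration of the numbers mentioned in the episode, not the hosts' own calculation):

```python
# Back-of-envelope check of the scaling-law numbers discussed above.
# The 20-tokens-per-parameter heuristic is DeepMind's Chinchilla rule of thumb;
# the GPT-3 figures (175B parameters, ~300B training tokens) are as cited.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Compute-optimal training tokens under the Chinchilla heuristic."""
    return n_params * tokens_per_param

gpt3_params = 175e9
gpt3_tokens = 300e9

# GPT-3 was trained at roughly 1.7 tokens per parameter...
print(f"GPT-3 actual ratio: {gpt3_tokens / gpt3_params:.1f} tokens/param")
# ...while the Chinchilla-optimal budget for 175B parameters would be 3.5T tokens.
print(f"Chinchilla-optimal for 175B params: {chinchilla_optimal_tokens(gpt3_params) / 1e12:.1f}T tokens")
```

The gap between those two numbers is exactly the shift from scaling parameters to scaling data that swyx is describing.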
So, like, we have scaled training time. We can probably scale those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe a good run of a large model is three months.

[00:06:40] swyx: We can scale that to three years. But can we scale that to 30 years? No, right? It starts to get ridiculous. So it's just the orders of magnitude of scaling; we're just running out there. But in terms of the amount of time that we spend inferencing, everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on whether you're taking it token by token or, you know, an entire phrase.

[00:07:04] swyx: But we can scale that to hours, days, months of inference and see what we get. And I think that's really promising.

[00:07:11] Alessio: Yeah, we'll have Mike from BrightWave back on the podcast. But I tried their product, and their reports take about 10 minutes to generate instead of, like, just in real time. I think to me the most interesting thing about long inference is you're shifting the cost to the customer depending on how much they care about the end result.

[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer, or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of training this for three years, I'll still train it for three months, and then I'll teach you how to make it run for 10 minutes to get a better result.

[00:07:52] Alessio: So you're kind of parallelizing the improvement of the LLM.
Oh yeah, you can even

[00:07:57] swyx: parallelize that, yeah, too.

[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about state of the art in the absolute; it's more about state of the art for my application, for my use case.

[00:08:09] Alessio: And I think we're getting to the point where most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, people are not really doing a lot of work in that space, so yeah, excited to see more.

[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as a paper. So all these directions are kind of guided by what happened in January; that was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it. Which I came to regret come February 15th. But in January there was also the AlphaGeometry paper, which I kind of put in this long inference bucket, because it solves, you know, more-than-100-step math olympiad geometry problems at a human gold medalist level, and that also involves planning, right?

[00:08:59] swyx: So if you want to scale inference, you can't scale it blindly, because just autoregressive token-by-token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing, and what everyone is doing, including maybe what we think Q* might be, is some form of search and planning.

[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you

[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic.
So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to store these plans and then reuse them for most people.

[00:09:44] Alessio: You know, like, I'm curious if there's going to be some paper or some work there on making it better, because, yeah, we don't

[00:09:52] swyx: really have. This is your pet topic of NPM for

[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.

[00:10:02] Alessio: You know, I think, I mean, obviously the Voyager paper is like the most basic example, where now their artifact is like the best plan to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful

[00:10:18] swyx: tasks.

[00:10:19] swyx: For plans? I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations-heavy business, and I could definitely propose some version of that. And it's just, you know, hard to execute, or expensive to execute.

[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know... And I think that that will probably be the main hurdle for any sort of library or package manager for planning. But there should be a meta-plan of how to plan.

[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people, when they have sort of these meta-prompting strategies, are like, I'm not prescribing you the prompt. I'm just saying that here are, like, the fill-in-the-blanks, the Mad Libs, of how to prompt.
First you have the roleplay, then you have the intention, then you have do something, then you have don't do something, and then you have the "my grandmother is dying, please do this."

[00:11:19] swyx: So the meta-plan you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf. I don't think they're very successful, because people like to write their own.

[00:11:36] swyx: Yeah,

[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)

[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic

[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, I feel like I should do one of these memes where it's like, oh, I used to call it RLAIF, and now I call it synthetic data, and then people are interested.

[00:11:54] swyx: But there have got to be older versions of what synthetic data really is, because I'm sure, you know, if you've been in this field long enough, there are just different buzzwords that the industry condenses on. Anyway, the insight that I think is relatively new, and why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.

[00:12:22] swyx: For all of 2023, when people said synthetic data, they really kind of meant generate a whole bunch of data from GPT-4 and then train an open source model on it. Hello to our friends at Nous Research. That's what Nous Hermes is. They're very, very open about that. I think they have said that they're trying to migrate away from that.

[00:12:40] swyx: But it is explicitly against OpenAI's Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for doing exactly that.
So synthetic data that is not a form of model distillation is the hot thing right now: that you can bootstrap better LLM performance from the same LLM, which is very interesting.

[00:13:03] swyx: A variant of this is RLAIF, where you have a sort of constitutional model, or, you know, some kind of judge model that is more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, generating more data in some way.

[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model, or a good sort of inductive bias, or whatever that term of art is. And that is strongest in math and science, math and code, where you can verify what's right and what's wrong.

[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing: in that domain, you can arbitrarily generate a whole bunch of stuff and verify whether it's correct, and therefore it's correct synthetic data to train on. Once you get into more fuzzy topics, then it's a bit less clear. So I think the papers that drove this understanding, there are two big ones and then one smaller one. One was WRAP, Rephrasing the Web, from Apple, where they basically rephrased all of the C4 dataset with Mistral and trained on that instead of C4.

[00:14:23] swyx: And so the new C4 trained much faster and cheaper than the old, regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing datasets and just do that, because that seems like a pure win.
Obviously we have to study what the trade-offs are.[00:14:42] swyx: I imagine there are trade-offs. So I was just thinking about this last night: if you do synthetic data and it's generated from a model, probably you will not train on typos. So once the model that's trained on synthetic data encounters its first typo, it'll be like, what is this?[00:15:01] swyx: I've never seen this before. So it has no association or correction, like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think the Apple people explored[00:15:15] Alessio: that. Yeah, isn't that the whole mode collapse thing, if we do more and more of this at the end of the day?[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then there's the Meta paper on self-rewarding language models that everyone is very interested in. Another paper was SPIN. These are all things we covered in the Latent Space Paper Club.[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's much else. And then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have. Like, OpenAI is paying Reddit 60 million dollars a year for their user-generated data.[00:15:56] swyx: Google, right?[00:15:57] Alessio: Not OpenAI.[00:15:59] swyx: Is it Google? I don't[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's[00:16:04] swyx: for sure. Yes, yeah, and I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting.
Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.[00:16:21] Alessio: Not enough to get the data,[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay, yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's no doubt that it works; the doubt is what the ceiling is, which is the mode collapse thing.[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by, I don't know, 30 to 50 percent. Good, but not game[00:16:51] Alessio: changing. And most of the synthetic data stuff is reinforcement learning on a pre-trained model. People are not really doing pre-training on fully synthetic data at large enough scale.[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre-training-scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard. So for all of these smaller directions,[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, let's say you've scraped all the data on the internet that you think is useful.[00:17:25] swyx: It seems to top out at somewhere between 2 trillion and 3 trillion tokens. Maybe 8 trillion if Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so you can do synthetic data maybe, but maybe that only gets you to like 30, 40 trillion. Like, where is the extra alpha?[00:17:43] swyx: And maybe the extra alpha is just: train more on the same tokens.
Which is exactly what OLMo did. Like, Nathan Lambert, AI2: just after he did the interview with us, they released OLMo. So it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on all data.[00:18:00] swyx: And the Datablations paper that I covered from NeurIPS says that you don't really start to tap out of the alpha, the improved loss that you get from data, all the way until four epochs. And so I'm just like, okay, why do we all agree that one epoch is all you need?[00:18:17] swyx: It seems to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, for improvements in results that we really like, we're fine with overtraining on things intentionally. So I think that's an interesting direction that I don't see people exploring enough.[00:18:36] swyx: And the more I see papers coming out stretching beyond the one-epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?[00:18:51] swyx: Like, that's not a valid reason, that's not science. I[00:18:54] Alessio: wonder if, you know, Meta is going to do it.[00:18:57] Alessio: I heard with Llama 3 they want to do a 100 billion parameter model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.[00:19:14] swyx: Yeah, and the update that we got on Llama 3 so far is apparently that, because of the Gemini news that we'll talk about later, they're pushing back the release.[00:19:21] swyx: They already have it, and they're just pushing it back to do more safety testing.
Politics testing.[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixer. I feel like Together is like the strong Stanford Hazy Research partnership, because Chris Ré is one of the co-founders. So I feel like they're going to be the ones that have one of the state-of-the-art models, alongside maybe RWKV.[00:20:08] Alessio: I haven't seen as many independent people working on this thing. Like, Monarch Mixer, Mamba, Hyena, all of these are Together-related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there working on all of this.[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes like, yeah, we don't need it. Yeah.[00:20:44] Alessio: No, that's the risk. So, yeah. I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers in the alt architectures, just because of Sora.[00:20:55] swyx: One thing, yeah. So, you know, this came from the Jan recap, and diffusion transformers were not really a discussion then, and then obviously they blew up in February. Yeah.
I don't think it's a mixed architecture in the same way that StripedHyena is mixed; there's just different layers taking different approaches.[00:21:13] swyx: Also, another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that. I just think we will try to evolve these things, and maybe one of these architectures will stick and scale. It seems like diffusion transformers are going to be good for anything generative, you know, multi-modal.[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wildcard for this category. Yeah, I mean, I think I still hold out hope for, let's just call it, sub-quadratic LLMs. I think a lot of discussion this month was also centered around this concept that people always say, oh, transformers don't scale because attention is quadratic in the sequence length.[00:22:04] swyx: Yeah, but attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, when you jump up in context size in GPT-4 from 8k to 32k, you don't also get a 16 times increase in your cost.[00:22:23] swyx: And this is also why you don't get a million times increase in your latency when you throw a million tokens into Gemini. People have figured out tricks around it, or it's just not that significant as a part of the overall compute. So there's a lot of challenges to this thing working.[00:22:43] swyx: It's really interesting how hyped people are about this versus, I don't know if it's exactly gonna work.
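The claim that the quadratic attention term is a small slice of total compute can be sanity-checked with the standard back-of-envelope FLOP counts. The formulas are the usual approximations (QKVO projections about 8·d², a 4x-expansion MLP about 16·d², the quadratic score and value matmuls about 4·n·d per token); the hidden size below is illustrative, not GPT-4's actual dimensions.

```python
# Rough per-token FLOP breakdown for one transformer layer, showing what
# fraction of compute comes from the quadratic-in-sequence-length part.

def quadratic_fraction(d_model: int, seq_len: int) -> float:
    proj = 8 * d_model**2          # Q, K, V, O projection matmuls
    mlp = 16 * d_model**2          # two matmuls with 4x hidden expansion
    attn = 4 * seq_len * d_model   # QK^T scores + attention-weighted values
    return attn / (proj + mlp + attn)

d = 12288  # GPT-3-scale hidden size, used here as a stand-in
for n in (8_192, 32_768):
    print(f"seq {n}: {quadratic_fraction(d, n):.0%} of layer FLOPs")
```

At this hidden size the quadratic term is only about 10 percent of layer FLOPs at 8k context, which is why quadrupling context does not quadruple total compute anywhere near 16x.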
And then there's also this idea of retention over long context. Like, even though you have context utilization, the amount you can remember is interesting.[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of RNN-ish, in the sense that they have a limited hidden memory, and they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting, because Gemini magically has fixed all these problems with perfect haystack recall and reasonable latency and cost.[00:23:29] Wildcards: Text Diffusion, RALM/Retro[00:23:29] swyx: So that's super interesting. So the wildcards I put in here, if you want to go to that. I put two, actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for text generation, reasoning, plan generation, if we can get diffusion to work[00:23:51] swyx: for text. And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval-augmented language models, where it kind of puts RAG inside of the language model instead of outside.[00:24:02] Alessio: Yeah, there's a paper called RETRO that covers some of this. I think that's an interesting thing. I think the challenge, well, not the challenge, what they need to figure out, is how do you keep the RAG piece always up to date, constantly. You know, I feel like with the models, you put all this work into pre-training them, but then at least you have a fixed artifact.[00:24:22] Alessio: These architectures need constant work done on them, and they can drift even just based on the RAG data instead of the model itself. Yeah,[00:24:30] swyx: I was on a panel with one of the investors in Contextual, and the way that guy pitched it, I didn't agree with.
He was like, this will solve hallucination.[00:24:38] Alessio: That's what everybody says. We solve[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures, then we got mixture of experts. I think we covered that a lot of times.[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?[00:25:00] swyx: DeepSeekMoE, which was released in January. Everyone who is interested in MoEs should read that paper, because it's significant for two reasons. Three reasons, actually. One, it had small experts, like a lot more small experts. For some reason, everyone has settled on eight experts, for GPT-4, for Mixtral. That seems to be the favorite architecture, but these guys pushed it to 64 experts, each of them much smaller.[00:25:26] swyx: But then they also had the second idea, which is that they had one to two always-on experts for common knowledge. And that's a very compelling concept, that you would not route to all the experts all the time and make them switch on everything; you would have some always-on experts.[00:25:41] swyx: I think that's interesting on both the inference side and the training side, for memory retention. And yeah, the results that they published, which actually excluded Mixtral, which is interesting, showed a significant performance jump versus all the other open source models at the same parameter count.[00:26:01] swyx: So this may be a better way to do MoEs that is about to get picked up. And that is interesting for the third reason, which is: this is the first time a new idea from China has infiltrated the West.
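The two DeepSeekMoE ideas just described, many fine-grained routed experts plus a couple of always-on shared experts, can be sketched as a toy routing layer. The expert counts follow the discussion; the router here is a random stand-in for a learned one, and all function names are illustrative, not the paper's code.

```python
# Toy MoE routing with DeepSeekMoE-style structure: shared experts always
# fire for every token, and a router activates only top-k of the many
# small routed experts. Experts are just string labels here.

import random

NUM_ROUTED = 64   # many small experts, vs. the usual eight big ones
TOP_K = 6         # routed experts activated per token
NUM_SHARED = 2    # always-on experts for common knowledge

def moe_layer(token: str) -> list[str]:
    # Shared experts are unconditional; the router scores the routed
    # experts (randomly, as a placeholder) and keeps the top-k.
    shared = [f"shared_{i}" for i in range(NUM_SHARED)]
    scores = [(random.random(), i) for i in range(NUM_ROUTED)]
    routed = [f"routed_{i}" for _, i in sorted(scores, reverse=True)[:TOP_K]]
    return shared + routed

print(moe_layer("hello"))  # 2 shared + 6 routed experts active
```

The point of the structure is that only 8 of 66 experts run per token, while the shared pair keeps common knowledge out of the routed experts entirely.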
It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeekMoE woke people up and said, hey, DeepSeek, this weird lab that is attached to a Chinese hedge fund, is somehow doing groundbreaking research on MoEs. So I classified this as medium potential, because I think it is sort of a one-off benefit.[00:26:37] swyx: You can add it to any base model to make the MoE version of it, you get a bump, and then that's it. So, yeah.[00:26:45] Alessio: I saw SambaNova, which is like another inference company. They released this MoE model called Samba-1, which is like 1 trillion parameters. But it's actually a MoE of open source models.[00:26:56] Alessio: So it's like they just clustered them all together. So I think people sometimes think MoE is like you just train a bunch of smaller models and put them together. But there's also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder, and putting them all together.[00:27:15] Alessio: And then you have a MoE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state-of-the-art, you know, CLIP, state-of-the-art text generation, and then you have a MoE architecture that brings them all together.[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what? Yeah, that's[00:27:35] Alessio: what they said. Yeah, yeah. Okay. That's what they... I just saw it yesterday. I was also like[00:27:40] swyx: scratching my head. And they did not use the word adapter. No.
Because usually what people mean when they say, oh, I add CLIP to a language model, is an adapter,[00:27:48] swyx: which is what LLaVA did. Let me look up the[00:27:50] Alessio: announcement again.[00:27:51] swyx: Stable Diffusion. That's what they do. Yeah, it[00:27:54] Alessio: says among the models that are part of Samba-1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MoE. Okay,[00:28:05] swyx: so a routing layer, and then not jointly trained as much as a normal MoE would be.[00:28:12] swyx: Which is okay.[00:28:13] Alessio: That's all they say. There's no paper, you know, so I'm just reading the article, but I'm interested to see how[00:28:20] Wildcard: Model Merging (mergekit)[00:28:20] swyx: it works. Yeah, so the wildcard for this section, the MoE section, is model merging, which has also come up as a very interesting phenomenon. The last time I talked to Jeremy Howard, at the Ollama meetup, we called it model grafting or model stacking.[00:28:35] swyx: But I think the term that people are liking these days is model merging. There are all different variations of merging: merge types, and some of them are stacking, some of them are grafting. And so some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which has its specific pluses and minuses, and we will merge them together in the hope that the sum of the parts will be better than the individual models.[00:28:58] swyx: And it seems like it's working. I don't really understand why it works, apart from, like, I think it's a form of regularization.
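The cheapest merge described here, plain weight averaging, really is just adding parameters together and dividing, no GPU needed. A minimal sketch, with toy dictionaries of lists standing in for real checkpoint tensors (real tools like mergekit offer fancier schemes such as SLERP or task arithmetic):

```python
# Simple weight-average merge: for each named parameter, take the
# elementwise mean across all input models. No gradients, no GPUs.

def merge_average(models: list[dict[str, list[float]]]) -> dict[str, list[float]]:
    merged = {}
    for name in models[0]:
        # Pair up corresponding elements across models, then average them.
        cols = zip(*(m[name] for m in models))
        merged[name] = [sum(vals) / len(models) for vals in cols]
    return merged

m1 = {"layer.weight": [1.0, 2.0]}
m2 = {"layer.weight": [3.0, 4.0]}
print(merge_average([m1, m2]))  # {'layer.weight': [2.0, 3.0]}
```

The regularization intuition fits this picture: averaging independently fine-tuned weights smooths out each model's idiosyncratic overfitting.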
If you merge weights together with a smart strategy, you get less overfitting and more generalization, which is good for benchmarks, if you're honest about your benchmarks.[00:29:16] swyx: So this is really interesting and good. But again, it's kind of limited in terms of the amount of bump you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the ChinaTalk podcast, the guest podcast that we did with ChinaTalk. And you can do this without GPUs, because it's just adding weights together and dividing things, doing simple math, which is really interesting for the GPU-poor.[00:29:42] Alessio: There's a lot of them.[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,[00:29:48] swyx: I think I had to feature this because one of the top news items of January was that Gemini Pro beat GPT-4 Turbo on LMSys for the number two slot, behind GPT-4. And everyone was very surprised. Like, how does Gemini do that?[00:30:06] swyx: Surprise, surprise, they added Google Search, mm-hmm, to the results. So it became an online, quote unquote, LLM and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table-stakes features after you pre-train something.[00:30:21] swyx: So after you pre-train something, you should have the chat-tuned version of it, or the instruct-tuned version, however you choose to call it. You should have the JSON and function calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all table-stakes variants that you should offer when you train a base LLM.[00:30:44] swyx: And I think online is just, there, it's important.
I think companies like Perplexity, and even Exa, formerly Metaphor, are rising to serve that search need. And it's kind of like, they're just necessary parts of a system. You have RAG for internal knowledge, and then you have online search for external knowledge, things that you don't know yet.[00:31:06] swyx: Mm-hmm. And it seems like it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I think it has some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has had online LLMs for three months now, and it doesn't perform great[00:31:25] swyx: on LMSys. It's like number 30 or something. So it's like, okay, you know, it helps, but it doesn't give you a giant boost. I[00:31:34] Alessio: feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to state of the art, right? It's like, state of the art for who and for what?[00:31:45] Alessio: I think online LLMs are going to be state of the art for, you know, news-related activity that you need to do. Like social media, right? You want to have all the latest stuff. But coding, science,[00:32:01] swyx: Yeah, but I think sometimes you don't know what news is affecting what.[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making, that might affect your results. Like, just being connected online means that you get to invalidate your knowledge.
And when you're just using an offline LLM, it's never invalidated.[00:32:27] swyx: I[00:32:28] Alessio: agree, but I think, going back to your point of standing the test of time, sometimes you can get swayed by the online stuff. Like, hey, you ask a question about, say, an AI research direction, and all the recent news is about this one thing, so the LLM focuses on bringing those things up in its answer.[00:32:50] swyx: Yeah, so I think it's interesting, but I don't know if I'd bet heavily on this.[00:32:56] Alessio: Cool. Was there one that you forgot to put, or a new direction? Yeah,[00:33:01] swyx: so this brings us into sort of February-ish.[00:33:05] OpenAI Sora and why everyone underestimated videogen[00:33:05] swyx: So I published this piece, and then Feb 15 came with Sora. And the one thing I did not mention here was anything about multimodality.[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle with it. And my cop-out is that I focused this research directions piece on LLMs, because LLMs are the source of, quote unquote, AGI. Everything else is kind of related to that. Like, just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.[00:33:49] swyx: And so I was just trying to focus on what is going to get us superhuman reasoning that we can rely on to build agents that automate our lives and blah, blah, blah, you know, give us this utopian future. But I do think that everybody underestimated the sheer importance and cultural human impact of Sora,[00:34:10] swyx: and, you know, really actually good text-to-video. Yeah.
Yeah.[00:34:14] Alessio: And I saw Jim Fan had a very good tweet about why it's so impressive. And I think when somebody leading the embodied research at NVIDIA says that something is impressive, you should probably listen. So yeah, I think you mentioned impacting the world, you know, that we live in.[00:34:33] Alessio: I think that's kind of the key, right? The LLMs don't have a world model, and Yann LeCun can come on the podcast and talk all about what he thinks of that. But I think Sora was the first time where people were like, oh, okay, you're not statically putting pixels of water on the screen, which you can kind of project without understanding the physics of it.[00:34:57] Alessio: Now you have to understand how the water splashes when you have things in it. And even if it just learned that by watching video and not by actually studying the physics, it still knows it, you know? So I think that's a direction that, yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating, I think it always starts with generating, right?[00:35:19] Alessio: But the interesting part is understanding it. You know, there's the video of the ship in the water that they generated with Sora. If you gave it the video back, and now it could tell you why the ship is too rocky, or why the ship is sinking, then that's like, you know, AGI for all your rig deployments and all this stuff, you know. But there's none of that yet, so[00:35:44] Alessio: hopefully they announce it and talk more about it. Maybe at Dev Day this year, who knows.[00:35:49] swyx: Yeah, who knows, who knows. I'm talking with them about Dev Day as well.
So I would say the phrasing that Jim used, which resonated with me, is he kind of called it a data-driven world model. I somewhat agree with that.[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan[00:36:04] swyx: I am more on the Yann LeCun side than on Jim's side, in the sense that I think that is the vision, or the hope, that these things can build world models. But, you know, clearly even at the current Sora size, they don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear, and chairs will appear and disappear.[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models, in the sense that, you know, in classic machine learning, when you have too many parameters, you will overfit, and that fails to match reality, and therefore fails to generalize well.[00:36:50] swyx: And like, what scale of data do we need in order to learn world models from video? A lot. Yeah. So I am cautious about taking this interpretation too literally. Obviously, you know, I get what he's going for, and he's obviously partially right. Like, transformers and these neural networks are universal function approximators; theoretically they could figure out world models. It's just, how good are they, and how tolerant are we of hallucinations? We're not very tolerant. So it's gonna bias us toward creating very convincing things, but then not creating the useful world models that we want.[00:37:37] swyx: At the same time, what you just said, I think, made me reflect a little bit. We just got done saying how important synthetic data is, mm-hmm, for training LLMs.
And so if this is a way of generating synthetic video data for improving our video understanding, then sure, by all means. Which we actually know, like, GPT-4 Vision and DALL-E were kind of co-trained together.[00:38:02] swyx: And so maybe this is on the critical path, and I just don't fully see the full picture yet.[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back in time with Sora, and Newton hasn't figured out gravity yet. Would Sora help you figure it out?[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can pick up things. Humans have a lot of intuition, but if you ask the average person about the physics of a fluid around a boat, they wouldn't be able to tell you the physics, but they can observe it. And humans can only observe this much, you know, versus now you have these models to observe everything, and then they generalize these things, and maybe we can learn new things through the generalizations that they pick up.[00:38:55] swyx: And it might be more observant than us in some respects. In some ways we can scale it up a lot more than the number of physicists that we had available at Newton's time. So like, yeah, absolutely possible that this can discover new science. I think we have a lot of work to do to formalize the science.[00:39:11] swyx: And then I think the last part is, you know, how much do we cheat by generating data from Unreal Engine 5? Mm-hmm. Which is what a lot of people are speculating, with very, very limited evidence, that OpenAI did.
The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults[00:39:37] swyx: of, like, walking speed and character creation choices. And I was like, okay, that's actually pretty convincing that they used Unreal Engine to bootstrap some synthetic data for this training set. Yeah,[00:39:52] Alessio: could very well be.[00:39:54] swyx: Because then you get the labels and the training data side by side.[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of video generation and spacetime transformer that also involves probably a lot of synthetic data as well. And so this is of a kind, in the sense of, oh, really good generative video is here, and it is not just the one-to-two-second clips that we saw from other people, like Pika and Runway. Cristóbal Valenzuela from Runway was like, game on. Which, okay, but let's see your response, because we've heard a lot about Gen-1 and Gen-2, but it's nothing on this level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in spelling out: here are the individual industries that this can impact.[00:41:00] swyx: And anyone who's interested in generative video should look at that. But also be mindful that probably when OpenAI releases a Sora API, the ways you can interact with it will be very limited.
Just like the ways you can interact with DALL-E are very limited, and someone is gonna have to make an open Sora[00:41:19] swyx: for you to create ComfyUI pipelines.[00:41:24] Alessio: The Stability folks said they wanna build an open Sora competitor. But yeah, Stability, their demo video was like so underwhelming. It was just two people sitting on the beach,[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I think what is confusing a lot of people about Stability is that they're pushing a lot of things: Stable Code, StableLM, and Stable Video Diffusion. But like, how much money do they have left? How many people do they have left?[00:41:51] swyx: Yeah. Emad spent two hours with me, reassuring me things are great. And I do believe that they have really, really quality people. But I also have a lot of very smart people on the other side telling me, like, hey man, don't put too much faith in this thing.[00:42:11] swyx: So I don't know who to believe. Yeah.[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can. Yeah, Groq.[00:42:19] Groq Math[00:42:19] Alessio: We can[00:42:19] swyx: do a bit of Groq prep. We're about to go talk to Dylan Patel. Maybe we'll use the audio in here, I don't know. It depends what we get up to later. What do you, as an investor, think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It was actually one of his, like, 20 percent projects.
It's like, he was just on the side, dooby doo, created the TPU.[00:42:46] Alessio: But yeah, basically, Groq had this demo that went viral, where they were running Mistral at like 500 tokens a second, which is the fastest of anything you have out there. The question, you know, the memes were like, is NVIDIA dead? People don't need H100s anymore. But I think there's a lot of money that goes into building what Groq has built as far as the hardware goes.[00:43:11] Alessio: We're gonna put some of the notes from Dylan in here, but basically the cost of the Groq system is like 30 times the cost of the H100 equivalent. So,[00:43:23] swyx: let me put some numbers, because me and Dylan were, I think, the two people who actually tried to do Groq math. Spreadsheet doors.[00:43:30] swyx: So, okay, oh boy. The equivalent H100 system for Llama 2 is 300,000 dollars, for a system of 8 cards. And for Groq it's 2.3 million, because you have to buy 576 Groq cards. So yeah, that just gives people an idea. So if you depreciate both over a five-year lifespan, per year you're depreciating 460K for Groq, and 60K a year for H100.[00:43:59] swyx: So Groqs are just way more expensive per model that you're hosting. But then you make it up in terms of volume. So I don't know if you want to[00:44:08] Alessio: cover that. I think one of the promises of Groq is super high parallel inference on the same thing. So you're basically saying, okay, I'm putting in this upfront investment on the hardware, but then I get much better scaling once I have it installed.[00:44:24] Alessio: I think the big question is how much you can sustain the parallelism.
You know, if you're going to get 100 percent utilization rate at all times on Groq, like, it's just much better, you know, because at the end of the day, the tokens-per-second cost that you're getting is better than with the H100s. But if you get to like 50 percent utilization rate, you will be much better off running on NVIDIA. [00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Qatar. He just gave a talk there yesterday that I haven't listened to yet. [00:45:09] Alessio: I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but [00:45:16] swyx: hopefully. Groq's social media person is just very friendly. They, yeah. Hopefully [00:45:20] Alessio: we can get them. Yeah, we're gonna get him. We [00:45:22] swyx: just call him out. And so basically the key question is, like, how sustainable is this, and how much of this is a loss leader. The entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens for Mixtral or Llama 2. This matches DeepInfra, and I think that's about it in terms of that low. [00:45:47] swyx: And we think the break-even for H100s is 50 cents, at a normal utilization rate. To make this work, so in my spreadsheet I made this work, you have to have like a parallelism of 500 requests all simultaneously, and you have model bandwidth utilization of 80%. [00:46:06] swyx: Which is way high. I just gave them high marks for everything.
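The back-of-envelope numbers from the conversation can be checked in a few lines. This is only a sketch of the hosts' spreadsheet math; all figures are their quoted estimates from the episode, not vendor pricing.

```python
# Groq-vs-H100 depreciation sketch, using the episode's estimates.
# All dollar figures below are the hosts' back-of-envelope numbers.

H100_SYSTEM_COST = 300_000    # 8x H100 system serving Llama 2, per the episode
GROQ_SYSTEM_COST = 2_300_000  # 576 Groq cards for the same model, per the episode
LIFESPAN_YEARS = 5            # straight-line depreciation horizon

h100_per_year = H100_SYSTEM_COST / LIFESPAN_YEARS  # $60K/year
groq_per_year = GROQ_SYSTEM_COST / LIFESPAN_YEARS  # $460K/year

print(f"H100 depreciation/year: ${h100_per_year:,.0f}")
print(f"Groq depreciation/year: ${groq_per_year:,.0f}")

# Groq's public price vs the hosts' estimated H100 break-even:
GROQ_PRICE_PER_M = 0.27       # $/million tokens, Groq's stated pricing
H100_BREAK_EVEN_PER_M = 0.50  # $/million tokens at "normal" utilization (estimate)

# For Groq to come out ahead despite ~7.7x the yearly hardware cost, the
# episode's spreadsheet assumed ~500 simultaneous requests and 80% model
# bandwidth utilization, which swyx calls "way high."
```

Run as-is, it just reproduces the $60K vs $460K per-year comparison quoted above.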
Groq has two fundamental tech innovations that they hang their hats on in terms of, like, why we are better than everyone, even though, like, it remains to be independently replicated. One, you know, they have this sort of entire-model-on-the-chip idea, which is like, okay, get rid of HBM [00:46:30] swyx: and, like, put everything in SRAM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, then you just save on that time, and that's why they're faster. So, a lot of people buy that as, like, the reason that you're faster. [00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers, that they also attribute towards their higher utilization. So I give them 80 percent for that. And so that all works out to, like, okay, base costs, I think you can get down to, like, maybe 20-something cents per million tokens. [00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of favorable assumptions for this to work. [00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later. [00:47:16] swyx: So he was like completely opposite of me. He's like, they're just burning money. Which is great. [00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars [00:47:22] Alessio: Gemini, want to do a quick run through, since this touches on all the four wars. [00:47:28] swyx: Yeah, and I think this is the mark of a useful framework: that when a new thing comes along, you can break it down in terms of the four wars and sort of slot it in, or analyze it in those four frameworks, and have nothing left. [00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks.
So, what is Gemini 1.5 Pro? It is the newest model, which came out one week after Gemini 1.0, which is very interesting. [00:48:01] swyx: They have not really commented on why. The headline feature of this release is that it has a 1 million token context window that is multimodal, which means that you can put all sorts of video and audio and PDFs natively in there alongside text, and, you know, it's at least 10 times longer than anything that OpenAI offers, which is interesting. [00:48:20] swyx: So it's great for prototyping, and it has interesting discussions on whether it kills RAG. [00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context, and RAG is better economics. But I think it all comes down to how the price curves change, right? [00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today, in two years, three years, it's just gonna cost a lot less, you know? [00:49:02] Alessio: So now it's like, why would I use RAG and, like, go through all of that? It's interesting. I think RAG is better cutting-edge economics for LLMs. I think large context will be better long-tail economics when you factor in the build cost of, like, managing a RAG pipeline. But yeah, the recall was the most interesting thing, because we've seen the needle-in-the-haystack things in the past, but apparently they have 100 percent recall on anything across the context window. [00:49:28] Alessio: At least they say. Nobody has used it. No, people [00:49:30] swyx: have.
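The price-curve trade-off Alessio describes can be put into a toy calculation. Everything here is a made-up illustrative number (the per-token price and token counts are not real pricing for any model), just to show why RAG wins per-query today and where the crossover argument comes from.

```python
# Toy per-query cost comparison: long context vs RAG.
# The price and token counts are hypothetical, for illustration only.

PRICE_PER_M_TOKENS = 1.00  # hypothetical input price, $ per million tokens

def query_cost(context_tokens, price_per_m=PRICE_PER_M_TOKENS):
    """Dollar cost of one query given how many input tokens you send."""
    return context_tokens / 1_000_000 * price_per_m

long_context = query_cost(1_000_000)  # dump the whole corpus into context
rag = query_cost(4_000)               # retrieve a few relevant chunks instead

print(f"Long-context query: ${long_context:.4f}")
print(f"RAG query:          ${rag:.4f}")

# The episode's point: RAG is better per-query economics now, but if token
# prices keep falling while RAG-pipeline build/maintenance cost grows, the
# long-tail economics can tip toward just using the long context window.
```

This is deliberately one-sided: it ignores the engineering cost of the RAG pipeline itself, which is exactly the term Alessio argues changes the long-tail math.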
Yeah, so this needle-in-a-haystack thing, for people who aren't following as closely as us, is that someone (I forget his name now) created this needle-in-a-haystack problem where you feed in a whole bunch of generated junk (not junk, but just, like, generated data) and ask it to specifically retrieve something in that data, like one line in a hundred thousand lines where it has a specific fact, and if you get it, you're good. [00:49:57] swyx: And then he moves the needle around: does your ability to retrieve that fact vary if I put it at the start versus in the middle versus at the end? And then you generate this really nice chart that kind of shows the recall ability of a model. And he did that for GPT and Anthropic, and showed that Anthropic did really, really poorly. [00:50:15] swyx: And then Anthropic came back and said it was a skill issue: just add these, like, four magic words, and then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was, yeah, we reproduced their, you know, haystack test for Gemini, and it's good across all languages, [00:50:30] swyx: across the whole one million token window. Which is very interesting, because usually for typical context extension methods like RoPE or YaRN or, you know, ALiBi, it's lossy by design. Usually for conversations that's fine, because we are lossy when we talk to people, but for superhuman intelligence, perfect memory across very, very long context [00:50:51] swyx: is very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this. So what you do is you upload, let's say, all of Harry Potter and you change one fact in one sentence, somewhere in there, and you ask it to pick it up, and it does.
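The test swyx describes is simple to sketch as a harness. This is a minimal illustration, not the original author's code: `ask_model` is a stand-in for whatever LLM API you are evaluating, and the filler text, needle, and question are made-up values.

```python
# Minimal needle-in-a-haystack harness, as described in the episode.
# `ask_model(prompt) -> str` is your LLM call; everything else is synthetic.

def build_haystack(filler_lines, needle, depth):
    """Insert the needle line at a relative depth (0.0 = start, 1.0 = end)."""
    lines = list(filler_lines)
    lines.insert(int(depth * len(lines)), needle)
    return "\n".join(lines)

def run_needle_test(ask_model, filler_lines, needle, answer, depths):
    """Return {depth: bool} for whether the model retrieved the planted fact."""
    results = {}
    for depth in depths:
        context = build_haystack(filler_lines, needle, depth)
        prompt = context + "\n\nWhat is the magic number mentioned above?"
        results[depth] = answer in ask_model(prompt)
    return results

# Toy run with a fake "model" that just echoes the prompt back, standing in
# for a real API call, to show the shape of the harness:
filler = [f"Line {i}: some generated filler text." for i in range(1000)]
needle = "The magic number is 48721."
scores = run_needle_test(lambda p: p, filler, needle, "48721",
                         depths=[0.0, 0.25, 0.5, 0.75, 1.0])
```

Sweeping `depths` and plotting `scores` per depth (and per context length) reproduces the "really nice chart" the transcript mentions.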
So this is legit. [00:51:08] swyx: We don't super know how, because, yes, it's slow to inference, but it's not slow enough that it's, like, running five different systems in the background without telling you. Right. So it's something interesting that they haven't fully disclosed yet. The open source community has centered on this Ring Attention paper, which is created by your friend Matei Zaharia and a couple other people. [00:51:36] swyx: And it's a form of distributing the compute. I don't super understand, like, why calculating the feedforward network and attention in blockwise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context. [00:51:59] swyx: They said it was good for, like, 10 to 100 million tokens, which is just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now. Yes, or we still care as much about RAG, but, like, now it's not important in prototyping. [00:52:21] swyx: And then, for the data war, I guess this is just part of the overall training dataset, but Google made a $60 million deal with Reddit, and presumably they have deals with other companies. For the multimodality war, we can talk about the image generation crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section. [00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf, and it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful. [00:53:04] swyx: Actually ties into the conversation that we had with David Luan from Adept.
In the sense of, like, okay, what if video was the main modality instead of text as the input? What if everything was video in, because that's how we work? Our eyes don't actually read; our brains don't get inputs as characters. [00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is doing, which is driving by vision model instead of driving by raw text understanding of the DOM. And in that episode, which we haven't released, I made the analogy to, like, self-driving by lidar versus self-driving by camera. [00:53:52] swyx: Mm-hmm. Right? Like, I think what Gemini and any other super-long-context model that is multimodal unlocks is: what if you just drive everything by video? Which is [00:54:03] Alessio: cool. Yeah, and that's Joseph from Roboflow. It's like, anything that can be seen can be programmable with these models. [00:54:12] Alessio: You mean [00:54:12] swyx: the computer vision guy is bullish on computer vision? [00:54:18] Alessio: It's like the RAG people. The RAG people are bullish on RAG and not long context. I'm very surprised. The fine-tuning people love fine-tuning instead of few-shot. Yeah. Yeah. And yeah, I think the Ring Attention thing, and how they did it, we don't know. And then they released the Gemma models, which are, like, 2 billion and 7 billion parameter open models, which people said are not good, based on my Twitter experience, which are the GPU-poor crumbs. It's like, hey, we did all this work for us because we're GPU-rich, and we're just going to run this whole thing. And
Welcome to Art is Awesome, the show where we talk with an artist or art worker with a connection to the San Francisco Bay Area. Today, Emily chats with Colombia-born & Bay Area photographer and installation artist, Marcel Pardo Ariza. About Artist Marcel Pardo Ariza: Marcel Pardo Ariza (they/them) is a trans visual artist, educator and curator who explores the relationship between queer and trans kinship through constructed photographs, site-specific installations and public programming. Their work is rooted in close dialogue and collaboration with trans, non-binary and queer friends and peers, most of whom are performers, artists, educators, policymakers, and community organizers. Their practice celebrates collective care and intergenerational connection. Their work is invested in creating long-term interdisciplinary collaborations and opportunities that are non-hierarchical and equitable. Their work has recently been exhibited at the McEvoy Foundation for the Arts; Crystal Bridges Museum of American Art; Palo Alto Art Center; San Francisco Arts Commission Galleries; Yerba Buena Center for the Arts; Palm Springs Art Museum; and the Institute of Contemporary Art San José. Ariza is the recipient of the 2022 SFMOMA SECA Award; the 2021 CAC Established Artists Award; the 2020 San Francisco Artadia Award; a 2018-19 Alternative Exposure Grant; the 2017 Tosa Studio Award; and a 2015 Murphy & Cadogan Contemporary Art Award. Ariza is a studio member at Minnesota Street Project, and the co-founder of Art Handlxrs*, an organization supporting queer, BIPOC, women, trans and non-binary folks in professional arts industry support roles. They are currently a lecturer at California College of the Arts and San Francisco State University, and based in Oakland, CA. Follow Marcel on Instagram: @MarcelPardoA. Marcel's 500 Capp Street exhibit, Orquídeas, is on view now through February 17. CLICK HERE for more info.
Visit Marcel's Website: MarcelaPardo.com -- About Podcast Host Emily Wilson: Emily is a writer in San Francisco, with work in outlets including Hyperallergic, Artforum, 48 Hills, the Daily Beast, California Magazine, Latino USA, and Women's Media Center. She often writes about the arts. For years, she taught adults getting their high school diplomas at City College of San Francisco. Follow Emily on Instagram: @PureEWil. Follow Art Is Awesome on Instagram: @ArtIsAwesome_Podcast -- CREDITS: Art Is Awesome is Hosted, Created & Executive Produced by Emily Wilson. Theme Music "Loopster" Courtesy of Kevin MacLeod (incompetech.com), Licensed under Creative Commons: By Attribution 4.0 License. The Podcast is Co-Produced, Developed & Edited by Charlene Goto of @GoToProductions. For more info, visit Go-ToProductions.com
Episode 2 this week covers the U12s' massive SFAI Cup final this weekend.
What dishes can we make from turnips? The winner of the turnip-cooking contest at the Miklósfa Turnip Festival, one of the most successful events of recent years, shares the winning recipes with us: sweet and savory bites, traditional and unusual dishes. | Editor: Pál Amanda
Vadkertiné Tóth Mariann is an enthusiastic and devoted fan of the turnip, and one of the last people who knows the secret of pickling Miklósfa turnips. What is this vegetable, how is it pickled in Zala and in the Őrség, and how does the Miklósfa technique differ from other methods? | Editor: Pál Amanda
In this episode we speak with Nando Alvarez-Perez. We reflect on our previous exhibition, post industrial digital dysmorphya, and discuss the politics of being a triplet, when the right time to "retire" a body of work is, image selection and the flattening of history, the indirect impact Walter Benjamin has had on his practice, deskilling, doodles, and recent activities at Lightwork's residency program, Cornelia Magazine, and the Buffalo Institute for Contemporary Art. Nando Alvarez-Perez is a native of Buffalo, New York. In 2014 he graduated from SFAI, where he received the Master of Fine Arts Fellowship in Photography. He uses his work to investigate the boundaries between the personal and the political, the fitness of psychology for ideology, the discrepancies between history and biography, and the relationship between memory, meaning, and place. His practice extends to his work as a founding director of The Buffalo Institute for Contemporary Art, an art and education nonprofit that models how culture can sustain communities through focused, practical engagements with contemporary art, and as editor-in-chief of Cornelia, a visual art review published three times a year for the Western New York and Southern Ontario region. He is currently a visiting professor at Alfred University.
In this episode of PhotoWork with Sasha Wolf, Sasha and photographer Mimi Plumb talk about the experience of organizing and editing work from over 30 years ago into books that are meaningful and relevant today. They also discuss the political and autobiographical nature of Mimi's work and how that still motivates her to make work today. https://www.mimiplumb.com https://www.instagram.com/mimi_plumb/ Aperture PhotoBook Club with Wendy Red Star: https://aperture.org/events/aperture-photobook-club-wendy-red-star-delegation/ Mimi Plumb is part of a long tradition of socially engaged photographers concerned with California and the West. In the 1970s, Plumb explored subjects ranging from her suburban roots to the United Farm Workers movement in the fields as they organized for union elections. Her first book, Landfall, published by TBW Books in 2018, is a collection of her images from the 1980s, a dreamlike vision of an American dystopia encapsulating the anxieties of a world spinning out of balance. Landfall was shortlisted for the Paris Photo/Aperture Foundation First Photobook Award 2019, and the Lucie Photo Book Prize 2019. Her second book, The White Sky, a memoir of her childhood growing up in suburbia, was published by Stanley/Barker in September 2020. The Golden City, her third book, published by Stanley/Barker in March 2022, focuses on her many years living in San Francisco. Plumb is a 2022 Guggenheim Fellow and a 2017 recipient of the John Gutmann Photography Fellowship. She has received grants and fellowships from the California Humanities, the California Arts Council, the James D. Phelan Art Award in Photography, and the Marin Arts Council. Her photographs are in the collections of the San Francisco Museum of Modern Art, Art Collection Deutsche Börse in Germany, Los Angeles County Museum of Art, Pier 24, Museum of Fine Arts, Boston, Daum Museum of Contemporary Art, and the Yale University Art Gallery.
Plumb received her MFA in Photography from SFAI in 1986, and her BFA in Photography from SFAI in 1976. Born in Berkeley, and raised in the suburbs of San Francisco, Mimi Plumb has served on the faculties of the San Francisco Art Institute, San Jose State University, Stanford University, and the School of the Art Institute of Chicago. She currently lives in Berkeley, California. Find out more at https://photowork.pinecast.co
In episode four of Are you listening?, co-curators Margaret Tedesco & Leila Weefur discuss the history of the Sculpture Department and its cast of characters. They discuss the department's long history, reaching far and wide from the Bay Area and beyond. Featured in this episode are the voices of alumni and faculty María Elena González, Carrie Hott, Mildred Howard, Michael Arcega, Catherine Fairbanks, Kija Lucas, John Roloff, Lucas Murgida, and Brett Reichman on John DeFazio. The music throughout this podcast is by two alumni: Tommy Becker (Interdisciplinary, 1999), and Jonathan Holland (Painting, 2001) from the band Tussle. Becker's track is titled "Newfound Freedom" from the soundtrack of "Tape Number One," recorded on a four-track recorder in a walk-in closet on Market St in 2001. The track from Tussle is titled "Don't Stop" from Don't Stop, EP (Troubleman Unlimited, 2004). Our beautifully rendered portraits are by Amanda Kirkhuff (Painting, 2006).
Dr. Rachel Schreiber currently serves as the Executive Dean of Parsons School of Design at The New School. An experienced administrator with a record of creating cross-institutional partnerships, faculty leadership, board development, and fundraising, Rachel is passionately committed to diversity, equity, and inclusion in higher education. She is a visual artist, designer, and publishing historian with an extensive record of exhibitions, peer-reviewed articles, and publications with academic presses. Her most recent book, Elaine Black Yoneda: Jewish Immigration, Labor Activism, and Japanese American Exclusion and Incarceration, was issued by Temple University Press in 2021. Dr. Schreiber joined The New School as Executive Dean of Parsons School of Design in July 2019, following more than 26 years in senior leadership and faculty roles at the San Francisco Art Institute, California College of the Arts, Maryland Institute College of Art, and other institutions. Most recently, Rachel served as Provost and Senior Vice President, and as Interim President, at SFAI. An American gender historian, artist, and designer, Rachel has taught design, studio arts, and interdisciplinary humanities at all levels, from first year through graduate studies.
The season is over for the Laois senior hurlers as they were well beaten by Westmeath at the weekend. Their relegation to the Joe McDonagh Cup is not yet confirmed but regardless there are many problems to solve for all involved. Steven Miller and Alan Hartnett discuss it all while we also hear from manager Seamas 'Cheddar' Plunkett. The Laois minor footballers lost out to Wexford last week as their journey came to an end. While the weekend will go down in history for all involved in soccer in Laois. Portlaoise won an SFAI title with their magical U-14 girls against Greystones United. The Senior team won the Lummy-O'Reilly Cup while Towerhill Rovers claimed the CCFL Division 1 title.
In this episode, Christopher picks up where he left off in Part 1. He tried college up north for a couple years, but that ended when he lost his scholarship. His dad knew a guy at the San Francisco Art Institute and encouraged Christopher to come see the school. The idea was that he would finish his education learning how to make movies. On that visit, Christopher met George Kuchar, who would later become Christopher's mentor. He went on to get a BFA from SFAI. We chat about the various neighborhoods he lived in back in those days and the stories that came with them. Then Christopher tells us all about some of the fights he was in here in The City when he was a kid, one on a moving 22-Fillmore. Christopher ended up graduating from SFAI, and the only person he had at the ceremony was his brother Nicolas (Cage). Afterward, the two went out on the town to celebrate. We back up a bit to hear the story of how Christopher's parents ended up in Southern California. His mom's family came from Illinois. And his dad's ancestors came from southern Italy to the U.S. August came to UCLA, where he met Christopher's mom. Her family had an in-law house, and soon, August's brother Francis lived in it. After graduating, Christopher made some films that he describes as "maybe pretentious," but Nicolas's agent liked them. They wanted him to come back to SoCal, but he wasn't interested. He got involved with producer Dino De Laurentiis, and shares some of those stories with us. Christopher was able to navigate pressures from outside and get some of his more arty cinematic techniques into his early movies. Next Christopher contrasts his lives in San Francisco and Los Angeles/Long Beach. Today, he lives mostly in the Bay Area and teaches at SFAI, which he talks about. Then he shares the story of how he and his sons made Sammy & Quinn, his most recent short. We end this episode with Christopher's thoughts on what it means to still be in San Francisco.
We recorded this episode at the San Francisco Art Institute in April 2022. Photography by Jeff Hunt
William Sarradet and Brandon Zech talk about Jeff Koons' ploy to send artwork to the moon, and discuss the Supreme Court's decision to hear a fair use case against Andy Warhol. "In some instances, the judges, in their opinions in these cases, delve into a sort of art criticism." See related readings here: https://glasstire.com/2022/04/10/art-dirt-jeff-koons-moonshot-warhols-fair-use-case-goes-to-the-supreme-court This week's podcast is sponsored in part by SFAI and Littleglobe, two Santa Fe-based arts nonprofits committed to collaboration as a way to support artists, creative practitioners, and culture bearers. “Santa Fe Stories from the Inside Out” are Littleglobe TV (LGTV) episodes and SFAI Tilt podcasts that highlight the histories and experiences of the people who make Santa Fe a diverse, creative place to live and work. Stay tuned for the upcoming episodes: LGTV on April 13th and Tilt on April 22nd! Learn more here: https://www.littleglobe.org If you enjoy Glasstire and would like to support our work, please consider donating. As a nonprofit, all of the money we receive goes back into our coverage of Texas art. You can make a one-time donation or become a sustaining, monthly donor here: https://glasstire.com/donate
In episode 204 UNP founder and curator Grant Scott is in his shed reflecting on why photographers feel the need to label themselves, keeping photography simple, the importance of subject matter and trying to buy a camera. Plus this week photographer Mimi Plumb takes on the challenge of supplying Grant with an audio file no longer than 5 minutes in length in which she answers the question 'What Does Photography Mean to You?' Born in Berkeley, California and raised in the suburbs of San Francisco, Mimi Plumb received her MFA in Photography from SFAI in 1986, and her BFA in Photography from SFAI in 1976. She has served on the faculties of the San Francisco Art Institute, San Jose State University, Stanford University, and the School of the Art Institute of Chicago. Since the 1970s, Plumb has explored subjects ranging from her suburban roots to the United Farm Workers movement in the fields as they organized for union elections. Her first book, Landfall, published in 2018, is a collection of her images from the 1980s. Landfall was shortlisted for the Paris Photo/Aperture Foundation First Photobook Award 2019, and the Lucie Photo Book Prize 2019. Her second book, The White Sky, a memoir of her childhood growing up in suburbia, was published in September 2020. The Golden City, her third book, was published early this year and focuses on her many years living in San Francisco. Her photographs are in the collections of the San Francisco Museum of Modern Art, Art Collection Deutsche Börse in Germany, Los Angeles County Museum of Art, Pier 24, Museum of Fine Arts, Boston, Daum Museum of Contemporary Art, and the Yale University Art Gallery. She is a 2017 recipient of the John Gutmann Photography Fellowship, and has received grants and fellowships from the California Humanities, the California Arts Council, the James D. Phelan Art Award in Photography, and the Marin Arts Council. She lives in Berkeley, California. www.mimiplumb.com
Grant Scott is the founder/curator of United Nations of Photography, a Senior Lecturer and Subject Co-ordinator: Photography at Oxford Brookes University, Oxford, a working photographer, documentary filmmaker, BBC Radio contributor and the author of Professional Photography: The New Global Landscape Explained (Routledge 2014), The Essential Student Guide to Professional Photography (Routledge 2015), New Ways of Seeing: The Democratic Language of Photography (Routledge 2019). © Grant Scott 2022
This is #1 of a 10-part inquiry into the lives and experiences of 11 people who got the same Master of Fine Arts degree, though in various disciplines, from the same institution (the San Francisco Art Institute) in 2001, and their 20 years of reflection and experiences post-graduation. I am calling the series 'Retrospective'. In this episode I wanted to know why they chose to pursue an MFA in the first place and why they chose SFAI in particular. You will hear conversations with my fellow graduates (in no particular order): Ricardo Rivera - https://www.fresnocitycollege.edu/directory/fpca/ricardo-rivera.html Sonja Hinrichsen - http://www.sonja-hinrichsen.com Amanda Marchand - https://www.amandamarchand.com Alison Goldberg Barbara Bartos - https://vimeo.com/barbarabartos Mira Hecht - https://www.mirahecht.com Yoram Wolberger - https://www.markmoorefineart.com/artists/yoram-wolberger Lisea Lyons - https://www.lisealyons.com Erez Golan - Slalom Kfar Malal (bike shop in Israel) Peter Wu - https://peter-wu.com Bret Gottschall - https://gotty.com Matthew Dols - https://matthewdols.com Audio engineering by Mickey at CushAudio Services Music by Peat Biby Supported in part by: EEA Grants from Iceland, Liechtenstein + Norway – https://eeagrants.org And we appreciate the assistance of our partners in this project: Hunt Kastner – https://huntkastner.com Kunstsentrene i Norge – https://www.kunstsentrene.no
St Attracta's CS Tubbercurry are through to the SFAI Connacht U17 soccer final after a hard-fought 2-0 win against Corrib of Galway on Monday. Team coach Andrew Flynn spoke to Austin O'Callaghan about the win...
The San Francisco Art Institute and the University of San Francisco announced this month that they're planning to merge. Under the agreement, USF will acquire the cash-strapped 151-year-old arts college and offer a program called SFAI@USF in the fall. The move is reminiscent of Northeastern University's acquisition of Mills College in September 2021, as small colleges and arts schools deal with financial pressures compounded by Covid. We'll talk about the implications for SFAI's students and adjunct faculty, as well as for the broader arts community of the Bay Area, and look ahead at a new era for the irreverent contemporary arts school.
171 - Mimi Plumb. Born in Berkeley, California and raised in the suburbs of San Francisco, Mimi Plumb has served on the faculties of the San Francisco Art Institute, San Jose State University, Stanford University, and the School of the Art Institute of Chicago. She currently lives in Berkeley, California. Since the 1970s, Mimi has explored subjects ranging from her suburban roots to the United Farmworkers movement in the fields as they organized for union elections. Her first book, Landfall, published by TBW Books in 2018, is a collection of her images from the 1980s, a dreamlike vision of an American dystopia encapsulating the anxieties of a world spinning out of balance. Landfall was shortlisted for the Paris Photo/Aperture Foundation First Photobook Award 2019, and the Lucie Photo Book Prize 2019. Her second book, The White Sky, a memoir of her childhood growing up in suburbia, was published by Stanley/Barker in September 2020. The Golden City, her third book, due to be published by Stanley/Barker in early 2022, focuses on her many years living in San Francisco. Mimi received her MFA in Photography from the San Francisco Art Institute in 1986, and her BFA in Photography from SFAI in 1976. Her photographs are in the collections of the San Francisco Museum of Modern Art, Art Collection Deutsche Börse in Germany, Los Angeles County Museum of Art, Pier 24, Museum of Fine Arts, Boston, Daum Museum of Contemporary Art, and the Yale University Art Gallery. She is a 2017 recipient of the John Gutmann Photography Fellowship, and has received grants and fellowships from the California Humanities, the California Arts Council, the James D. Phelan Art Award in Photography, and the Marin Arts Council.
On episode 171, Mimi discusses, among other things: memories of her suburban childhood in California; her book, The White Sky; why it took decades for her work to be published; memories of the dust-bowl drought and the theme of climate change; Chernobyl and her childhood insomnia triggered by a fear of nuclear war; her first book, Landfall, about the 80s; her tendency to shoot people's backs; her 70s project on the United Farm Workers Union, Pictures from the Valley; the enthusiastic critical reception that both Landfall and The White Sky were met with; her soon-to-be-published book The Golden City; working with publisher Stanley/Barker; and having no idea what to do with her colour work on women and girls.
Referenced: Diane Arbus; the Farm Security Administration; John Collier Jr.; the Crass song (not The Cure!) Nagasaki Nightmare; Paul Schiek and Lester Rosso - TBW Books; Rachel and Gregory Barker - Stanley/Barker publishing.
Website | Instagram
"When I picked up the camera it was like, 'oh my God', I could just play... I took to it like a fish to water. That element of photography being fun is always something that I think is really important to making work. And I still hold on to that… I want it to be a fun process."
In episode three of Are you listening?, co-curators Margaret Tedesco & Leila Weefur discuss the history of the illustrious Painting Department and its cast of characters. They discuss the department's long history, reaching far and wide from the Bay Area and beyond. Featured in this episode are the voices of alumni and faculty Pam Martin, Nina Zurier, Danielle Lawrence, Jenifer Wofford, Carlos Villa, Katherine Vetne, and Michele Foyer. Our beautifully rendered portraits are by Amanda Kirkhuff, BFA '06.
Rachel Adams is the Chief Curator and Director of Programs at the Bemis Center for Contemporary Arts. Past curatorial appointments include Senior Curator at UB Art Galleries, Curator-in-Residence at Disjecta Contemporary Art Center and Associate Curator at Arthouse at the Jones Center (now The Contemporary Austin). Adams holds an MA in Exhibition and Museum Studies from SFAI and a BFA from SAIC. Her areas of research are varied but include a focus on the crossover between contemporary art and architecture, performance and video and new media practices. Select exhibitions include All Together, Amongst Many: Reflections on Empathy, Paul Mpagi Sepuya: Drop Scene, Claudia Wieser: Generations (co-curated), Alison O'Daniel: Heavy Air, Jillian Mayer: TIMESHARE, The Language of Objects, Wanderlust: Actions, Traces, Journeys 1967-2017 and Introducing Tony Conrad: A Retrospective (co-curated). Forthcoming projects include exhibitions with Maya Dunietz and the 2023 group exhibition Presence in the Pause. Installation view: All Together, Amongst Many: Reflections on Empathy at Bemis Center for Contemporary Arts, 2021 Installation view: Alison O'Daniel: Heavy Air at Bemis Center for Contemporary Arts, 2019
In episode two of Are you listening?, Margaret Tedesco & Leila Weefur discuss the history of legendary Bay Area artist's model Florence "Flo" Wysinger Allen. They discuss Flo's long history at SFAI and the incredible mark she left on the San Francisco Bay Area art community. Featured in this episode is an archived audio interview between Flo and Michael Leonard, an MA student at San Francisco State University in the 1980s. Accompanying this interview is a story from Joy Episalla, who details her personal relationship with Flo at the California College of the Arts. Our beautifully rendered portraits are by Amanda Kirkhuff, BFA '06.
Ingrid V. Wells earned her MFA from San Francisco Art Institute and her BFA from Arizona State University. Her work fancies the fantastic and humorous in theme and the charming, the kitschy, and the celebrity in subject. Wells's paintings investigate the world of gendered consumerism and the ethics of fascination. Her work has been shown in the Bay Area at Voss Gallery, New York at the Untitled Space, PULSE Miami with Treat Gallery, and internationally in South Korea at the CICA Museum. Her work has been featured by The Jealous Curator, The Huffington Post, Daily Mail, BUST Magazine, El País, Create! Magazine and Teen Vogue, among others. Wells is a multiple-time grant recipient from the Center for Cultural Innovation. She manages San Francisco Artists Studios, enjoys teaching advanced painting courses with SFAI's Public Education program, and runs TWIRL: A Decade of Artists Interviews. Wells currently lives and works in San Francisco. www.createmagazine.com/podcast
Meet Margaret and Leila, the co-hosts of this podcast series and the co-curators of A Spirit of Disruption, an exhibition that reflects on the school's profound and sustained influence on contemporary art and highlights the contributions of generations of diverse artists and individuals often overlooked in the historical narrative of SFAI. This first episode is an introduction acknowledging the scope of SFAI's legacy and the vast number of histories, people, and works of art surrounding it. See Father Guido Sarducci tell us why you shouldn't miss the boat here. Watch Anja's full speech from their 2014 commencement address here; it begins at 1:14:50. Our beautifully rendered portraits are by Amanda Kirkhuff, BFA '06.
In this interview, Thibault speaks with San Francisco-based artist, educator, and writer Danielle Lawrence. Original airdate May 5, 2020, presented in conjunction with Minnesota Street Project. Danielle Lawrence is a San Francisco-based artist whose work merges unconventional materials, painted imagery, and three-dimensional form to renegotiate painting's traditional anatomy and definition. Her practice addresses the conceptual nature of hybridity by scrambling long-standing divisions and arguments between painting and sculpture, abstraction and representation, and craft and fine art. Reworking painting's physicality creates a fluid approach to its historical form, surface, and materiality, opening up sites of potential to explore sexuality, gender, and class.
Links
Danielle Lawrence website
Support Mental Health First Oakland, a grassroots initiative to reduce police presence in Oakland and support people experiencing a mental health crisis.
Get 50% off Quickbooks Online or Quickbooks Self-Employed for the first 6 months using this special referral link: https://quickbooks.grsm.io/sarahThibault.
Create and ship artist prints, custom-designed t-shirts and more using Printful.
About
Season 1 of the Artists + Travel podcast is an archive of previously published interviews recorded between April and May 2020. Artist and writer Sarah Thibault reached out to creative people all over the world to find out about their experiences during the early days of the COVID pandemic. The aim of the conversations was two-fold: to share the unique perspectives that arose from different global responses to the spread of the virus, and to unearth the commonalities in these experiences. Artists + Travel began as a travel blog for artists that Thibault created in 2018 as a way to document her two-plus years living as a nomad and attending artist residencies abroad.
Go here to sign up for her newsletter: https://sarahthibault.com/about/
Instagram: @sarah_thibault
Websites: artiststravel.space / sarahthibault.com
Credits: Music composed and performed by Ulysses Noë
This episode is sponsored by Anchor: The easiest way to make a podcast. https://anchor.fm/app
Support this podcast: https://anchor.fm/sarah-thibault11/support
In this episode, Thibault speaks with Oakland-based artist, educator, former gallery director, and entrepreneur Elizabeth Bernstein. Original airdate May 7, 2020, in conjunction with the Minnesota Street Project. Bernstein was visiting faculty in the photography department at the San Francisco Art Institute, and was, until its close in September 2019, the Gallery Director of Royal Nonesuch Gallery in Oakland, CA. Her photography work has been shown across the US. She is now the owner of Maker's Loft in Oakland.
Links
Makers Loft in Oakland, CA
Elizabeth Bernstein's website
The San Francisco Art Institute That Could Have Been by Sarah Hotchkiss
Support Mental Health First Oakland, a grassroots initiative to reduce police presence in Oakland and support people experiencing a mental health crisis.
Support this podcast: https://anchor.fm/sarah-thibault11/support
This week we try to crack the case on a mystery residency in Oregon. We also talk about whether drive-in movie theaters are open. Plus Kate invites artists to think about the danger of standing in the middle of a baseball game with no glove. Sorry for all the hand sounds in this episode; just pretend you're listening from under a table.
All the music in this episode is by Burd Hauz
Return the Eye by DROUGHT SPA presented by CLOACA
UC Regents buy SF Art Institute's $19.7M debt, are now school's landlords — will SFAI's Diego Rivera mural be next to sell?
proposalsforall.com
Here's the baseball diagram.
Maysoun found a safer one!
Mimi Plumb (Berkeley, CA) has served on the faculties of the San Francisco Art Institute, San Jose State University, Stanford University, and the School of the Art Institute of Chicago. She currently lives in Berkeley, California. Since the 1970s, Plumb has explored subjects ranging from her suburban roots to the United Farm Workers movement in the fields as they organized for union elections. Her first book, Landfall, published by TBW Books in 2018, is a collection of her images from the 1980s, a dreamlike vision of American dystopia encapsulating the anxieties of a world spinning out of balance. Landfall was shortlisted for the Paris Photo/Aperture Foundation First Photobook Award 2019 and the Lucie Photo Book Prize 2019. Her second book, The White Sky, was published by Stanley/Barker in September 2020. Plumb received her MFA in photography from SFAI in 1986. Her photographs are in the collections of the San Francisco Museum of Modern Art, the Los Angeles County Museum of Art, Pier 24, the Museum of Fine Arts, Boston, the Daum Museum of Contemporary Art, and the Yale University Art Gallery. She is a 2017 recipient of the John Gutmann Photography Fellowship, and has received grants and fellowships from California Humanities, the California Arts Council, the James D. Phelan Art Award in Photography, and the Marin Arts Council. --- Support this podcast: https://anchor.fm/opencurtain/support
In this podcast, Lucia picks up where she left off in Part 1. She dives into her own personal art history, from always drawing as a kid and young adult to eventually attending college at the San Francisco Art Institute. Before that, while at City College, Lucia earned her associate's degree in child development and began teaching, something she continues to do to this day. While at SFAI, she and some fellow students formed the SF Poster Syndicate, a group intended "to bring art and design to many different people's movements in hopes that their message can be heard and seen more loudly." The story of how Lucia started painting murals intersects back with her activist mom and Balmy Alley in the Mission. She ends the podcast sharing the stories behind her first mural—"Mission Makeover," in Balmy Alley—and "Women of the Resistance," which she made in collaboration with the SF Poster Syndicate. To see the murals, please visit our website. Photography by Michelle Kilfeather
Hip-hop, hills, and art drew Jeremy Fish to San Francisco from 3,000 miles away. In this episode, the prolific and iconic SF artist traces his family line back to both grandfathers. One worked with his hands to make art; the other was a salesman. Jeremy sees bits of himself in both ancestors. He was born in Albany, New York, and spent most of his youth in Saratoga Springs. When it came time to go to college (in 1994), not only was The City less expensive than Boston and New York, but Jeremy also had one hell of a trip out here, which he retells in the podcast. Follow Jeremy on Twitter and Instagram, and check out his website, Silly Pink Bunnies. And check back Thursday for Part 2 to hear more of Jeremy's story. We recorded this podcast at the Doolan-Larson building at Haight and Ashbury in October 2020. To see some photos of the building, go to our website or follow us on social media. Photography by Michelle Kilfeather
Nao Bustamante is a legendary artist residing in Los Angeles, California. Bustamante's precarious work encompasses performance art, video installation, filmmaking, sculpture, and writing. The New York Times says, "She has a knack for using her body." Bustamante has presented in galleries, museums, universities, and underground sites all around the world. She has exhibited, among other locales, at the Institute of Contemporary Arts in London, the Museum of Modern Art in New York, the San Francisco Museum of Modern Art, Sundance International Film Festival/New Frontier, Outfest International Film Festival, El Museo del Barrio, the Museum of Contemporary Art, the First International Performance Biennial Deformes in Santiago, Chile, and the Kiasma Museum in Helsinki. She was also an unlikely contestant on Bravo's "Work of Art: The Next Great Artist." In 2020, Bustamante's forthcoming VR film, "The Wooden People," received an award from the Mike Kelley Foundation and will be presented at REDCAT in 2021. Bustamante is an alum of the San Francisco Art Institute, and in 2020 she was awarded an honorary doctorate from her alma mater, SFAI. She also attended the Skowhegan School of Painting and Sculpture. Currently she holds the position of Professor of Art at the USC Roski School of Art and Design, where she also serves as the Director of the MFA in Art. Artists-In-Presidents: Fireside Chats for 2020 will be released weekly via podcast, virtual gallery, and social media. To visit the virtual gallery: www.artistsinpresidents.com and follow us @artistsinpresidents. Sound design by Phoebe Unter & Nicole Kelly, featuring Mara Lazer on saxophone. Music by DASK.
The San Francisco Art Institute (SFAI), one of California's top art schools, announced its indefinite closure after operating for almost 150 years. The art school was founded in 1871 and is described as the country's only fine arts school dedicated to modern art. Last March, SFAI released a letter stating that classes would be suspended starting in May and that it would not admit students for the fall semester. In addition, the school's faculty and staff were advised to prepare for layoffs. Students eligible for graduation this semester will still receive their academic degrees from the school; however, those who are not able to graduate were encouraged to transfer to another school. SFAI's poor financial situation was a major factor behind the school's decision. Three years ago, the art school spent millions of dollars to build a second campus in San Francisco, and this year the school built new dormitories. These expenses caused the school to go into debt, and SFAI's board chair estimated the school's total debt to be around $19 million. Months before its 150th anniversary, SFAI had been in talks with a more financially stable university about a merger; however, the transaction did not happen because of the coronavirus pandemic. To get back on its feet, the school considered selling a painting in its gallery worth $50 million. SFAI's students and faculty expressed their sadness about the announcement, but they are hoping that the school will get through this crisis. The school also released a public statement urging readers to donate to its emergency fund. Well-known members of the San Francisco art scene signed the statement and offered their support.
This week we're getting closer and closer to the anarchist uprising. Maysoun "went" to some more Zoom panels, and Kate has some micro-updates about SFAI and Dave & Buster's.
All the music in this episode is by K.K. Slider
San Francisco's Arts Community Town Hall
#BlackNewDeal
New circles!
We talked through with Zsuzsanna Pósfai exactly what a housing cooperative is and how it works, and what all of this means for someone who not only researches rental housing cooperatives but also lives in one herself, making her both researcher and subject of the model's first Hungarian experiment.
This week we talk about SFAI some more. We try to save Art Practical. Plus we come up with great pizza-related ideas.
All the music in this episode is from the soundtrack to the video game Monkey Island
Sign the petition to save SFAI!
Art Practical
Prescription for a healthy art scene
Put your art reviews in here!
"You can't steal everything," Craig Costello says, as he recounts his years in both Queens and San Francisco in the 1980s and 1990s. In many ways, Costello is right. As a graffiti writer, photographer and all around innovator, Costello, also known as KR and, of course, now known as the man behind the KRINK brand of markers and inks for not only graffiti, but fine art practices as well, has been at the forefront of multiple ways of underground culture emerging into public consciousness. These moments and stories are captured in the new book, KRINK: Graffiti, Art, and Invention, and in many ways, the title says it all. Radio Juxtapoz caught up with Costello from his home on Long Island in the midst of a pandemic, but a moment where all of us are being a bit nostalgic and mindful. Costello talked about the intricacies of NYC graffiti in the 1980s, the early rise of Mission School artists out of SFAI in San Francisco in the early 1990s and the slow evolution of his own practice that led to the now famous drip aesthetic he would go on to perfect in NYC back in the early 2000s. There is so much history in this talk; from subway cars to Barry McGee's innovative street work, a love of photography to early beginnings of ALIFE on the Lower East Side. ESPO, IRAK, Os Gemeos, KAWS, Revs + Cost... the stories, the materials, the style... it's all here. Subscribe to the Radio Juxtapoz podcast HERE. The Radio Juxtapoz podcast is hosted by FIFTH WALL TV's Doug Gillen and Juxtapoz editor, Evan Pricco. Episode 042 was recorded via Skype from San Francisco/London/NY, April 8, 2020. KRINK: Graffiti, Art, and Invention is published by Rizzoli, and available now.
In our Women's Day episode, we talked with Orsolya Pósfai, a journalist at Mérce, about the media representation of violence against women.
Photographer Chris Macias grew up in the Los Angeles area. He came to San Francisco to go to art school, and his school happened to be in the Tenderloin. In this podcast, Chris talks about arriving in the city and coming to learn San Francisco on the streets of the Tenderloin. Check back Thursday for Part 2, when Chris will go into depth about his art and his work at SFMoMA. Film photography by Michelle Kilfeather
This week I talk to Elizabeth Bernstein. Liz and I went to grad school together in San Francisco; she teaches photography at SFAI and runs the Oakland-based Royal Nonesuch Gallery. And she is Langston's mom. Liz's son has type 1 diabetes, and she shares her story of how that has impacted their lives. She also shares with us her unconventional and absolutely beautiful way of creating a family and co-parenting with her ex-partner. Liz shares her experience with postpartum anxiety and OCD, raising awareness of taking care of your mental health. I am so thankful she wanted to be on the podcast :) The link to nest, which you will be invited to join this week, is: https://mailchi.mp/mamatoto.info/nestpreview
This is a black arts and culture site. We will be exploring the African Diaspora via writing and performance, both musical and theatrical (film and stage), as well as the visual arts of Africans in the Diaspora and those influenced by these aesthetic forms of expression. I am interested in the political and social ramifications of art on society, specifically movements supported by these artists and their forebears. It is my claim that the artists are the true revolutionaries, their work honest and filled with raw unedited passion. They are our true heroes. Ashay!
8 to 9 AM: Jo Kreiter, Flyaway Productions' The Wait Room, the first episode in her Decarceration Trilogy.
9 AM: Vanguard Revisited: Poetic Politics & Black Futures at SF Art Institute, Jan. 21-Apr. 7. We speak to Jeff Gunderson, the professor who led the class that developed the Vanguard exhibit, which opens tomorrow evening, 5-8 p.m. SFAI's Walter and McBean Galleries are open to the public Tuesday 11 AM-7 PM and Wednesday-Saturday 11 AM-6 PM, and are free. Visit sfai.edu or call (415) 749-4563. SFAI's Walter and McBean Galleries are located at 800 Chestnut St., San Francisco, CA.
9:30 AM: The Last Sermon of Sister Imani by Cleavon Smith at TheatreFirst in Berkeley.
Podcast 76 is a recording from the SFAI week in Linköping, where Benjamin Flam moderated a symposium on point-of-care ultrasound for anesthesiologists. A panel consisting of Lill Blomqwist, Fredrik Hallgren, Niklas Jonsson, and Meriam Åström Aneq took questions on when, where, and how anesthesiologists should learn POCUS.
Today Doktor Blund invites an expert on malignant hyperthermia. Anna Hellblom works at the MH Centre and gives us a comprehensive overview of the subject. One image of the MH mechanism and one of the workup process can be found here (thanks, Anna!). If you are a member of SFAI, you can watch a video of Anna lecturing on MH here. Socialstyrelsen's page on […]
Bay Area composer Clark Suprynowicz is CEO/Artistic Director of Future Fires, the new 2017 SF cultural platform uniting art, music, & technology. Artists/creators from around the world produce groundbreaking work using robotics, VR, drones, and much more.
TRANSCRIPT
Speaker 1: Method to the Madness is next, and you're listening to Method to the Madness, a weekly public affairs show on KALX, Berkeley, celebrating Bay Area innovators. I'm your host, Lisa Keifer, and today I'm interviewing Clark Suprynowicz, award-winning Bay Area composer, musician, and teacher. He is now CEO and artistic director of Future Fires. He'll be talking to us [00:00:30] today about what that is. Welcome to the show, Clark. Thank you so much. I'm so happy you're on the show to tell us about Future Fires. First of all, can you explain what it is?
Speaker 2: Sure. It's hard to talk about what it is without talking about the origins. So I've noticed that art and technology is an emerging domain; you can trace its roots back to the 60s and even before that. But I think a lot of people recognize that in recent years there are just extraordinary things happening with virtual [00:01:00] reality, augmented reality, 3D projection mapping, robotics, wearables, even aerial light shows created with drones, and what all these things have in common is that they have become tools that artists are working with creatively. And my personal belief is that if you stick around for a couple of years and watch this whole phenomenon, I think we will recognize these times we're living in now as a time of incredible imagination and people mixing it up and [00:01:30] trying to figure out this whole thing. But emerging out of it, and I think I'm not the only one that sees this,
Speaker 2: There's this whole emerging new activity of artistic practice. Future Fires, to get to that part, is, um, a large-scale festival of art and technology that I've been putting together with a really great team over the last couple of years. And who are these people on your team? Yeah, well, we've got an amazing advisory panel, which gets back to the kind of origin story. When I started working on this a few years ago, I spoke to Pam Winfrey [00:02:00] who has been a curator at the Exploratorium since 1979, and she said, well, not only do I think this is a great idea, but I'll be on your advisory panel. And people kept saying that. Um, so we've got a really great group of people from the arts side, from the business side, and large event management. I've got a partner in the business, Scott Lipsett, who, um, started a great media company that you can find online called Driver Digital.
Speaker 2: And so he understands the whole capture and distribution [00:02:30] of media part, which is very important to create a live event these days, because that's as much an online phenomenon as it is something that you experience physically when you show up. As to the team that I'm actually working with that are putting on the event: John Mitchell was a producer right here at the Greek Theatre in your backyard for five years, and then moved over and worked with Super Bowl 50 this last year. And his next posting right after that was to come and work with me, along with a few other people he brought from [00:03:00] Super Bowl 50: the marketing director there and the person that's doing our sponsorship management. So there are those folks, and we've got a wonderful guy, Patrick Haynes, who's got a production company of his own, which gets back to the online media part of this, and David Brassard as our CFO, kind of taking care of the money stuff.
Speaker 2: So it's a really great kind of lean, mean team, and we're starting to work with the Midway and Pier 70 partners in San Francisco. Those are venue partners. The location. Yeah. Frequency? How often is this going to happen? Where's it going to happen? What is your vision for that? Yeah, [00:03:30] we've got some really great stuff brewing for early 2017, with both artists and dates from our venue partners. So venue-wise, definitely Pier 70 and the Midway. The Midway is actually a really wonderful 2,500-person venue, with sort of five rooms that orbit around one large one, and they're just getting their permits together and have started doing events there. So those are our partners, and we plan to do events at the Midway until we move over to Pier 70. So will it be completely indoors? Actually, both of them in nice weather provide the opportunity [00:04:00] to do inside and outside.
Speaker 2: And is it once a year? How do you envision this? We're looking at doing several events a year, with kind of a bump in the middle; the larger one will be in the summer months, and probably, the way things look now, we'll be staying at the Midway for the first year and moving over to Pier 70 when we are drawing large enough crowds. Right. And then start rolling out these programs. How much will it cost to go to one of these events? Well, we're trying to keep things affordable. I think running [00:04:30] underneath the surface of all of this is the awareness a lot of us have that the arts community has really been under fire here in the Bay Area for quite a while now, with rising rents. And uh, we don't want to put on an event with some astronomical ticket price just to pay for it.
Speaker 2: So we are carefully having conversations with sponsors, making people share our vision and helping to pay for it that way, which is a model that should be familiar to anybody that's been to Coachella, for instance, or Maker Faire.
So that's part of what's driving [00:05:00] revenue for it. And of course the campaign that's still [inaudible], that's closed now, and we put some money in the bank from that. And I guess the other thing I would say is we're having some really great conversations with people now, and it's taken a while to get here and, uh, just sort of spread the word about what we're doing, but talking to some of the people in the Bay Area that can afford to reach into their pockets and kind of [inaudible]. Investors funding? That's right. But if there are people listening to this and they've got a lot of money in their checking account [00:05:30] and they think this sounds exciting, please reach out to futurefires.com.
Speaker 2: Right. So you're looking for, you're still looking for? We raised investment, uh, last year, and I think we did really well and got to a nice place. And it's sort of an ongoing first raise of capital. Who's paying for all of this? It's worth contrasting a bit with the nonprofit model, which I'm very familiar with; I've worked with a lot of great organizations in the Bay Area and done some grant writing of my own. It just seemed like, as we tried [00:06:00] to figure out why there is not right now a large amount of art and technology in the Bay Area, that part of the answer is that people have been working usually with the nonprofit model, coming from the museum and gallery sort of side of things. God bless SFMOMA and the Gagosian gallery and all of those people. But it just seemed to us, to do a really large festival event and bring in people from around the world with high production values and really do it properly, that it was probably better to model it after some of these larger festivals.
Speaker 3: So like [00:06:30] a for-profit model. Yeah, that's okay.
You've composed several operas; you come from kind of a classical and jazz background. Can you talk about those changes you saw coming some time ago and how that informed your work in doing this event that combines art and technology?
Speaker 2: You're right, I've done a lot of work collaboratively in the Bay Area, and for whatever crazy reason, as a composer I tend to gravitate to these large-scale projects that take some years to realize, and [00:07:00] you wind up doing grant writing and sitting in a whole lot of production meetings and doing a lot of collaboration. I guess I would say I like the collaboration part of it; that's always attracted me, maybe because it partly gets me out of my room. A lot of artists spend time alone, and uh, I enjoy the social part of it. I like hearing people's ideas and helping, you know, solve problems together. So to get to this project: after doing the operas that you talk about and being involved in these often multidisciplinary projects for [00:07:30] years, I was going back and forth between Europe and the U.S. about three and a half, four years ago, and more and more people were sending me these really interesting projects in my inbox.
Speaker 2: You know, things would show up, and I'm sure you've seen things on The Creators Project, or somebody sent you a link from time to time. And what was interesting is every time I looked at these projects and I saw some amazing piece involving projection mapping on the side of a building, for example, or, as I mentioned earlier, an aerial drone-based light show, or [00:08:00] you know, [inaudible]'s work, as an example of an amazing melding of the musical world and somebody who's an amazing visual designer, I was noticing every time I would look to see where they were: they were in Tokyo, they were in Paris, they were in Berlin, they were in Italy, they were in London, and they were not in the Bay Area.
Now we have an incredible technological community here, of course, and a lot of innovation going on. And there are people doing remarkable work in art and tech here, but that doesn't mean they have a large-scale platform [00:08:30] for that. Uh, we've got some wonderful colleagues in the Gray Area Foundation and CODAME, and projects that occasionally do occur at Swissnex or Dorkbot San Francisco. One of our advisors is the person that started Dorkbot San Francisco, a wonderful meetup group. These are places where you can see some remarkable art-in-tech projects, and they're great. They're in an intimate setting, and we're just looking to expand that.
Speaker 3: And a lot of people talk about, um, Burning Man's influence on these
Speaker 2: art and tech installations as well. Yeah, we have an interesting connection [00:09:00] to a number of the people that do large-scale sculptures, through Jeff Whitmore at the Midway, our new partner that I mentioned, and a couple of other people that are kind of orbiting around, that are in that community. Yeah, that's been one of the great things about this, actually: finding all the overlap and all the excitement that is going on. As we discuss this with different people, it really is much more common than not, when we get into a room and talk to people about this, that they're just supportive in every way they can be. Tell me about a few of the artists that you are working with for the Future [00:09:30] Fires project. Yeah, sure. I'll mention a couple. One is a wonderful group called fuse, and I would recommend people check them out online. You could probably find them most easily through the piece that we're looking to bring here next year, called Ljós, spelled L-J-O-S. I think they're just outside Modena in Italy, and I actually got to visit them when I was first starting this project.
A wonderful bunch of guys. As sometimes happens with this sort of work, they're working in architects' offices together, because their kind of brilliance [00:10:00] and creativity and coding talent is appreciated there, and it helps them make a living while they're doing this stuff on the side. They have brought that, and a whole collection of pieces, to festivals all around Europe, and this will be their first time coming to the Bay Area. The piece they're bringing, the one that I mentioned called Ljós, is a generative piece. It involves real-time graphics that are responding to a dancer and aerialist who is part of that piece. Speaker 2: And I'm very interested in that work where you actually have a human element. It's not [00:10:30] just a question of pushing a button and making something run; there's something really warm and organic and unpredictable and wonderful and complicated about what happens when you get human beings involved, whether that's musical, or dance, or having the audience in some way trigger or influence what's going on. That's really interesting. Speaker 3: Who are you working with on the technology piece of that? Speaker 2: Just to pick someone here in the Bay Area: a couple of people that have become good friends of ours and are doing wonderful creative work, Future Cities Lab, [00:11:00] in Dogpatch, in San Francisco. Again, people that have a background in architecture, but people may have seen their work at Yerba Buena Center; they've had two different pieces installed there over the last year and a half. Speaker 2: Their work is interactive, and they tend to gravitate toward these large-scale exhibits, sculptural works, and they're starting to do very well and get some recognition. They've been commissioned in Washington, D.C. for a new piece they're working on. So that would be an example, and here's another, possibly not as well known, but I'm sure he will be.
[00:11:30] There's a fellow here on a Fulbright, I think at SFAI, and his name is Can Büyükberber. I'm going to actually spell that in case anybody wants to look up his work: it's B-U-Y-U-K-B-E-R-B-E-R, Büyükberber. He's been all over the place; I don't know when that guy sleeps since he got here. He's had work presented down in L.A. at a festival there recently. He's working with immersive environments and VR and all sorts of light-based art. We've got a whole family of people that [00:12:00] we're in touch with. Probably the best thing to do is visit our website. Speaker 1: If you're just tuning in, you're listening to Method to the Madness, a weekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. Today I'm interviewing Clark [inaudible], the CEO and artistic director of Future Fires. What is the mission of Future Fires? Are you trying to reach a new demographic? Speaker 2: [00:12:30] Well, there are two parts to that. There are the people we're finding a connection to, people that are interested in what we're doing, and there's our mission, which is related. I would say the audience we're finding is really broad. It's primarily a youth-oriented event that we're putting on. If you talk to our marketing director, she'll tell you that you need to get really specific about who you're reaching out to and the kind of messaging you do. It's not my area of expertise, but she knows what she's talking about. Speaker 1: So you are focusing on a demographic. Speaker 2: Yeah, sure. You kind of have to, and [00:13:00] it also just makes sense because of the niche that we see, or the vacuum, in the Bay Area. We're looking primarily for people in their twenties and thirties; that's the event that we see missing for a lot of the people. You referenced Burning Man earlier: people that are going to Coachella, people that are going to Burning Man, people that might make the trip down to Austin for South by Southwest.
A lot of young people that are very creative. They might be working in the tech industry, they might have a design background, they might be art students. They might [00:13:30] just be incredibly rabid fans of music and large events. It's that younger audience that this is primarily geared toward. But there is also, I am told, again by people that know marketing, a secondary demographic, and we're certainly welcoming people in their forties, fifties, and sixties who have been around the Bay Area long enough to see all the evolution that's happened. Speaker 1: And who have the deep pockets. Speaker 2: Yeah, sure. That doesn't hurt. I guess I would say one other thing on this topic, too, which is important: [00:14:00] I mentioned earlier a lot of people being priced out of the Bay Area that are in the arts. I think it's really a wonderful thing about this project that it's the only place I know of where technology and the arts are really shaking hands and getting along. You've got artists that are embracing code, software and hardware, the increasingly intuitive interfaces that make it possible to do creative work if you're coming from the creative side. And you've got people that have companies and are working with this frontier technology that is more and more emerging; [00:14:30] they're looking for opportunities to show off what the stuff can do. They're creative people too. They may not be artists by day, and that may not be their primary skill set, but they're happy to partner with people that can show off what can be done with what they're innovating. An example of that would be the great incubator program that's been going on at Autodesk now for a couple of years, and one of the groups that's kind of in our family has been there several times working on projects in their incubator program. Speaker 3: What's an example of how you're moving music forward [00:15:00] in this tech-plus-art scenario?
Speaker 2: Yeah. Well, I don't want to come off as someone that's masterminding something that's already going on. I think we're more in the position of curating and trying to provide a stage for a lot of wonderful stuff, so I can name some people that I admire and that we hope to see on our stages: Amon Tobin, who actually lives right here in Marin, or Flying Lotus, who's from L.A. I mentioned Nosaj Thing. These are artists that are not only creating some great music, but if you look at what [00:15:30] they've been doing visually, you see that they've been paying a lot of attention to that, and they're looking to be innovative and experimental, and to have a lot of fun with what their audiences are looking at as well as hearing. I guess the band Tool would be another example, and that's an interesting thing to bring up, because the artist from Turkey that I mentioned, Can Büyükberber, who is right now at SFAI, created all the visuals for their last touring show. Speaker 2: And if you look up Tool online, I believe the first video that pops up shows you the visuals that our artist-in-residence created for that last touring show. [00:16:00] And that was a really delightful discovery for me, because, right, I do come from a music background, but at the time that I started working on this, I was thinking of music as another category that we needed to represent, just as we would represent VR or fashion and tech. And I realized that that was all wrong, actually. If you look at what's going on in the music world, people are more and more embracing the visual design that's possible with these kinds of tools. And why is that? Part of it is that we're looking at a generation that experiences things as much online [00:16:30] as they do live.
And if you're a musical performer, even if you're someone that strums an acoustic guitar, which is a great thing too, you need to have some visual signifier out there, something that lets people know who you are. It's only natural, I think, that people would be exploring more and more how to tell a story visually, developing some kind of language there, and using that as a creative medium in its own right. Speaker 2: So I think that's part of it. And I also think that as these tools have arisen, [00:17:00] projection art for example, or VR, people are naturally eager to see what they could do with them if they're coming from the musical side. You know, I think it's great, too, to go to a concert and watch a cellist who's playing sublime music and be able to focus on that one element alone. I hope that never goes away, but it's just undeniable that there's a whole new generation of musical artists embracing the possibility of really creating a visual feast. Speaker 3: I was just reading the transcript of T Bone Burnett's [00:17:30] keynote address at AmericanaFest this September. He talks about the challenge that we face with technology, says it has no aesthetics or ethics, and he kind of insinuates that Internet technology is a prison. So it was really kind of a contrast when I saw what you were doing. Speaker 2: And yet I understand. There are so many people in the arts, I think, that feel under siege, and there's a whole phenomenon in our culture of [00:18:00] the arts in general being marginalized. One of the members of our team has made the point, and I think it's quite a positive and constructive one, that what we can do here, and I hope we do as we build this, is provide a different and very positive role model for younger people who are trying to figure out what to do with their lives. Because being an artist, as it's usually defined, just doesn't look like a very good option at the moment.
But if you see people that are doing things with code, involved in these remarkable collaborations, and making a [00:18:30] decent paycheck, which is something we hope to enable through this sort of work, that's pretty great. That's pretty interesting if you're 11, 12, 13 years old and you're thinking, well, I don't know that I really want to go into banking; I don't know that I really want to be a lawyer. Speaker 3: Then there's the issue of arts in the schools today. There's so little of it, whereas when I was growing up, we had choices of instruments, we had choir, we had plenty of arts for free. To go that same path today takes a lot of money and time [00:19:00] that most people don't have. Speaker 2: That's right. Speaker 3: So when you're talking about young people with coding, it's something they can do, and they can do it inexpensively. Speaker 2: Right. I really believe, too, as I said at the beginning of our time here, that this phenomenon is just now emerging. It's very easy to look at what's happening in 2016 and go, well, that's pretty cool, I think I see some interesting work going on there. But if you just project forward, considering how fast things have moved, how much more powerful processing is now, [00:19:30] how much more intuitive the interfaces available to artists are, and this kind of body of work and practice that has started to emerge, I just think there's huge potential there for anybody young today looking for something creative to do. And again, that's not ever going to take away the beauty of what T Bone Burnett does, or Ry Cooder, or any number of wonderful instrumentalists. Speaker 3: Where do you see Future Fires in, say, five years? Do you think it's going to evolve into something else?
Speaker 2: Well, I can tell you that our [00:20:00] venue partner, the Midway, is really working hard to make their new venue in the south of San Francisco a center for community and for the arts and for innovation. And so I have to kind of put my answer together with what they have in mind, and that's a really nice thing to do. Partnership is a great thing, if it's the right kind of partnership. They would like us to stick around for years and work with them and build up the audience at the Midway and at Pier 70. They also run Public Works, for people [00:20:30] who have seen shows there. And Jeff has been working recently with the people that do shows at the Mint. Just because those guys have been in event production for a long time in San Francisco, there's a lot of opportunity there to do shows large, small, and medium. So we want to do this not because we intend to take over the world, but just because we naturally think the interest is there, and it will emerge more and more as we create a chance for people to come out to a large event. [inaudible] Speaker 3: What will you be doing for [00:21:00] artists? Speaker 2: I hope we do a lot for artists. I hope we provide an opportunity for them to do what they do, more than they have now. We hope we provide a chance for people coming from overseas that until now have not had a chance to do what they do at a major media arts festival in the Bay Area, because there hasn't been one. But above and beyond that, I would say something that's kind of interesting to me: it really will not cost us much to do this, and yet it turns out it would be slightly [00:21:30] revolutionary. If you look at some of the online portals where you can go and watch art-and-tech projects, let's just say that there are places you can go and watch these projects online, and I happen to know from the artists that they haven't received a dime for the videos that have been produced and put up there.
And we would like to change that. I mean, even if we can institute kind of a Pandora model, or even do a bit better than that, and give a few pennies on the dollar to artists that are partnering with us and providing content. I can tell you as an artist myself, it's great to have a little passive [00:22:00] income showing up in your mailbox every month. Speaker 3: I want to talk about your background. Speaker 2: Sure. Speaker 3: Because I don't know if everybody knows about you, but not only have you written operas, you're still teaching jazz at the Berkeley Jazz Workshop. Speaker 2: That's correct. Speaker 3: You founded the Music Theater Project at Z Space. I mean, you have an amazing background in music, so it makes it particularly interesting to me that you would get involved in something like this, because you really know what you're talking about in terms of [00:22:30] 20th-century music, and to move forward into the 21st century with that kind of background is really powerful. Speaker 2: Thanks, that's very flattering. I am doing this with some other people, and I think I've mentioned some of them already. It's important to stress that I would be a little crazy to try to do this all on my own, and I'm not sure anybody has the skill set to do large event production of something that pulls together these different worlds without a whole lot of help. So I've got some great people around me. But on a personal [00:23:00] level, as far as the jazz education that you mentioned: yeah, the Berkeley Jazz Workshops go on and on, and they're easy to find online. I'm also teaching a class that's coming up at the Jazzschool, for those who are interested in that part of what I'm doing; it's now called the California Jazz Conservatory, and they've got great programming happening over there, with a lot of remarkable musicians coming through. Speaker 2: Also, this is a fun month for me: the Oakland Symphony is playing a piece of mine as part of their opening concert.
They're playing a piece called Red States, Blue States that I did as part of the Under Construction series for [00:23:30] the Berkeley Symphony about eight years ago, and because of the election season coming up, I think Michael Morgan thought that would be an interesting piece to put on the program. So I've got sort of a curtain-raiser, and then it's Elgar and Mahler on that program. If you go to the Oakland Symphony's site, you can see their opening concerts coming up. So that's pretty exciting. Speaker 4: [inaudible] Speaker 3: [00:24:30] And you grew up on the East Coast and came here in 1982. What brought you to California and the Bay Area? Speaker 2: I moved out here, I was just telling somebody this the other day, with a drummer: my bass, his ten-speed bike, and his drum set in my Volkswagen Beetle. I really don't know how that's possible, but it's true. We did that, and I landed here because I was looking for a place to play music professionally, and I got pretty [00:25:00] lucky. There was a bassist here in the Bay Area that I got to know, who moved back to Belgium about five months after I got here, and he basically gave me all his work, and I bought him a box of cigars. So I had a really nice introduction to what was then an extremely vibrant jazz scene in the Bay Area, and I made a living between that and teaching for the next decade. But toward the end of the 1980s I started moving more and more toward composing, and that launched me into a lot of the collaboration that I was talking about earlier, which suits me really well. I like working [00:25:30] with creative minds and groups of people. Speaker 3: Yes. Is it unusual to find jazz composers and jazz performers in the opera world and the more classical world? Is that unusual? Speaker 2: Less so, certainly, than it was a few decades ago. Speaker 3: But when you started, was it unusual for someone to come out of [inaudible]? Speaker 2:
Actually, I think at the time that I was doing that, there were some other composers. Steven Stucky comes to mind, or Paul Dresher here in the Bay Area, an electric guitarist originally: people that were not coming from a background of classical piano or the strict [00:26:00] conservatory path. Speaker 3: It might be a little more common coming out of America than Europe. Speaker 2: Sure. And it only makes sense, because if you grew up listening to hip-hop, or rock, or world music, and that's what you love, and then you get interested in theater and in the vocal tradition, you're going to bring those things with you, and you're going to be looking for ways to work with the music you love and the things that are relevant. Speaker 3: I think it's really great what you're doing with Future Fires, because it's allowing people to not get pigeonholed: [00:26:30] you're a cellist, or you're a dancer, or you're a software programmer. It's just an opening. I'm looking forward to it. Speaker 2: Well, there are so many remarkable people. You asked me to mention a few of the artists, and there are many more of them on our website. We're really building what I see as a family of people with common interests that are doing really remarkable, inspired work, and each one of them individually, week by week, month by month, is off working wherever they are, you know, here in the Bay Area or in London or [00:27:00] in France, and they're thinking about the possibilities that are emerging from this domain of work and pushing the envelope all the time. There's just great new stuff popping up. Speaker 3: And this kind of innovation: will this be unlike anything anywhere in the world when it starts up? Speaker 2: No. Again, I want to avoid sounding like I'm doing something that's never been done before.
I think what's unique about this is that the Bay Area has not seen a large stage for this kind of work, an opportunity with high production values. Yeah, I [00:27:30] think it is time. But there are great festivals: the STRP Festival in Eindhoven, for instance, which happens in Holland every year, is one that comes to mind, or the Berlin Biennale. And there's Ars Electronica: somebody on our advisory panel started the Futurelab at Ars Electronica, which has been going since 1979. And I've mentioned a few times now these drone-based aerial light shows; that's Horst Hörtner, who actually pioneered that with Intel, and that's an amazing thing. You can see samples of that work online. Speaker 3: Is this something that's going to be coming up in [00:28:00] Future Fires next year? Speaker 2: Not this next year, because it's not only financially ambitious, but you run into problems in the United States with the FAA. I've talked to Horst about it a lot. We think we might be able to eventually do it at Pier 70, because there's such a huge parking area there, and also it's under the authority of the Port rather than the city of San Francisco, and things are just a little bit looser there. So we hope to do that. Speaker 3: Let's say I go to this Pier 70 event next year: will I be sitting, walking, participating? What is the... Speaker 2: Both at the Midway [00:28:30] and later, when we move to Pier 70, it actually depends on the event. I'll give an example. We are in discussion with the Goethe-Institut and a Berlin-based artist named Robert Henke, who has also done work at Gray Area Foundation. He does just remarkable laser light shows; it kind of elevates that whole world that some people know from discos and so on to a whole other realm. He's just an amazing artist, and that will be a seated program. It will really be like a concert: people will come in and experience what he's doing for about 55 minutes, so [00:29:00] it will be one thing at a time.
So we're doing some smaller events, and Robert Henke would be an example; we might present a few other artists that night, but that would be at the Midway: a few thousand people, relatively contained, over a night or two. When we move to Pier 70, which is an enormous space, just remarkable for those who haven't seen it, that will be largely standing room and will provide the opportunity to present potentially dozens of artists. Speaker 3: That's great. Yeah. If you could just tell us again what your website is for Future Fires. [00:29:30] Speaker 2: Sure. It's futurefires.com. Speaker 3: Oh, that's easy. And again, it's a first-of-its-kind large-scale interactive art and technology festival that's coming up in 2017. We're so happy to have you on the program. Thank you so much for taking the time. Speaker 1: You've been listening to Method to the Madness, a weekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. Speaker 4: Tune in again next Friday at noon. See acast.com/privacy for privacy and opt-out information.
Paco Romane and George Chen welcome comedians Anna Seregina and Dave Ross (Terrified podcast) to the Sup Doc living room. They discuss the much-hyped documentary The Wolfpack (2015, Crystal Moselle). The Wolfpack is about the Angulo family, who homeschooled and raised their seven children in the confinement of their apartment on the Lower East Side of New York City. Locked away for fourteen years in a 16th-storey, four-bedroom apartment in the Seward Park Extension housing project, the seven children—six brothers named Mukunda, Narayana, Govinda, Bhagavan, Krisna (Glenn), and Jagadesh (Eddie), and their sister Visnu—were homeschooled by their mother, learned about the world through watching films, and re-enacted scenes from their favorite movies. In January 2010, against their father's instruction to remain inside, the brothers began exploring Manhattan and the world outside. Dave Ross is a stand-up comedian in Los Angeles. Sometimes his comedy is vulnerable and personal; other times his comedy is loud, stupid, and about butts. You can find him stumbling around L.A. and the country, performing at every festival, club, theater, bar, fire hall, or bombed-out stone building that will have him. He's in a sketch group called WOMEN that makes sketches for Comedy Central and IFC's Comedy Crib. He won a Moth Grand Slam, he got interviewed on WTF with Marc Maron, and he told a story on Comedy Central's Drunk History. He likes his cat. His cat's dope. Anna Seregina is a stand-up comic and performer, described as having the "worst aura." She was named a "Comic to Watch" by the SF Weekly.
She produces the Los Angeles branch of the long-running SF show "the Business." She has appeared in comedy festivals (RIOT LA, SF Sketchfest, Bridgetown, Sacramento, Crom, SF Comedy Day, SF Comedy & Burrito), hosted music festivals (Panache's Bruise Cruise, Phono del Sol), told stories at storytelling events (the Moth, Porchlight), and done weird things publicly (SFMOMA, Artists' Television Access, SFAI, Public Access TV). She starred in Joey Izzo's "Stepsister," which screened at the Cannes, San Francisco International, and Traverse City film festivals in 2013. Most facts about her are true. Most truths about her are facts. Follow us on: Twitter: @supdocpodcast, Instagram: @supdocpodcast, Facebook: @supdocpodcast. Sign up for our mailing list. And you can show your support for Sup Doc by donating on Patreon.
Andrew Benson flies slightly under the radar, doing design work for Cycling '74, teaching at SFAI in San Francisco, maintaining an active artistic practice, and doing visual work for Name Brand Stars that you certainly have seen. But Andrew doesn't really long for a spread in People magazine; rather, he is constantly diving into edge-case technology, looking for new ways of drawing emotion out of media art viewers. In the podcast, I recall my first interaction with his work, and having a visceral reaction based on the movements of a simple drawing. This sort of expression is key to Andrew's art, and in this podcast he talks openly about how he approaches art technology in the pursuit of these feelings. Another great conversation, and it opened my eyes to opportunities in the visual space that I'd not previously considered. Enjoy!
Henry Gunderson joins me via Skype from SFAI. We talk Fecal Face, Paper, Abstraction, Faces, Art School, Skateboarding, Environmental Shifts, Osama bin Laden, White House Facebook, Celebrating Murder, Benazir Bhutto, Gaddafi, Black Dudes in Horror Movies, Hitler, Art Movements, Bike Gang, Rabbit Tattoos, and Walter McBeer Gallery
Explore the underground with Audio Revolution! This show investigates the world of unconventional, underrepresented, and underground arts. From Hip Hop to art happenings, and video games to spoken word poetry, this Audio Revolution! will get you reconsidering what is art and why it might or might not be important. Let hosts Adrian Andre of Santa Fe Community College and Conor Cole of the Master's program guide you through the unseen and unappreciated. With interviews of Lisa Donahue of SFAI about Flash Flood and Vince Kadlubek, a founder of Meow Wolf, a spoken word Blessing poem by the Santa Fe Indian School spoken word team, as well as pieces about the commercialization of Hip Hop, video games as an art form, and a special Seeds of Sound rap by Audio Revolution! production team members, there's no way you won't be inspired to learn more about what's just outside of the box. Enjoy!
This week Brian, Marc, and Patricia sit down with Hou Hanru for a conversation over wine and olives. Currently the Director of Exhibitions and Public Programs at SFAI, Hanru has curated a number of major international exhibitions, including the Istanbul Biennale, the Guangzhou Triennale, and the 50th Venice Biennale. The interview spans Hanru's education in China after the Cultural Revolution, globalism, principles of self-organization, and what it's like to curate both internationally and locally.