Podcasts about Cinema 4D

  • 96 PODCASTS
  • 194 EPISODES
  • 1h 5m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 15, 2025 LATEST

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about Cinema 4D

Latest podcast episodes about Cinema 4D

Bad Decisions Podcast
#68 Rebūke on Making World-Famous Music, Visuals, & the Creative Process

Bad Decisions Podcast

Apr 15, 2025 · 92:35


Rebūke is an Irish DJ and music producer. He's worked with some of the biggest names in electronic music, including deadmau5 and Anyma, creating tracks like the global hit "Syren," featured on Anyma's album Genesys. Rebūke's debut album, The World of Era, took off with "Endless," a collaboration with deadmau5 and Ed Graves that highlights his unique beats and signature vibe.

In this episode, he talks about how music and visuals complement each other, how the live show experience is evolving, and why 3D artists are going to play a massive role in the future of the music industry. A fun fact about him: he creates some of his own visuals and knows how to use tools like Cinema 4D, Unreal Engine, and more.

Episode 68 timestamps:
00:00 Introduction
01:47 How Rebūke Ended Up Creating Mind-Blowing 3D Visuals
13:03 How Visuals in Music Have Evolved and What's Coming Next
20:09 Rebūke on Why Visual Artists Are the New Rockstars
22:42 How AI Is Changing the Game for Creatives
30:02 How Rebūke Is Changing the Game in Live Visuals
36:11 How Unreal Engine Is Defining a New Era for Real-Time Live Visuals
43:55 THIS Is How the Top Musicians Collaborate
50:44 What a Day in Rebūke's Life Looks Like
53:06 Does Work-Life Balance Exist for a Creative?
01:00:30 How Rebūke Manages Distractions During World Tours
01:06:23 The Breakthrough That Changed Rebūke's Life
01:15:06 What They Don't Tell You About the Music Industry
01:19:08 THIS Is the Biggest Trap Young Artists Fall Into
01:30:09 Rebūke's Future Projects & Tour

If this podcast is helping you, please take 2 minutes to rate it on Spotify or Apple Podcasts. It will help the podcast reach and help more people!
Spotify: https://open.spotify.com/show/12jUe4lIJgxE4yst7rrfmW?si=ab98994cf57541cf
Apple Podcasts (scroll down to review): https://podcasts.apple.com/us/podcast/bad-decisions-podcast/id1677462934

Find out more about Rebūke:
https://www.instagram.com/rebukemusic/
https://www.youtube.com/c/RebukeMusic
https://x.com/rebukemusic
Spotify: https://open.spotify.com/artist/113reBz1jA6rVxbXl55mlj?si=xQymWh36RSeHGN3kqbyHeg

Join our Discord server where we connect and share assets:
https://discord.gg/zwycgqezfD

Bad Decisions Audio Podcast

Mograph Podcast
Ep 433: April Fools Day Episode with Nick DenBoer

Mograph Podcast

Apr 12, 2025 · 67:10


Join Matt Milstead and special guest Nick DenBoer, a.k.a. Smearballs, as we talk all things Motion Graphics, Cinema 4D, Motion Capture, and more.

School of Motion Podcast
Already Been Chewed and the Journey to Motion Design Success

School of Motion Podcast

Mar 19, 2025 · 72:41


EJ jams with Barton Damer, the creative muscle behind Already Been Chewed (ABC)...basically the Tony Hawk of motion design. This dude turned his skateboarding obsession into a full-blown 3D animation studio that now cranks out mind-blowing work for Nike, Adidas, Star Wars, and Marvel.

Check out the corresponding blog post with key takeaways: https://www.schoolofmotion.com/blog/already-been-chewed

Artists
• Barton Damer: https://www.linkedin.com/in/barton-damer-92a32918
• EJ Hassenfratz: https://www.youtube.com/@eyedesyn/videos
• Nick Campbell: https://greyscalegorilla.com/about-us/
• Paul Babb: https://www.linkedin.com/in/paulbabb/
• Rob Dyrdek: https://robdyrdek.com/
• Mark Fancher: https://www.youtube.com/c/MarkFancherFX
• Dan Arsham: https://www.danielarsham.com/
• PJ Richardson: https://www.laundry.studio/
• Jonathan Winbush: https://www.youtube.com/channel/UCmzWP6o2cw73moEF7LO_KvA

Studios
• Already Been Chewed (ABC): https://www.alreadybeenchewed.tv/
• Greyscale Gorilla: https://greyscalegorilla.com/
• Maxon: https://www.maxon.net/en
• LRG: https://l-r-g.com/
• Nike: https://www.nike.com/
• Adidas: https://www.adidas.com/
• Under Armour: https://www.underarmour.com/en-us/
• Street League Skateboarding: https://www.streetleague.com/
• MTV: https://www.mtv.com/
• ESPN: https://www.espn.com/
• Discovery Channel: https://www.discovery.com/
• New Balance: https://www.newbalance.com/
• Louis Vuitton: https://www.alreadybeenchewed.tv/louisvuitton
• Tiffany and Co.: https://www.tiffany.com/stories/collaborations/daniel-arsham-pokemon/
• Legwork: https://legworkstudio.com/animation/
• Laundry Studio: https://www.laundry.studio/
• SoFi Stadium: https://www.laundry.studio/ooh/project-four-l3zw3-jecsr-hr7em-6yptl

Work
• Fantasy Factory: https://www.paramountplus.com/shows/rob-dyrdeks-fantasy-factory/
• Rob & Big: https://en.wikipedia.org/wiki/Rob_%26_Big
• Snack Off: https://tv.apple.com/us/show/snack-off/umc.cmc.3cjzt6066id3jq5koxur3vx9p
• Ridiculousness: https://tv.apple.com/us/show/ridiculousness/umc.cmc.234le4y5rrb4satzsf28ix6yx
• Digital Artist of the Year: https://www.behance.net/gallery/12189735/COMPUTER-ARTS-MAGAZINE-Digital-Artist-of-the-Year?locale=en_US

Resources
• NAB: https://www.nabshow.com/
• Cinema 4D: https://www.maxon.net/en/cinema-4d
• After Effects: https://www.adobe.com/products/aftereffects.html
• Computer Arts Magazine: https://www.creativebloq.com/computer-arts
• Adobe Photoshop: https://www.adobe.com/products/photoshop.html
• Adobe Illustrator: https://www.adobe.com/products/illustrator.html
• Final Cut Pro: https://www.apple.com/final-cut-pro/
• iMovie: https://support.apple.com/imovie
• Houdini: https://www.sidefx.com/products/houdini/
• Unreal Engine: https://www.unrealengine.com/en-US
• Behance: https://www.behance.net/onboarding/hirerCreative
• Nixon: https://www.nixon.com/
• Rob & Bart Interview: https://www.youtube.com/watch?v=frJ4rcpyFvI

The Monday Meeting
Just Make the Thing with EJ Hassenfratz | March 17, 2025

The Monday Meeting

Mar 18, 2025 · 71:13


In this episode of Monday Meeting, host Jen Van Horn sits down with motion designer EJ Hassenfratz to explore his journey from digital artist to creator of physical plushies and vinyl toys. This episode includes:

• EJ's evolution from broadcast graphics to motion design and 3D artistry, discovering Cinema 4D when it was first gaining popularity
• How making tutorials became an unexpected career path after preparing for a presentation at NAB
• The emotional journey of creating his first plushie based on his beloved dog Gus, and how it became a cathartic experience when his actual dog passed away
• Behind-the-scenes insights into toy production, from finding a reputable manufacturer to designing custom packaging
• The unexpected challenges of transitioning from digital to physical products, including order fulfillment, shipping logistics, and overcoming imposter syndrome
• How EJ's exploration of Japanese kawaii mascot culture influenced his character designs
• The importance of pushing through creative fears and taking action rather than waiting for perfection

Throughout the conversation, EJ emphasizes the value of community support and how the motion design industry has evolved to become more open and collaborative. He shares honest reflections about the mental barriers that delayed his physical product launch despite having completed the toy design and manufacturing a year prior.

Next week features an open discussion where listeners can bring questions or topics to the group. Please subscribe to The Monday Meeting newsletter on Substack and/or via email for open-call opportunities for listener spotlights, feature requests, and community participation!

Visit MondayMeeting.org for this episode and other insightful conversations from our motion design community!

SHOW NOTES:
• Monday Meeting Patreon
• Monday Meeting Discord
• Monday Meeting LinkedIn
• Monday Meeting Instagram
• EJ's Portfolio
• EJ's Instagram
• Gus the Pug
• Make Ship
• Shippo
• thebeastisback

Mograph Podcast
EP 430: Special Guest PJ Richardson from Laundry

Mograph Podcast

Mar 5, 2025 · 70:57


Come join us as we talk to the incredible PJ Richardson from Laundry Studios about all things Cinema 4D, After Effects, Unreal Engine, and more!

School of Motion Podcast
Unreal Engine, Creative Communities, and Career Growth with Cart & Horse

School of Motion Podcast

Feb 26, 2025 · 65:39


What happens when a small group of motion designers in Detroit evolves into a powerhouse of collaboration and innovation? The crew from Cart & Horse sat down with Joey on the School of Motion podcast to share their journey—from navigating the tight-knit creative community in Detroit to pioneering new workflows in Unreal Engine. Check out the corresponding blog post and key takeaways here: https://www.schoolofmotion.com/blog/cart-and-horse

Mograph Podcast
LIVE: Ep 428: Special Guest Konstantin Eydelman

Mograph Podcast

Feb 19, 2025 · 119:11


Come join us as we talk to the super talented Konstantin Eydelman about all things Cinema 4D, After Effects, Unreal Engine, and more!

School of Motion Podcast
Mastering Blender and Navigating a Creative Career with Elijah Sheffield

School of Motion Podcast

Jan 29, 2025 · 85:22


Blender artist and motion designer Elijah Sheffield shares how he transitioned into 3D, built a standout demo reel, and turned personal projects into career-defining moments. Check out the corresponding blog post (with episode highlights) here: https://www.schoolofmotion.com/blog/elijah-sheffield-blender

Paid 2 Draw
26. YONK Pushes Boundaries In Virtual Reality (Live at Pictoplasma Berlin 2024)

Paid 2 Draw

Jan 21, 2025 · 51:01


YONK is a Dutch 3D animation studio consisting of artistic power couple Victoria Young and Niels van der Donk. Coming from different backgrounds in fine art and graphic design, they decided early on to combine their individual skills to create 3D work. Since 2019 they have specialized in using virtual reality and 3D sculpting tools to create uniquely strange, textured, and colorful artworks, animations, and character designs for an increasingly international client list that includes Google, Sprite, Nike, Amazon, and The New York Times, but also just for the sake of creating and having fun experimenting.

In this episode they take us into their world and explain how sculpting 3D figures in virtual reality is more intuitive and less technical than the traditional way with a keyboard and mouse. Working in VR has led to quicker results and helped them discover their unique style. By embracing an explorer's mindset, they experiment in a way where everything is allowed, and they have created a body of work by describing their nightmares to each other.

They generously share how their style gradually developed by not knowing how to do things "properly" and how they made a conscious decision to leave the imperfections and happy accidents in their work to give it a more organic feel. While collaborating, they acknowledge each other's strengths and try to involve each other throughout the whole process to create a cohesive result.

Even though their work is mostly created in VR, it can be transferred to many other mediums and be experienced by everyone as an animated video, a 3D print, or a traditional 2D image. But their activities are not limited to making art — they also develop tools to solve specific problems within the sculpting or animation programs and share them with the growing VR and 3D sculpting community.

It was that constant sharing of their personal work on Instagram that caught the attention of potential clients who want to be a part of their exciting experiences and set their mark in the VR space. To YONK, client work not only poses creative challenges, but also requires them to incorporate some planning and organizing into their process while still keeping it as intuitive and natural as possible.

_________

MENTIONED LINKS:
• Adobe Medium
• Substance 3D Modeler (by Adobe)
• Joseph Melhuish
• Meta Quest Pro
• YONK & Friends (live stream)
• Christopher Rutledge
• Blender (open-source 3D & animation software)
• Step Motion on Blender Market
• WarpySTEP v1.2 for Blender (by Will Anderson)
• Grease Pencil Resources for Blender
• Geometry Nodes for Blender
• Dédouze
• Other 3D software: Cinema 4D, Houdini, ZBrush, Maya

_________

FOLLOW YONK:
Instagram: @yonk.online
Website: yonk.online
YouTube: YONK
TikTok: @yonk.online
Twitch: yonkonline
Threads: @yonk.online
Twitter: x.com/yonkonline

_________

If you liked this episode, please subscribe and leave a review. And follow Paid 2 Draw on Instagram and TikTok.

_________

Hosted by Vicky Cichoń and Dave Leutert. Music by Amanda Deff. Assistance by Diana Lazaru.

_________

This interview was recorded on May 5th, 2024, during the 20th annual Pictoplasma Conference at silent green in Berlin. Each spring, Pictoplasma transforms the city into an international meeting point for a diverse scene of artists and creatives, trailblazing the face of tomorrow's visual culture. The central conference brings together 900 key players on a global scale and features 20+ lectures by forward-thinking creatives. The accompanying animation screenings showcase cutting-edge short films, with most of the filmmakers present in Q&A rounds. The character lab offers hands-on workshops, immersive media demos, panels, and networking. Get your tickets for Pictoplasma Berlin 2025 (May 1st–4th).

School of Motion Podcast
Standing Out in Motion Design: Joel Pilger's Strategies for Success

School of Motion Podcast

Jan 16, 2025 · 79:06


In the latest episode of the School of Motion Podcast, host Joey Korenman sits down with industry veteran Joel Pilger—a name synonymous with success in the motion design world. With a career spanning over two decades, Joel has:

• Founded and led Impossible Pictures, a top creative studio that grossed over $40 million and garnered major awards.
• Advised leading independent studios worldwide, including Cream, Giant Ant, Laundry, Mighty Nice, Polyester, Sarofsky, and STATE.
• Launched FORUM, a community where studio founders master the art of business together.

Tune in as Joel shares invaluable insights on not just surviving, but thriving, in the evolving motion design landscape.

See the corresponding blog post here: https://www.schoolofmotion.com/blog/joel-pilger

School of Motion Podcast
The most EPIC 2024 roundup of all things Motion Design

School of Motion Podcast

Dec 30, 2024 · 383:54


2024 was a transformative year for motion design - from AI disruption to the evolution of real-time tools, emerging platforms, and a changing economic landscape. In this comprehensive year-end roundup, Joey Korenman, EJ Hassenfratz, and Aharon Rabinowitz break down everything that shaped our industry and peer into what 2025 might bring. We also asked some industry luminaries to weigh in, so you'll hear from the likes of Buck, Scholar, Motion Hatch, Colosseum, Curious Refuge and more! Get ready for candid insights on the state of motion design, software updates that changed the game, the impact of AI, and how artists are adapting to an ever-shifting landscape. Whether you're a seasoned pro or just getting started, this conversation covers the trends, tools, and opportunities that matter. Plus, hear our panel's bold predictions for 2025 - from the future of real-time rendering to emerging platforms and where the next big opportunities lie for motion designers.

CG ПОДКАСТ №1
Vlad Petrenko. Houdini Is Simpler Than Cinema 4D.

CG ПОДКАСТ №1

Dec 26, 2024 · 104:17


Vlad Petrenko. Houdini Is Simpler Than Cinema 4D. by matematic.xyz

Mograph Podcast
Ep 424: Headline Show

Mograph Podcast

Dec 11, 2024 · 44:01


Dave and Matt talk through the latest updates in Octane Render, Cinema 4D, and ZBrush. They critique the new Coca-Cola AI ad, discuss node-based AI animation, and share plans for future shows.

Mograph Podcast
Ep 422: Winbush Here!

Mograph Podcast

Nov 8, 2024 · 77:00


Take a moment to breathe... Dave and Matt welcome @JonathanWinbush to explore the latest in Octane Render and Unreal Engine, the rise of Cinema 4D and Blender among freelancers and studios, and the balance between freelancing and studio work. They also discuss mental health, AI's role in design, and advancements in dynamics and liquid effects.

Mograph Podcast
Ep 420: Dave & Matt // Headline Show

Mograph Podcast

Oct 25, 2024 · 31:06


Dave is sick, but the show must go on, so this one's short. We chat about new AI features in Adobe programs, personal experiences with Cinema 4D's updates, Ben Marriott's new course, and the upcoming anniversary party at Already Been Chewed.

XR MOTION
48 - MoGraph Podcast @_MattMilstead

XR MOTION

Jun 21, 2024 · 102:48


Matt Milstead is a seasoned digital artist and co-founder of MoGraph.com, a leading platform in the motion graphics and 3D design industry. With over 20 years of experience, Matt has worked on a diverse range of projects, from corporate presentations to high-end commercial productions. He is renowned for his expertise in Cinema 4D and After Effects, consistently pushing the boundaries of what's possible in motion design. Through MoGraph.com, Matt has created an invaluable community resource, offering tutorials, podcasts, and support for fellow artists. His dedication to the craft and passion for education have made him a respected figure in the digital art world. --- Support this podcast: https://podcasters.spotify.com/pod/show/xrmotinon/support

Freelancer Podcast
"I Completely Replaced My Work Software with AI" - AI in Freelancing | With Bonny Carrera

Freelancer Podcast

Jun 4, 2024 · 66:45


In this format, freelancers share how they use artificial intelligence in their day-to-day freelance work and what impact it has on their self-employment. Bonny is a 3D illustrator who now creates his art with Midjourney instead of Cinema 4D, as he did before. In this episode he explains what led him to that decision. We also talked about the ethical implications and about pricing.

Bonny:
Instagram: https://www.instagram.com/bonnycarrera/
Website: https://bonnycarrera.de/
Masterclass: https://bonnycarrera.de/masterclass
--
The all-in-one tool for freelancers: http://www.goodlanceapp.com/
--
Yannick on LinkedIn: https://www.linkedin.com/in/yannick-krohn-9126a5153/
Message us on Instagram: https://www.instagram.com/freelancerpodcast
More practical tools for your start as a freelancer: http://FreelancerTool.de
Join our Facebook collaboration group: http://bit.ly/2yE2laI
Join our Slack workspace for freelancers: https://bit.ly/2SGLIay
Contact us: http://Freelancer-Podcast.de

Bad Decisions Podcast
#44 Quitting Everything for your Dream with the CEO of Greyscalegorilla

Bad Decisions Podcast

Mar 20, 2024 · 131:22


Nick Campbell is the founder of Greyscalegorilla, a platform known for its high-quality 3D assets, counting Microsoft, ESPN, and Sony among its clients. He started the company independently and has since scaled it significantly over the years. Greyscalegorilla helps 3D artists create realistic renders efficiently across platforms such as Cinema 4D, Blender, Houdini, and Unreal Engine. We spoke with Nick about his journey as a creative and an entrepreneur, and we broke down the evolution of Greyscalegorilla. We talked about the importance of saying NO to distractions and even shared our secret weapon for success: daily challenges. Nick also shared strategies for building exceptional teams, prioritizing the right ideas, and envisioning the future of Greyscalegorilla.

Thanks to Polycam for sponsoring this episode! Here is that sweet promo code we promised you guys: type the code "BADDECISIONS" at https://poly.cam/ to save 30% on their Pro Plan.

If this podcast is helping you, please take 2 minutes to rate our podcast on Spotify or Apple Podcasts. It will help us SO MUCH, you have no idea lol

The Monday Meeting
Resources & Courses || February 26, 2024

The Monday Meeting

Feb 26, 2024 · 81:09


In this episode, we dive into the digital revolution of learning, showcasing the best online courses and resources that are reshaping how designers and motion artists sharpen their skills. From the basics of character design to advanced techniques in Cinema 4D and Houdini, join us as we explore courses that cater to every level of expertise. We also explore interjecting your personality into your work, generalists vs. specialists, and more!

SHOW NOTES:
• Monday Meeting Patreon
• Monday Meeting Discord
• Camp Mograph Australia
• Camp Mograph USA
• Character Design Fundamentals at School of Motion
• Rive
• Interactive Motion Era at Motion Design School
• Level Up Course at School of Motion
• Effectatron C4D Redshift Course
• Tim Clapham's LEARN C4D IN A DAY
• Paul Esteves on YouTube
• Motion Magic Rive Class
• Brads Art School on YouTube
• Lights, Camera, Render at School of Motion
• Producer Classes at Deducers
• Shea Lord's Courses
• Abeleal 3D
• Full Harbor
• Thrivecart
• Texturing in Adobe After Effects on Skillshare
• Ordinary Folk's Free Projects
• Ravie's AE Files
• School of Motion Holdframe
• Motion Punk on Patreon
• Blink My Brain
• Digital on Gumroad
• We Are Playgrounds Replay
• Motion Plus Design Watch Talks
• UE5 Automotive at Allan Portilho Academy
• Austin Taylor on LinkedIn
• Full Harbor Plus
• 25K Division 05 Tutorials
• PJ Richardson on LinkedIn
• Discord Invite Link
• Merkvilson on YouTube
• Airen 4D, AI Render Engine for Cinema 4D
• AI in European Union Regulations
• Karen X Cheng on Instagram
• Hirokazu Yokohara on Instagram
• Vallee Duhamel on Instagram
• JunkboxAI Post on Instagram

Mac Folklore Radio
Plan Be (1997)

Mac Folklore Radio

Feb 12, 2024 · 28:39


Original text by Henry Bortman and Jeff Pittelkau, MacUser, January 1997. How does BeOS measure up to System 7.5, and could it have become the next-generation Mac OS? The authors examine why Copland would not have been the crashproof operating system we had all hoped for.

Official BeOS demo video from … I'll have to guess 1998, the year the x86 port of BeOS shipped. An extremely rudimentary port of Cinema 4D is shown. Maxon appears to have dropped all plans to complete their BeOS port of Cinema 4D after Be decided to focus on the Internet appliance market in late 1999.

BeOS demo video intro music: Virtual (void) Remix from the Cotton Squares, a.k.a. Be Engineering. BeOS, it's The OS. More on the Cotton Squares. Standing In The Death Car!

AFAIK a pure software multitrack digital audio recording and editing suite never shipped for the BeOS. Otari's RADAR doesn't count since that was a hardware/software bundle, and an expensive one at that. Second version. If you can find a DAW for BeOS that was available in 2000 right before everything imploded, I'd like to hear from you. :-) I have a sample track from one but I don't think it was ever published. GrooveMachine doesn't count since it's geared towards short samples and phrases. BeBits lists Qua as a hard disk recorder, but the author's website states its audio functionality is also centered on short samples.

Printing support was not a priority for BeOS. Hey, this was supposed to be an OS for the multimedia future, not dead tree prepress! I tried the third-party BInkjet printer support package with a DeskJet 680C and it worked well.

Nitin Ganatra of iOS Contacts and Mail.app fame worked in Apple Developer Technical Support through the 1990s. He talked about working with developers and the perils of letting Apple marketing loose on Copland in the Debug podcast, episode 39.

The Cotton Squares/BeOS Demo Video: Where Are They Now?
• Baron Arnold: Danger (early 2000s, now: ???)
• Frank Boosman: AWS
• Jeff Bush: ???
• Jean-Louis Gassée: The Monday Note, Grateful Geek
• Ficus Kirkpatrick: Google, Meta
• Scott Paterson: making the world a better place
• Doug Wright: ???

Mountain Collective Podcast
EP 88: Journey of Daan Dominic

Mountain Collective Podcast

Jan 15, 2024 · 36:18


In this podcast episode, Mourad Bahrouch and Daan Dominic explore the world of 3D art, diving into the parallels between space and undersea visuals. Daan shares his journey from Cinema 4D to Houdini, highlighting the frustrations he faced and the allure of Houdini's procedural approach. The conversation touches on the role of AI in art, with Daan expressing both fascination and reservations. The hosts briefly discuss the potential impact of mixed reality on the creative process. Tune in for insights into the evolving landscape of digital art and the artistic journey.

Timestamps:
(00:00) Daan Dominic
(05:00) Transition from Cinema 4D to Houdini
(10:00) Exploring Space and Undersea in Art
(20:00) The Learning Curve of Houdini
(25:00) AI in Art
(30:00) The Potential of Mixed Reality and Future Visions

---
Send in a voice message: https://podcasters.spotify.com/pod/show/mountain-collective/message

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are running an end-of-year survey for our listeners! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here!

Before language models became all the rage in November 2022, image generation was the hottest space in AI (it was the subject of our first piece on Latent Space!). In our interview with Sharif Shameem from Lexica we talked through the launch of Stable Diffusion and the early days of that space. At the time, the toolkit was still pretty rudimentary: Lexica made it easy to search images, you had the AUTOMATIC1111 Web UI to generate locally, some HuggingFace spaces offered inference, and eventually DALL-E 2 arrived through OpenAI's platform, but there was not much beyond basic text-to-image workflows.

Today's guest, Suhail Doshi, is trying to solve this with Playground AI, an image editor reimagined with AI in mind. Some of the differences compared to traditional text-to-image workflows:

• Real-time preview rendering using consistency: as you change your prompt, you can see changes in real time before doing a final rendering of it.
• Style filtering: rather than having to prompt exactly how you'd like an image to look, you can pick from a whole range of filters, both from Playground's model as well as Stable Diffusion (like RealVis, Starlight XL, etc). We talk about this at 25:46 in the podcast.
• Expand prompt: similar to DALL-E 3, Playground will do some prompt tuning for you to get better results in generation. Unlike DALL-E 3, you can turn this off at any time if you are a prompting wizard.
• Image editing: after generation, you have tools like a magic eraser, inpainting pencil, etc. This makes it easier to do a full workflow in Playground rather than switching to another tool like Photoshop.

Outside of the product, they have also trained a new model from scratch, Playground v2, which is fully open source and open weights and allows for commercial usage. They benchmarked the model against SDXL across 1,000 prompts and found that humans preferred the Playground generation 70% of the time. They had similar results on PartiPrompts.

They also created a new benchmark, MJHQ-30K, for "aesthetic quality":

"We introduce a new benchmark, MJHQ-30K, for automatic evaluation of a model's aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality. We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category."

Suhail was pretty open in saying that Midjourney is currently the best product for image generation out there, and that's why they used it as the base for this benchmark: "I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately care: how close are you getting to the thing that people mostly agree with?" [00:23:47]

We also talked a lot about Suhail's founder journey, from starting Mixpanel in 2009, then going through YC again with Mighty, and eventually sunsetting that to pivot into Playground.
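To make the MJHQ-30K recipe quoted above concrete, here is a minimal sketch of how an FID-plus-CLIP-score evaluation might be wired up with torchmetrics. The folder layout, the load_images helper, and the placeholder prompts are our own assumptions for illustration, not part of the released benchmark.

```python
# Sketch of an MJHQ-30K-style evaluation: FID against a curated reference set
# to gauge aesthetic quality, plus CLIP score for image-text alignment.
from pathlib import Path

import torch
import torchvision.transforms.functional as TF
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore
from torchvision.io import ImageReadMode, read_image


def load_images(folder: str, size: int = 299) -> torch.Tensor:
    """Load a folder of PNGs as a uint8 (N, 3, size, size) batch."""
    paths = sorted(Path(folder).glob("*.png"))
    return torch.stack(
        [TF.resize(read_image(str(p), mode=ImageReadMode.RGB), [size, size]) for p in paths]
    )


# Hypothetical folders standing in for one of the benchmark's 10 categories.
real = load_images("mjhq_reference/people")   # high-quality curated set
fake = load_images("generated/people")        # your model's generations

# Lower FID against the curated set is read as better aesthetic quality.
fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# CLIP score guards against a model that ignores the prompt entirely.
clip = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
prompts = ["a portrait photo of a person"] * len(fake)  # placeholder prompts
print("CLIP score:", clip(fake, prompts).item())
```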
Enjoy!

Show Notes
• Suhail's Twitter
• "Starting my road to learn AI"
• Bill Gates book trip
• Playground
• Playground v2 Announcement
• $40M raise announcement
• "Running infra dev ops for 24 A100s"
• Mixpanel
• Mighty
• "I decided to stop working on Mighty"
• Fast.ai
• Civit

Timestamps
• [00:00:00] Intros
• [00:02:59] Being early in ML at Mixpanel
• [00:04:16] Pivoting from Mighty to Playground and focusing on generative AI
• [00:07:54] How DALL-E 2 inspired Mighty
• [00:09:19] Reimagining the graphics editor with AI
• [00:17:34] Training the Playground V2 model from scratch to advance generative graphics
• [00:21:11] Techniques used to improve Playground V2 like data filtering and model tuning
• [00:25:21] Releasing the MJHQ30K benchmark to evaluate generative models
• [00:30:35] The limitations of current models for detailed image editing tasks
• [00:34:06] Using post-generation user feedback to create better benchmarks
• [00:38:28] Concerns over potential misuse of powerful generative models
• [00:41:54] Rethinking the graphics editor user experience in the AI era
• [00:45:44] Integrating consistency models into Playground using preview rendering
• [00:47:23] Interacting with the Stable Diffusion LoRAs community
• [00:51:35] Running DevOps on A100s
• [00:53:12] Startup ideas?

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:15]

Swyx: Hey, and today in the studio we have Suhail Doshi, welcome. [00:00:18]

Suhail: Yeah, thanks. Thanks for having me. [00:00:20]

Swyx: So among many things, you're a CEO and co-founder of Mixpanel, and I think about three years ago you left to start Mighty, and more recently, I think about a year ago, transitioned into Playground, and you've just announced your new round. How do you like to be introduced beyond that? [00:00:34]

Suhail: Just founder of Playground is fine, yeah; prior co-founder and CEO of Mixpanel. [00:00:40]

Swyx: Yeah, awesome. I'd just like to touch on Mixpanel a little bit, because it's obviously one of the more successful analytics companies (we previously had Amplitude on), and I'm curious if you had any reflections on the interaction of that amount of data that people would want to use for AI. I don't know if there's still a part of you that stays in touch with that world. [00:00:59]

Suhail: Yeah, I mean, the short version is that maybe back in like 2015 or 2016, I don't really remember exactly, because it was a while ago, we had an ML team at Mixpanel, and I think this is when maybe deep learning or something really just started getting kind of exciting, and we were thinking that maybe given that we had such vast amounts of data, perhaps we could predict things. So we built two or three different features, I think we built a feature where we could predict whether users would churn from your product. We made a feature that could predict whether users would convert, we built a feature that could do anomaly detection, like if something occurred in your product, that was just very surprising, maybe a spike in traffic in a particular region, can we tell you that that happened? Because it's really hard to like know everything that's going on with your data, can we tell you something surprising about your data? And we tried all of these various features, most of it boiled down to just like, you know, using logistic regression, and it never quite seemed very groundbreaking in the end.
And so I think, you know, we had a four or five person ML team, and I think we never expanded it from there. And I did all these Fast.ai courses trying to learn about ML. And that was the-

Swyx: That's the first time you did Fast.ai? [00:02:12]

Suhail: Yeah, that was the first time I did Fast.ai. Yeah, I think I've done it now three times, maybe.

Swyx: Oh, okay. [00:02:13]

Suhail: I didn't know it was the third. No, no, just me reviewing it, it's maybe three times, but yeah. [00:02:16]

Swyx: You mentioned prediction, but honestly, like it's also just about the feedback, right? The quality of feedback from users, I think it's useful for anyone building AI applications. [00:02:25]

Suhail: Yeah. Yeah, I think I haven't spent a lot of time thinking about Mixpanel because it's been a long time, but sometimes I'm like, oh, I wonder what we could do now. And then I kind of like move on to whatever I'm working on, but things have changed significantly since. [00:02:39]

Swyx: And then maybe we'll touch on Mighty a little bit. Mighty was very, very bold. My framing of it was, you will run our browsers for us because everyone has too many tabs open. I have too many tabs open and slowing down your machines that you can do it better for us in a centralized data center. [00:02:51]

Suhail: Yeah, we were first trying to make a browser that we would stream from a data center to your computer at extremely low latency, but the real objective wasn't trying to make a browser or anything like that. The real objective was to try to make a new kind of computer. And the thought was just that like, you know, we have these computers in front of us today and we upgrade them or they run out of RAM or they don't have enough RAM or not enough disk or, you know, there's some limitation with our computers, perhaps like data locality is a problem. Why do I need to think about upgrading my computer ever? And so, you know, we just had to kind of observe that like, well, actually it seems like a lot of applications are just now in the browser, you know, it's like how many real desktop applications do we use relative to the number of applications we use in the browser? So it's just this realization that actually like, you know, the browser was effectively becoming more or less our operating system over time. And so then that's why we kind of decided to go, hmm, maybe we can stream the browser. Unfortunately, the idea did not work for a couple of different reasons, but the objective was to try to make a new kind of computer. [00:03:50]

Swyx: Yeah, very, very bold. [00:03:51]

Alessio: Yeah, and I was there at YC Demo Day when you first announced it. It was, I think, the last or one of the last in-person ones, at Pier 34 in Mission Bay. How do you think about that now, when everybody wants to put some of these models in people's machines and some of them want to stream them in? Do you think there's maybe another wave of the same problem? Before it was like browser apps too slow, now it's like models too slow to run on device? [00:04:16]

Suhail: Yeah. I mean, I've obviously pivoted away from Mighty, but a lot of what I somewhat believed at Mighty, maybe why I'm so excited about AI and what's happening, a lot of what Mighty was about was like moving compute somewhere else, right? Right now, applications, they get limited quantities of memory, disk, networking, whatever your home network has, et cetera. You know, what if these applications could somehow, if we could shift compute, and then these applications have vastly more compute than they do today. Right now it's just like client backend services, but you know, what if we could change the shape of how applications could interact with things? And it's changed my thinking. In some ways, AI has like a bit of a continuation of my belief that like perhaps we can really shift compute somewhere else. One of the problems with Mighty was that JavaScript is single-threaded in the browser. And what we learned, you know, the reason why we kind of abandoned Mighty was because I didn't believe we could make a new kind of computer. We could have made some kind of enterprise business, probably it could have made maybe a lot of money, but it wasn't going to be what I hoped it was going to be. And so once I realized that most of a web app is just going to be single-threaded JavaScript, then the only thing you could do, notwithstanding changing JavaScript, which is a fool's errand most likely, was make a better CPU, right? And there's like three CPU manufacturers, two of which sell, you know, big ones, you know, AMD, Intel, and then of course like Apple made the M1. And it's not like single-threaded CPU core performance, single-core performance was increasing very fast, it's plateauing rapidly. And even these different companies were not doing as good of a job, you know, sort of with the continuation of Moore's law. But what happened in AI was that you got like, if you think of the AI model as like a computer program, like just like a compiled computer program, it is literally built and designed to do massive parallel computations. And so if you could take like the universal approximation theorem to its like kind of logical complete point, you know, you're like, wow, I can make computation happen really rapidly and parallel somewhere else, you know, so you end up with these like really amazing models that can like do anything. It just turned out like perhaps the new kind of computer would just simply be shifted, you know, into these like really amazing AI models in reality. Yeah. [00:06:30]

Swyx: Like I think Andrej Karpathy has always been, has been making a lot of analogies with the LLM OS. [00:06:34]

Suhail: I saw his video and I watched that, you know, maybe two weeks ago or something like that. I was like, oh man, I very much resonate with this like idea. [00:06:41]

Swyx: Why didn't I see this three years ago? [00:06:43]

Suhail: Yeah. I think, I think there still will be, you know, local models and then there'll be these very large models that have to be run in data centers. I think it just depends on kind of like the right tool for the job, like any engineer would probably care about. But I think that, you know, by and large, like if the models continue to kind of keep getting bigger, you're always going to be wondering whether you should use the big thing or the small, you know, the tiny little model. And it might just depend on like, you know, do you need 30 FPS or 60 FPS? Maybe that would be hard to do, you know, over a network. [00:07:13]

Swyx: You tackled a much harder problem latency-wise than the AI models actually require. Yeah. [00:07:18]

Suhail: Yeah. You can do quite well. You can do quite well. We definitely did 30 FPS video streaming, did very crazy things to make that work. So I'm actually quite bullish on the kinds of things you can do with networking. [00:07:30]

Swyx: Maybe someday you'll come back to that at some point. But so for those that don't know, you're very transparent on Twitter. Very good to follow you just to learn your insights. And you actually published a postmortem on Mighty that people can read up on if they're willing to. So there was a bit of an overlap. You started exploring the AI stuff in June 2022, which is when you started saying like, I'm taking Fast.ai again. Maybe, was there more context around that? [00:07:54]

Suhail: Yeah. I think I was kind of like waiting for the team at Mighty to finish up, you know, something. And I was like, okay, well, what can I do? I guess I will make some kind of like address bar predictor in the browser. So we had, you know, we had forked Chrome and Chromium. And I was like, you know, one thing that's kind of lame is that like this browser should be like a lot better at predicting what I might do, where I might want to go. It struck me as really odd that, you know, Chrome had very little AI actually or ML inside this browser. For a company like Google, you'd think there's a lot. Code is actually just very, you know, it's just a bunch of if-then statements, is more or less the address bar. So it seemed like a pretty big opportunity. And that's also where a lot of people interact with the browser. So, you know, long story short, I was like, hmm, I wonder what I could build here. So I started to take some AI courses and review the material again and get back to figuring it out. But I think that was somewhat serendipitous because right around April was, I think, a very big watershed moment in AI because that's when DALL-E 2 came out. And I think that was the first truly big viral moment for generative AI. [00:08:59]

Swyx: Because of the avocado chair. [00:09:01]

Suhail: Yeah, exactly. [00:09:02]

Swyx: It wasn't as big for me as Stable Diffusion. [00:09:04]

Suhail: Really? [00:09:05]

Swyx: Yeah, I don't know. DALL-E was like, all right, that's cool. [00:09:07]

Suhail: I don't know. Yeah. [00:09:09]

Swyx: I mean, they had some flashy videos, but it didn't really register. [00:09:13]

Suhail: That moment of images was just such a viral novel moment. I think it just blew people's minds. Yeah. [00:09:19]

Swyx: I mean, it's the first time I encountered Sam Altman, because they had this DALL-E 2 hackathon and they opened up the OpenAI office for developers to walk in, back when it wasn't as much of a security issue as it is today. I see. Maybe take us through the journey to decide to pivot into this and also choosing images. Obviously, you were inspired by DALL-E, but there could be any number of AI companies and businesses that you could start, and why this one, right? [00:09:45]

Suhail: Yeah. So I think at that time of Mighty, OpenAI was not quite as popular as it is all of a sudden now these days, but back then they had a lot more bandwidth to kind of help anybody. And so we had been talking with the team there around trying to see if we could do really fast low latency address bar prediction with GPT-3 and 3.5 and that kind of thing. And so we were sort of figuring out how could we make that low latency. I think that just being able to talk to them and kind of being involved gave me a bird's eye view into a bunch of things that started to happen. First was the DALL-E 2 moment, but then Stable Diffusion came out and that was a big moment for me as well. And I remember just kind of like sitting up one night thinking, I was like, you know, what are the kinds of companies one could build? Like what matters right now? One thing that I observed is that I find a lot of inspiration when I'm working in a field in something and then I can identify a bunch of problems. Like for Mixpanel, I was an intern at a company and I just noticed that they were doing all this data analysis. And so I thought, hmm, I wonder if I could make a product and then maybe they would use it. And in this case, you know, the same thing kind of occurred. It was like, okay, there are a bunch of like infrastructure companies that put a model up and then you can use their API, like Replicate is a really good example of that. There are a bunch of companies that are like helping you with training, model optimization, Mosaic at the time, and probably still, you know, was doing stuff like that. So I just started listing out like every category of everything, of every company that was doing something interesting. I started listing out like Weights & Biases. I was like, oh man, Weights & Biases is like this great company. Do I want to compete with that company? I might be really good at competing with that company because of Mixpanel, because it's so much of like analysis. But I was like, no, I don't want to do anything related to that. That would, I think that would be too boring now at this point. So I started to list out all these ideas and one thing I observed was that at OpenAI, they had like a playground for GPT-3, right? All it was is just like a text box more or less. And then there were some settings on the right, like temperature and whatever. [00:11:41]

Swyx: Top K. [00:11:42]

Suhail: Yeah, top K. You know, what's your end stop sequence? I mean, that was like their product before GPT, you know, really difficult to use, but fun if you're like an engineer. And I just noticed that their product kind of was evolving a little bit where the interface kind of was getting a little bit more complex. They had like a way where you could like generate something in the middle of a sentence and all those kinds of things. And I just thought to myself, I was like, everything is just like this text box and you generate something and that's about it. And Stable Diffusion had kind of come out and it was all like Hugging Face and code. Nobody was really building any UI. And so I had this kind of thing where I wrote prompt dash like question mark in my notes and I didn't know what was like the product for that at the time. I mean, it seems kind of trite now, but I just like wrote prompt. What's the thing for that? Manager. Prompt manager. Do you organize them? Like, do you like have a UI that can play with them? Yeah. Like a library. What would you make? And so then, of course, then you thought about what would the modalities be given that? How would you build a UI for each kind of modality? And so there are a couple of people working on some pretty cool things. And I basically chose graphics because it seemed like the most obvious place where you could build a really powerful, complex UI. That's not just only typing a box. It would very much evolve beyond that. Like what would be the best thing for something that's visual? Probably something visual. Yeah. I think that just that progression kind of happened and it just seemed like there was a lot of effort going into language, but not a lot of effort going into graphics. And then maybe the very last thing was, I think I was talking to Aditya Ramesh, who was the co-creator of DALL-E 2, and Sam. And I just kind of went to these guys and I was just like, hey, are you going to make like a UI for this thing? Like a true UI? Are you going to go for this? Are you going to make a product? For DALL-E. Yeah. For DALL-E. Yeah. Are you going to do anything here? Because if you are going to do it, just let me know and I will stop and I'll go do something else. But if you're not going to do anything, I'll just do it. And so we had a couple of conversations around what that would look like. And then I think ultimately they decided that they were going to focus on language primarily. And I just felt like it was going to be very underinvested in. Yes. [00:13:46]

Swyx: There's that sort of underinvestment from OpenAI, but also it's a different type of customer than you're used to, presumably, you know, and Mixpanel is very good at selling to B2B and developers will figure on you or not. Yeah. Was that not a concern? [00:14:00]

Suhail: Well, not so much, because I think that, you know, right now I would say graphics is in this very nascent phase. Like most of the customers are just like hobbyists, right? Yeah. Like it's a little bit of like a novel toy as opposed to being this like very high utility thing. But I think ultimately, if you believe that you could make it very high utility, probably the next customers will end up being B2B. It'll probably not be like a consumer. There will certainly be a variation of this idea that's in consumer. But if your quest is to kind of make like something that surpasses human ability for graphics, like ultimately it will end up being used for business. So I think it's maybe more of a progression. In fact, for me, it's maybe more like Mixpanel started out as SMB and then very much like ended up starting to grow up towards enterprise. So for me, I think it will be a very similar progression. But yeah, I mean, the reason why I was excited about it is because it was a creative tool. I make music and it's AI. It's like something that I know I could stay up till three o'clock in the morning doing. Those are kind of like very simple bars for me. [00:14:56]

Alessio: So you mentioned DALL-E, Stable Diffusion. You just had Playground V2 come out two days ago. Yeah, two days ago. [00:15:02]

Suhail: Two days ago. [00:15:03]

Alessio: This is a model you train completely from scratch. So it's not a cheap fine-tune on something. You open source everything, including the weights. Why did you decide to do it? I know you supported Stable Diffusion XL in Playground before, right? Yep. What made you want to come up with V2, and maybe some of the interesting, you know, technical research work you've done? [00:15:24]

Suhail: Yeah. So I think that we continue to feel like graphics and these foundation models for anything really related to pixels, but also definitely images, continues to be very underinvested. It feels a little like graphics is in like this GPT-2 moment, right? Like even GPT-3, even when GPT-3 came out, it was exciting, but it was like, what are you going to use this for? Yeah, we'll do some text classification and some semantic analysis and maybe it'll sometimes like make a summary of something and it'll hallucinate. But no one really had like a very significant like business application for GPT-3. And in images, we're kind of stuck in the same place. We're kind of like, okay, I write this thing in a box and I get some cool piece of artwork and the hands are kind of messed up and sometimes the eyes are a little weird. Maybe I'll use it for a blog post, you know, that kind of thing. The utility feels so limited. And so, you know, and then we, you sort of look at Stable Diffusion and we definitely use that model in our product and our users like it and use it and love it and enjoy it, but it hasn't gone nearly far enough. So we were kind of faced with the choice of, you know, do we wait for progress to occur or do we make that progress happen? So yeah, we kind of embarked on a plan to just decide to go train these things from scratch. And I think the community has given us so much. The community for Stable Diffusion I think is one of the most vibrant communities on the internet. It's like amazing. It feels like, I hope this is what like Homebrew Club felt like when computers like showed up, because it's like amazing what that community will do and it moves so fast. I've never seen anything in my life and heard other people's stories around this where an academic research paper comes out and then like two days later, someone has sample code for it. And then two days later, there's a model. And then two days later, it's like in nine products, you know, they're all competing with each other. It's incredible to see like math symbols on an academic paper go to well-designed features in a product. So I think the community has done so much. So I think we wanted to give back to the community kind of on our way. Certainly we would train a better model than what we gave out on Tuesday, but we definitely felt like there needs to be some kind of progress in these open source models. The last kind of milestone was in July when Stable Diffusion XL came out, but there hasn't been anything really since. Right. [00:17:34]

Swyx: And there's XL Turbo now. [00:17:35]

Suhail: Well, XL Turbo is like this distilled model, right? So it's like lower quality, but fast. You have to decide, you know, what your trade-off is there. [00:17:42]

Swyx: It's also a consistency model. [00:17:43]

Suhail: I don't think it's a consistency model. It's like they did like a different thing. Yeah. I think it's like, I don't want to get quoted for this, but it's like something called ad- like adversarial or something. [00:17:52]

Swyx: That's exactly right. [00:17:53]

Suhail: I've read something about that. Maybe it's like closer to GANs or something, but I didn't really read the full paper. But yeah, there hasn't been quite enough progress in terms of, you know, there's no multitask image model. You know, the closest thing would be something called like EmuEdit, but there's no model for that. It's just a paper that's within Meta. So we did that and we also gave out pre-trained weights, which is very rare. Usually you just get the aligned model and then you have to like see if you can do anything with it. So we actually gave out, there's like a 256 pixel pre-trained stage and a 512. And we did that for academic research, because we come across people all the time in academia, they have access to like one A100 or eight at best. And so if we can give them kind of like a 512 pre-trained model, our hope is that there'll be interesting novel research that occurs from that. [00:18:38]

Swyx: What research do you want to happen? [00:18:39]

Suhail: I would love to see more research around things that users care about, which tend to be things like character consistency. [00:18:45]

Swyx: Between frames? [00:18:46]

Suhail: More like if you have like a face. Yeah, yeah. Basically between frames, but more just like, you know, you have your face and it's in one image and then you want it to be like in another. And users are very particular and sensitive to faces changing, because we know we're trained on faces as humans. Not seeing a lot of innovation, enough innovation, around multitask editing. You know, there are two things like InstructPix2Pix and then the EmuEdit paper that are maybe very interesting, but we certainly are not pushing the fold on that in that regard. All kinds of things like around that rotation, you know, being able to keep coherence across images, style transfer is still very limited. Just even reasoning around images, you know, what's going on in an image, that kind of thing. Things are still very, very underpowered, very nascent. So therefore the utility is very, very limited. [00:19:32]

Alessio: On the 1K Prompt Benchmark, you are 2.5x preferred over Stable Diffusion XL. How do you get there? Is it better images in the training corpus? Can you maybe talk through the improvements in the model? [00:19:44]

Suhail: I think they're still very early on in the recipe, but I think it's a lot of like little things and, you know, every now and then there are some big important things, like certainly your data quality is really, really important. So we spend a lot of time thinking about that. But I would say it's a lot of things that you kind of clean up along the way as you train your model. Everything from captions to the data that you align with after pre-train to how you're picking your data sets, how you filter your data sets. I feel like there's a lot of work in AI that doesn't really feel like AI. It just really feels like just data set filtering and systems engineering and just like, you know, and the recipe is all there, but it's like a lot of extra work to do that. I think we plan to do a Playground V2.1, maybe either by the end of the year or early next year. And we're just like watching what the community does with the model. And then we're just going to take a lot of the things that they're unhappy about and just like fix them. You know, so for example, like maybe the eyes of people in an image don't feel right. They feel like they're a little misshapen or they're kind of blurry feeling. That's something that we already know we want to fix. So I think in that case, it's going to be about data quality. Or maybe you want to improve the kind of the dynamic range of color. You know, we want to make sure that that's like got a good range in any image. So what technique can we use there? There's different things like offset noise, pyramid noise, terminal zero SNR, like there are all these various interesting things that you can do. So I think it's like a lot of just like tricks. Some are tricks, some are data, and some is just like cleaning. [00:21:11]
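The "offset noise" trick Suhail mentions comes up often in open-source diffusion fine-tuning recipes, so here is a minimal sketch of the common formulation: standard Gaussian noise plus a small per-channel constant. The 0.1 strength and the dummy latent batch are illustrative assumptions, not Playground's actual recipe.

```python
# Sketch of the common "offset noise" trick for diffusion training.
import torch


def offset_noise(latents: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Gaussian noise plus a per-(sample, channel) DC offset.

    The constant offset lets the model learn to shift an image's overall
    brightness, which is one way practitioners improve dynamic range
    (very dark or very bright images).
    """
    noise = torch.randn_like(latents)
    # One offset value per sample and channel, broadcast over H and W.
    offset = torch.randn(
        latents.shape[0], latents.shape[1], 1, 1,
        device=latents.device, dtype=latents.dtype,
    )
    return noise + strength * offset


# Usage in a training step (dummy latents standing in for a VAE-encoded batch):
latents = torch.randn(4, 4, 64, 64)
noise = offset_noise(latents)
```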
So maybe you would use a pixel-based model, perhaps. I think we've talked to people, everyone from like Rombach to various people; Rombach trained Stable Diffusion. I think there's like a big question around the architecture of these things. It's still kind of unknown, right? Like we've got transformers and we've got like a GPT architecture model, but then there's this like weird thing that's also seemingly working with diffusion. And so, you know, are we going to use vision transformers? Are we going to move to pixel-based models? Is there a different kind of architecture? I don't think there have been enough experiments. Still? Oh my God. [00:22:21]Swyx: Yeah. [00:22:22]Suhail: That's surprising. I think it's very computationally expensive to do a pipeline model where you're like fixing the eyes and you're fixing the mouth and you're fixing the hands. [00:22:29]Swyx: That's what everyone does as far as I understand. [00:22:31]Suhail: I'm not exactly sure what you mean, but if you mean like you get an image and then you will like make another model specifically to fix a face, that's fairly computationally expensive. And I think it's probably not the right way. Yeah. And it doesn't generalize very well. Now you have to pick all these different things. [00:22:45]Swyx: Yeah. You're just kind of glomming things on together. Yeah. Like when I look at AI artists, like that's what they do. [00:22:50]Suhail: Ah, yeah, yeah, yeah. They'll do things like, you know, I think a lot of AI artists will do ControlNet tiling to do kind of generative upscaling of all these different pieces of the image. Yeah. And I think these are all just like, they're all hacks ultimately in the end. I mean, to me, it's like, let's go back to where we were just three years, four years ago with where deep learning was at and where language was at, you know. It's the same thing. It's like we were like, okay, well, I'll just train these very narrow models to try to do these things and kind of ensemble them or pipeline them to try to get to a best-in-class result. And here we are with like where the models are gigantic and like very capable of solving huge amounts of tasks when given like lots of great data. [00:23:28]Alessio: You also released a new benchmark called MJHQ-30K for automatic evaluation of a model's aesthetic quality. I have one question. The data set that you use for the benchmark is from Midjourney. Yes. You have 10 categories. How do you think about the Playground model, Midjourney, like, are you competitors? [00:23:47]Suhail: There are a lot of people, a lot of people in research, they like to compare themselves to something they know they can beat, right? Maybe this is why it can be helpful to not be a researcher sometimes. Like, I'm not trained as a researcher, I don't have a PhD in anything AI related, for example. But I think if you care about products and you care about your users, then the most important thing that you want to figure out is like, everyone has to acknowledge that Midjourney is very good. They are the best at this thing. I'm happy to admit that. I have no problem admitting that. It's easy. It's very visual to tell. So I think it's incumbent on us to try to compare ourselves to the thing that's best, even if we lose, even if we're not the best. At some point, if we are able to surpass Midjourney, then we only have ourselves to compare ourselves to.
But at first blush, I think it's worth comparing yourself to maybe the best thing and trying to find a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately care about how close you are getting to the thing that people mostly agree is best. So we put out that benchmark for no other reason than to say like, this seems like a worthy thing for us to at least try, for people to try to get to. And then if we surpass it, great, we'll come up with another one. [00:25:06]Alessio: Yeah, no, that's awesome. And you killed Stable Diffusion XL and everything. In the benchmark chart, it says Playground V2 1024 pixel dash aesthetic. Do you have kind of like, yeah, style fine-tunes or like what's the dash aesthetic for? [00:25:21]Suhail: We debated this, maybe we named it wrong or something, but we were like, how do we help people realize the model that's aligned versus the models that weren't? Because we gave out pre-trained models, we didn't want people to like use those. So that's why they're called base. And then the aesthetic model, yeah, we wanted people to pick up the thing that makes things pretty. Who wouldn't want the thing that's aesthetic? But if there's a better name, we're definitely open to feedback. No, no, that's cool. [00:25:46]Alessio: I was using the product. You also have the style filter and you have all these different styles. And it seems like the styles are tied to the model. So there's some like SDXL styles, there's some Playground V2 styles. Can you maybe give listeners an overview of how that works? Because in language, there's not this idea of like style, right? Versus like in vision models, there is, and you cannot get certain styles in different [00:26:11]Suhail: models. [00:26:12]Alessio: So how do styles emerge and how do you categorize them and find them? [00:26:15]Suhail: Yeah, I mean, it's so fun having a community where people are just trying a model. Like it's only been two days for Playground V2. And we actually don't know what the model's capable of and not capable of. You know, we certainly see problems with it. But we have yet to see what emergent behavior is. I mean, we've just sort of discovered that it takes about a week before you start to see new things. I think a lot of that style kind of emerges after that week, where you start to see, you know, there are some styles that are very well known to us, like maybe pixel art is a well-known style. Photorealism is another one that's well known to us. But there are some styles that cannot be easily named. You know, it's not as simple as like, okay, that's an anime style. It's very visual. And in the end, you end up making up the name for what that style represents. And so the community kind of shapes itself around these different things. And so if anyone is into Stable Diffusion and into building anything with graphics and stuff with these models, you know, you might have heard of like ProtoVision or DreamShaper, some of these weird names, but they're just invented by these authors. But they have a sort of je ne sais quoi that, you know, appeals to users. [00:27:26]Swyx: Because it like roughly embeds to what you want. [00:27:29]Suhail: I guess so. I mean, it's like, you know, there's one of my favorite ones that's fine-tuned. It's not made by us.
It's called like Starlight XL. It's just this beautiful model. It's got really great color contrast and visual elements. And the users love it. I love it. And it's so hard. I think that's like a very big open question with graphics that I'm not totally sure how we'll solve. I don't know. It's an evolving situation too, because styles get boring, right? They get fatigued. It's like listening to the same style of pop song. I try to relate graphics a little bit to music, because I think it gives you a little bit of a different shape to things. It's not as if we just have pop music, rap music and country music; the EDM genre alone has sub-genres. And I think that's very true in graphics and painting and art and anything that we're doing. There's just these sub-genres, even if we can't quite always name them. But I think they are emergent from the community, which is why we're always so happy to work with the community. [00:28:26]Swyx: That is a struggle. You know, coming back to this, like B2B versus B2C thing, B2C, you're going to have a huge amount of diversity and then it's going to reduce as you get towards more sort of B2B type use cases. I'm making this up here. So like you might be optimizing for a thing that you may eventually not need. [00:28:42]Suhail: Yeah, possibly. Yeah, possibly. I think a simple thing with startups is that I worry sometimes that by trying to be overly ambitious and really scrutinizing what something is in its most nascent phase, you miss the most ambitious thing you could have done. Like just having very basic curiosity with something very small can kind of lead you to something amazing. Like Einstein definitely did that. And then he, you know, basically won all the prizes and got everything he wanted, and then kind of didn't, really. He dismissed quantum and then just kind of was still searching, you know, for the unifying theory. And he had this quest. I think that happens a lot with Nobel Prize people. I think there's a term for it that I forget. I actually wanted to go after a toy almost intentionally, so long as I could see, could imagine, that it would lead to something very, very large later. Like I said, it's very hobbyist, but you need to start somewhere. You need to start with something that has a big gravitational pull, even if these hobbyists aren't likely to be the people that, you know, have a way to monetize it or whatever; they're doing it for fun. So there's something there that I think is really important. But I agree with you that, you know, in time we will absolutely focus on more utilitarian things, like things that are more related to editing features that are much harder. And so I think a very simple use case is just, you know, I'm not a graphics designer. It seems very simple that, if we could give you the ability to do really complex graphics without skill, wouldn't you want that? You know, like my wife the other day, you know, said, I wish Playground was better. When are you guys going to have a feature where we could make my son, his name's Devin, smile when he was not smiling in the picture for the holiday card? Right. You know, just being able to highlight his mouth and just say like, make him smile.
Like why can't we do that with high fidelity and coherence, little things like that, all the way to putting you in completely different scenarios. [00:30:35]Swyx: Is that true? Can we not do that with inpainting? [00:30:37]Suhail: You can do it with inpainting, but the quality is just so bad. Yeah. It's just really terrible quality. You know, it's like you'll do it five times and it'll still kind of look crooked or just artifacted. Part of it's like, you know, the lips on the face, there's so little information there. It's so small that the models really struggle with it. Yeah. [00:30:55]Swyx: Make the picture smaller and you don't see it. That's my trick. I don't know. [00:30:59]Suhail: Yeah. Yeah. That's true. Or, you know, you could take that region and make it really big and then say it's a mouth and then shrink it. It feels like you're wrestling with it more than it's doing something that kind of surprises you. [00:31:12]Swyx: Yeah. It feels like you are very much the internal tastemaker, like you carry in your head this vision for what a good art model should look like. Do you find it hard to communicate it to your team and other people? Just because it's obviously hard to put into words, like we just said. [00:31:26]Suhail: Yeah. It's very hard to explain. Images have such high bitrate compared to just words, and we don't have enough words to describe these things. It's not terribly difficult. I think everyone on the team, if they don't have good kind of judgment, taste, or like an eye for some of these things, they're steadily building it because they have no choice. Right. So in that realm, I don't worry too much, actually. Like everyone is kind of learning to get the eye, is what I would call it. But I also have, you know, my own narrow taste. Like I don't represent the whole population either. [00:31:59]Swyx: When you benchmark models, you know, like this benchmark we're talking about, we use FID. Yeah. Fréchet Inception Distance. OK. That's one measure. But like it doesn't capture anything you just said about smiles. [00:32:08]Suhail: Yeah. FID is generally a bad metric. It's good up to a point and then it kind of is irrelevant. Yeah. [00:32:14]Swyx: And then so are there any other metrics that you like apart from vibes? I'm always looking for alternatives to vibes because vibes don't scale, you know. [00:32:22]Suhail: You know, it might be fun to kind of talk about this because it's actually kind of fresh. So up till now, we haven't needed to do a ton of benchmarking because we hadn't trained our own model, and now we have. So now what? What does that mean? How do we evaluate it? And, you know, we're kind of living with the last 48, 72 hours of going, did the way that we benchmark actually succeed? [00:32:43]Swyx: Did it deliver? [00:32:44]Suhail: Right. You know, like I think Gemini just came out. They just put out a bunch of benchmarks. But all these benchmarks are just an approximation of how you think it's going to end up with real-world performance. And I think that's very fascinating to me. So if you fake that benchmark, you'll still end up in a really bad scenario at the end of the day. And so, you know, one of the benchmarks we did was we kind of curated like a thousand prompts. And I think that's kind of what we published in our blog post, you know, of all these tasks, a lot of them curated by our team, where we know the models all suck at it.
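For context on the metric being dismissed here: FID compares feature statistics of generated images against real ones, which is why it cannot see a bad smile or poor lighting. A minimal sketch using the torchmetrics package (assuming it and its torch-fidelity dependency are installed; the random tensors below are placeholders for real image batches):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # 2048-d Inception-v3 features

# Placeholder batches: uint8 images in NCHW format, values in [0, 255].
real_images = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(fid.compute())  # lower is better; says nothing about smiles or lighting
```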
Like my favorite prompt that no model is really capable of is a horse riding an astronaut, the inverse one. And it's really, really hard to do. [00:33:22]Swyx: Not in the data. [00:33:23]Suhail: You know, another one is like a giraffe underneath a microwave. How does that work? Right. There are so many of these little funny ones. We do. We have prompts that are just misspellings of things. Yeah. We'll figure out if the models will figure it out. [00:33:36]Swyx: They should embed to the same space. [00:33:39]Suhail: Yeah. And just like all these very interesting weirdo things. And so we have so many of these and then we kind of evaluate whether the models are any good at it. And the reality is that they're all bad at it. And so then you're just picking the most aesthetic image. We're still at the beginning of building the best benchmark we can that aligns most with just user happiness, I think, because we're not, like, putting these in papers and trying to win, you know, I don't know, awards at ICCV or something, if they have awards. You could. [00:34:05]Swyx: That's absolutely a valid strategy. [00:34:06]Suhail: Yeah, you could. But I don't think it would correlate necessarily with the impact we want to have on humanity. I think we're still evolving whatever our benchmarks are. So the first benchmark was just very difficult tasks that we know the models are bad at. Can we come up with a thousand of these, whether they're hand-rated and some of them are generated? And then can we ask the users, like, how do we do? And then we wanted to use a benchmark like PartiPrompts. We mostly did that so people in academia could measure their models against ours versus others. But yeah, I mean, FID is pretty bad. And I think in terms of vibes, it's like you put out the model and then you try to see what users make. And I think my sense is that we're going to take all the things that we notice users are failing at and try to find new ways to measure that, whether that's like a smile or, you know, color contrast or lighting. One benefit of Playground is that we have users making millions of images every single day. And so we can just ask them for post-generation feedback. Yeah, we can just ask them. We can just say, like, how good was the lighting here? How was the subject? How was the background? [00:35:06]Swyx: Like a proper form of like, it's just like you make it, you come to our site, you make [00:35:10]Suhail: an image and then we say, and then maybe randomly you just say, hey, you know, like, how was the color and contrast of this image? And you say it was not very good, just tell us. So I think we can get tens of thousands of these evaluations every single day to truly measure real-world performance as opposed to just benchmark performance. I would like to publish hopefully next year. I think we will try to publish a benchmark that anyone could use, that we evaluate ourselves on and that other people can, that we think does a good job of approximating real-world performance, because we've tried it and done it and noticed that it did. Yeah. I think we will do that. [00:35:45]Swyx: I personally have a few categories that I consider special. You know, you have like animals, art, fashion, food. There are some categories which I consider like a different tier of image. Top among them is text in images. How do you think about that?
So one of the big wow moments for me, something I've been looking out for the entire year, is just the progress of text in images. Like, can you write in an image? Yeah. And Ideogram came out recently, which had decent but not perfect text in images. DALL-E 3 had improved some, and all they said in their paper was that they just included more text in the data set and it just worked. I was like, that's just lazy. But anyway, do you care about that? Because I don't see any of that in like your samples. Yeah, yeah. [00:36:27]Suhail: The V2 model was mostly focused on image quality versus the feature of text synthesis. [00:36:33]Swyx: Well, as a business user, I care a lot about that. [00:36:35]Suhail: Yeah. Yeah. I'm very excited about text synthesis. And yeah, I think Ideogram has done a good job of it, maybe the best job. DALL-E has like a hit rate. Yes. You know, like sometimes it's Egyptian letters. Yeah. I'm very excited about text synthesis. You know, I don't have much to say on it just yet. You know, you don't want just text effects. I think where this has to go is, it has to be like you could write little tiny pieces of text, like on a milk carton, that's maybe not even the focal point of a scene. I think that's a very hard task that, you know, if you could do something like that, then there's a lot of other possibilities. Well, you don't have to zero-shot it. [00:37:09]Swyx: You can just be like here and focus on this. [00:37:12]Suhail: Sure. Yeah, yeah. Definitely. Yeah. [00:37:16]Swyx: Yeah. So I think text synthesis would be very exciting. I'll also flag that Max Woolf, minimaxir, whose work you must have come across. He's done a lot of stuff about using like logo masks that then map onto food and vegetables. And it looks like text, which can be pretty fun. [00:37:29]Suhail: That's the wonderful thing about the open source community, is that you get things like ControlNet, and then you see all these people do these just amazing things with ControlNet. And then you wonder, I think from our point of view, we sort of go, that's really wonderful, but how do we end up with a unified model that can do that? What are the bottlenecks? What are the issues? The community ultimately has very limited resources. And so they need these kinds of workaround research ideas to get there. But yeah. [00:37:55]Swyx: Are techniques like ControlNet portable to your architecture? [00:37:58]Suhail: Definitely. Yeah. We kept Playground V2 exactly the same as SDXL. Not out of laziness, but just because we knew that the community already had tools. You know, all you have to do is maybe change a string in your code and then, you know, retrain a ControlNet for it. So it was very intentional to do that. We didn't want to fragment the community with different architectures. Yeah. [00:38:16]Swyx: So basically, I'm going to go over three more categories. One is UIs, like app UIs, like mock UIs. Another is not safe for work, and then copyrighted stuff. I don't know if you care to comment on any of those. [00:38:28]Suhail: I think the NSFW kind of safety stuff is really important. I kind of think that one of the biggest risks kind of going into maybe the U.S. election year will probably be very interrelated with graphics, audio, video. I think it's going to be very hard to explain, you know, to a family relative who's not kind of in our world.
And our world is like sometimes very, you know, we think it's very big, but it's very tiny compared to the rest of the world. Some people, like, there's still lots of humanity who have no idea what ChatGPT is. And I think it's going to be very hard to explain, you know, to your uncle, aunt, whoever, you know, hey, I saw President Biden say this thing on a video, you know, I can't believe, you know, he said that. I think that's going to be a very troubling thing going into the world next year, the year after. [00:39:12]Swyx: That's more like a risk thing, like deepfakes, faking, political faking. But there's a lot of studies on how for most businesses, you don't want to train on not-safe-for-work images, except that it makes you really good at bodies. [00:39:24]Suhail: Personally, we filter out NSFW types of images in our data set so that our safety filter stuff doesn't have to work as hard. [00:39:32]Swyx: But you've heard this argument that not-safe-for-work images are very good at human anatomy, which you do want to be good at. [00:39:38]Suhail: It's not necessarily a bad thing to train on that data. It's more about how you go and use it. That's why I was kind of talking about safety, you know, in part, because there are very terrible things that can happen in the world. If you have an extremely powerful graphics model, you know, suddenly you can kind of imagine, you know, now if you can generate nudes and then you could do very character-consistent things with faces, like what does that lead to? Yeah. And so I tend to think more about what occurs after that, right? Even if you train on, let's say, you know, nude data, if it does something to kind of help, there's nothing wrong with the human anatomy; it's very valid for a model to learn that. But then it's kind of like, how does that get used? And, you know, I won't bring up all of the very, very unsavory, terrible things that we see on a daily basis on the site, but I think it's more about what occurs. And so we, you know, we just recently did a big sprint on safety. It's very difficult with graphics and art, right? Because there is tasteful art that has nudity, right? It's all over in museums, like, you know, there are very valid situations for that. And then there are the things that are the gray line of that, you know, what I might not find tasteful, someone might be like, that is completely tasteful, right? And then there are things that are way over the line. And then there are things that maybe you or, you know, maybe I would be okay with, but society isn't, you know? So where does that kind of end up on the spectrum of things? I think it's really hard with art. Sometimes even if you have things that are not nude, if a child goes to your site, scrolls down some images, you know, classrooms of kids, you know, using our product, it's a really difficult problem. And it stretches across culture, society, politics, everything. [00:41:14]Alessio: Another favorite topic of our listeners is UX and AI. And I think you're probably one of the best all-inclusive editors for these things. So you don't just have the prompt, images come out, you pray, and now you do it again. First, you let people pick a seed so they can kind of have semi-repeatable generation. You also have, yeah, you can pick how many images and then you leave all of them in the canvas. And then you have kind of like this box, the generation box, and you can even cross between them and outpaint. There are all these things.
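As an aside on the seed-picking feature Alessio just described: in the open-source tooling, semi-repeatable generation is simply a fixed RNG seed. A sketch with Hugging Face diffusers, loading the weights Playground published (the repo id is the one listed on Hugging Face; treat it as an assumption, and note this is not Playground's product code):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Playground V2 shares the SDXL architecture, so the SDXL pipeline loads it.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "playgroundai/playground-v2-1024px-aesthetic",
    torch_dtype=torch.float16,
).to("cuda")

# The same prompt with the same seed reproduces the same image;
# change the seed and you pull the slot machine lever again.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a giraffe underneath a microwave", generator=generator).images[0]
image.save("giraffe.png")
```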
How did you get here? You know, most people are kind of like, give me text, I give you image. You know, you're like, these are all the tools for you. [00:41:54]Suhail: Even though we were trying to make a graphics foundation model, I think we're also trying to re-imagine what a graphics editor might look like given the change in technology. So, you know, I don't think we're trying to build Photoshop, but it's the only thing that we could say that people are largely familiar with. Oh, okay, there's Photoshop. What would Photoshop compare itself to pre-computer? I don't know, right? It's like, or kind of like a canvas, but you know, there are these menu options and you can use your mouse. What's a mouse? So I think that we're trying to re-imagine what a graphics editor might look like, not just for the fun of it, but because we kind of have no choice. Like there's this idea in image generation where you can generate images. That's like a super weird thing. What is that in Photoshop, right? You have to wait right now for the time being, but the wait is worth it often for a lot of people because they can't make that with their own skills. So I think it goes back to, you know, how we started the company, which was kind of looking at GPT-3's Playground; the reason why we're named Playground is an homage to that, actually. And, you know, it's like, shouldn't these products be more visual? These prompt boxes are like a terminal window, right? We're kind of at this weird point where it's just like MS-DOS. I remember my mom using MS-DOS and I memorized the keywords, like DIR, LS, all those things, right? It feels a little like we're there, right? Prompt engineering, parentheses to say beautiful or whatever weights the word token more in the model or whatever. That's super strange. I think a large portion of humanity would agree that that's not user-friendly, right? So how do we think about the products to be more user-friendly? Well, sure, you know, it would be nice if I wanted to get rid of, like, the headphones on my head, you know, it'd be nice to mask it and then say, you know, can you remove the headphones? You know, if I want to expand the image, you know, how can we make that feel easier without typing lots of words and being really confused? I don't even think we've nailed the UI/UX yet. Part of that is because we're still experimenting. And part of that is because the model and the technology are going to get better. And whatever felt like the right UX six months ago is going to feel very broken now. So that's a little bit of how we got there: kind of asking, does everything have to be a prompt in a box? Or can we do things that make it very intuitive for users? [00:44:03]Alessio: How do you decide what to give access to? So you have things like an expand prompt, which DALL-E 3 just does. It doesn't let you decide whether you should or not. [00:44:13]Swyx: As in, like, rewrites your prompts for you. [00:44:15]Suhail: Yeah, for that feature, I think once we get it to be cheaper, we'll probably just give it away. But we also decided something that might be a little bit different. We noticed that most of image generation is just, like, kind of casual. You know, it's in WhatsApp. It's, you know, it's in a Discord bot somewhere with Midjourney. It's in ChatGPT. One of the differentiators I think we provide, at the expense of just lots of users,
mainstream consumers necessarily, is that we provide as much, like, power and tweakability and configurability as possible. So the only reason why it's a toggle is because we know that users might want to use it and might not want to use it. There are some really powerful power-user hobbyists that know what they're doing. And then there are a lot of people that just want something that looks cool, but they don't know how to prompt. And so I think a lot of Playground is more about going after that core user base that has a little bit more savviness in how to use these tools. You know, the average DALL-E user is probably not going to use ControlNet. They probably don't even know what that is. And so I think that, like, as the models get more powerful, as there's more tooling, hopefully you'll imagine a new sort of AI-first graphics editor that's just as, like, powerful and configurable as Photoshop. And you might have to master a new kind of tool. [00:45:28]Swyx: There are so many things I could go bounce off of. One, you mentioned about waiting. We have to kind of somewhat address the elephant in the room. Consistency models have been blowing up the past month. How do you think about integrating that? Obviously, there's a lot of other companies also trying to beat you to that space as well. [00:45:44]Suhail: I think we were the first company to integrate it. Ah, OK. [00:45:47]Swyx: Yeah. I didn't see your demo. [00:45:49]Suhail: Oops. Yeah, yeah. Well, we integrated it in a different way. OK. There are, like, 10 companies right now that have kind of tried to do, like, interactive editing, where you can, like, draw on the left side and then you get an image on the right side. We decided to kind of, like, wait and see whether there's, like, true utility in that. We have a different feature that's, like, unique in our product that is called preview rendering. And so you go to the product and you say, you know, we're like, what is the most common use case? The most common use case is you write a prompt and then you get an image. But what's the most annoying thing about that? The most annoying thing is, like, it feels like a slot machine, right? You're like, OK, I'm going to put it in and maybe I'll get something cool. So we did something that seemed a lot simpler, but a lot more relevant to how users already use these products, which is preview rendering. You toggle it on and it will show you a render of the image. And then graphics tools already have this. Like, if you use Cinema 4D or After Effects or something, it's called viewport rendering. And so we tried to take something that exists in the real world, that has familiarity, and say, OK, you're going to get a rough sense of an early preview of this thing. And then when you're ready to generate, we're going to try to be as coherent as possible with that image that you saw. That way, you're not spending so much time just pulling down the slot machine lever. I think we were the first company to actually ship a quick LCM thing. Yeah, we were very excited about it. So we shipped it very quick. Yeah. [00:47:03]Swyx: Well, the demos I've been seeing, it's not like a preview necessarily. They're almost using it to animate their generations. Like, because you can kind of move shapes. [00:47:11]Suhail: Yeah, yeah, they're like doing it. They're animating it. But they're sort of showing, like, if I move a moon, you know, can I? [00:47:17]Swyx: I don't know. To me, it unlocks video in a way. [00:47:20]Suhail: Yeah.
But the video models are already so much better than that. Yeah. [00:47:23]Swyx: There's another one, which I think is the general ecosystem of LoRAs, right? Civitai is obviously the most popular repository of LoRAs. How do you think about interacting with that ecosystem? [00:47:34]Suhail: The guy that did LoRA (not the guy that invented LoRAs, but the person that brought LoRAs to Stable Diffusion) actually works with us on some projects. His name is Simo. Shout out to Simo. And I think LoRAs are wonderful. Obviously, fine-tuning all these DreamBooth models and such is just so heavy. And it's obvious in our conversation around styles and vibes, it's very hard to evaluate the artistry of these things. LoRAs give people this wonderful opportunity to create sub-genres of art. And I think they're amazing. Any graphics tool, any kind of thing that's expressing art, has to provide some level of customization to its user base that goes beyond just typing Greg Rutkowski in a prompt. We have to give more than that. It's not like users want to type these real artist names. It's that they don't know how else to get an image that looks interesting. They truly want originality and uniqueness. And I think LoRAs provide that. And they provide it in a very nice, scalable way. I hope that we find something even better than LoRAs in the long term, because there are still weaknesses to LoRAs, but I think they do a good job for now. Yeah. [00:48:39]Swyx: And so you would never compete with Civitai? You would just kind of let people import? [00:48:43]Suhail: Civitai's a site where all these things get kind of hosted by the community, right? And so, yeah, we'll often pull down some of the best things there. I think when we have a significantly better model, we will certainly build something that gets closer to that. Again, I go back to saying I still think this is very nascent. Things are very underpowered, right? LoRAs are not easy to train. They're easy for an engineer. It sure would be nicer if I could just pick five or six reference images, right? And they might even be five or six different reference images that are not... They're just very different. They communicate a style, but they're actually like... It's like a mood board, right? And you have to be kind of an engineer almost to train these LoRAs, or go to some site and be technically savvy, at least. It seems like it'd be much better if I could say, I love this style. Here are five images and you tell the model, like, this is what I want. And the model gives you something that's very aligned with what your style is, what you're talking about. And it's a style you couldn't even communicate, right?
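To make the LoRA discussion concrete: a LoRA freezes the base weights and trains only a small low-rank update alongside them, which is why the files are tiny and the training is light compared to full DreamBooth fine-tuning. A generic PyTorch sketch of the idea, not any particular library's implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank residual:
    y = W x + (alpha / r) * B A x, where A maps d_in -> r and B maps r -> d_out."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base model stays untouched
        self.down = nn.Linear(base.in_features, r, bias=False)
        self.up = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # start as a no-op so training is stable
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))
```

Only `down` and `up` are trained, so a style can be shipped as a few megabytes of adapter weights instead of a full multi-gigabyte checkpoint.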

XR MOTION
41 - David Ariew @DavidAriew

XR MOTION

Play Episode Listen Later Nov 25, 2023 116:06


David Ariew is a renowned 3D motion designer and educator. He has collaborated with high-profile artists such as Katy Perry, Beeple, Zedd, Deadmau5, Keith Urban, Excision, and The Lumineers. Known for his expertise in Cinema 4D, Octane Render, and RealFlow, Ariew's work often involves creating intricate visual elements like ice caves, melting figures, and dynamic water scenes. In addition to his design work, he is an NFT artist and a dedicated educator in C4D and Octane, sharing his knowledge through various tutorial series. Ariew, also known by the nickname "Octane Jesus," is recognized for his innovative use of tools like KitBash3d and Octane for animation. https://arievvisuals.com/ --- Support this podcast: https://podcasters.spotify.com/pod/show/xrmotinon/support

Mograph Podcast
Ep 386: Chris Schmidt

Mograph Podcast

Play Episode Listen Later Oct 31, 2023 84:53


Chris Schmidt from RocketLasso joins Dave and special Co-Host Jags to chat about Mograph events and the latest updates to Cinema 4D!

audiodump
ad149 Will it Blender Extravaganza (mit Tom Albrecht & Dennis mit nn)

audiodump

Play Episode Listen Later May 18, 2023 253:09


Dennis mit nn and Tom Albrecht are here to explain Cinema 4D, Blender, 3ds Max, V-Ray, and why a circle is no longer a circle once you zoom in. FUCK - it's unbelievably complicated. Our breath turns into WD-40 and our state is the middle finger. We globally illuminate the lousy M1, wish for Nvidia in the Mac Pro, and are otherwise thoroughly dissatisfied.

Layout
252: Adventures Learning 3D

Layout

Play Episode Listen Later Apr 28, 2023 33:43


This week, Kevin talks about his experiences learning 3D modelling and switching from Cinema 4D to Blender. Sponsors: Userbit: Everything your UX team needs to understand users and make smart product decisions. Show Notes: Kevin's renders (Arthur's Nosh Bar, Aesop Bottle, Bathroom, Ice Cream Shop, Ice Cream Container, iPhone 14 Pro Back, iPhone 14 Pro Front), "10 years later and I still think about this blog post daily, and I haven't used #000 for anything (except text) since", Cinema 4D, Greyscale Gorilla, 3D for Designers, Blender, "Aesop, the Cult-Favorite Skin Care Brand, Will Be Acquired by L'Oréal" (The New York Times), Ivory, Sketch One Layer Challenge, Adam Whitcroft's Apollo icon. Recommendations: Men's Tree Dasher 2, Sleeve 2. Hosts: Kevin Clark (@vernalkick), Rafael Conde (@rafahari)

Space Valley Live
Trattorie, Cinema 4D e server chiusi - Space Valley Live del 21/03/23 - S1E88

Space Valley Live

Play Episode Listen Later Mar 23, 2023 59:46


A live show with everyone, taking turns, on 3 microphones, so there's always someone talking who can't be heard. YouTube video: https://youtu.be/pCXON9EO1sg Follow the LIVE shows on Twitch: https://www.twitch.tv/spacevalley Monday to Thursday at 9:00! Official Space Valley shop: https://spacevalley.shop/ Yakety-Yak channel: https://www.youtube.com/@YaketyYak Space Valley: https://www.youtube.com/@vallespaziale Instagram: https://www.instagram.com/vallespaziale Telegram: https://t.me/vallespaziale Around the Valley: https://www.youtube.com/AroundtheValley Become a supporter of this podcast: https://www.spreaker.com/podcast/space-valley-live--5686515/support.

HOLOCENE
034 → ROB GARROTT ↗ finding your passion through personal growth & psychedelics

HOLOCENE

Play Episode Listen Later Jan 15, 2023 114:18


A lifelong learner, Rob is on a journey to build a creative life by creating cool things and teaching people how to create cool things. Previously he worked as a Content Manager for LinkedIn Learning (lynda.com), designing curriculum and implementing online courses for a variety of subject areas including video production, motion design, product design, 3D visualization, and the architecture, engineering, and construction industries. Before his role at LinkedIn Learning, Rob was an Art Director, Animator, and Editor with many years of hands-on experience in the print and broadcast industries, working for ad agencies and television networks. A true creative "Jack of all trades," Rob has worked in creative direction, design, video editing, and motion graphics animation using Cinema 4D, After Effects, Photoshop, Illustrator, DaVinci Resolve, Premiere Pro, and Final Cut Pro. He's designed and produced broadcast projects for many top brands, creating everything from TV Guide ads to on-air network graphics packages, promos, television shows, and sales presentations. In addition, Rob was an instructor at the Art Center College of Design in Pasadena, California, teaching 3D motion graphics, compositing, and motion design for 12 years. HOLOCENE Magazine + Store, Rob Garrott IG, Rob Auchincloss IG. SHOW NOTES: see more. Hosted on Acast. See acast.com/privacy for more information.

CG talks
#9 Grant Osborne - The 4th dimension of design 1/2

CG talks

Play Episode Listen Later Oct 5, 2022 30:31


This episode of CG Talks (the podcast where CG guys talk about CG) is the first part of a conversation with Grant Osborne, a Melbourne (Australia)-based 3D generalist and motion designer. DJ and Grant talk about getting into 3D and Grant's experiences with design software including After Effects, Cinema 4D, and Houdini. Grant shares his passion for design and his struggles with the hard parts of learning it. We also venture a bit into the world of sound design and how it supplements the visuals. Listen to the episode and get inspired to find your inner spark (or mojo). Stay tuned for the second part of the interview, in which we talk about personal 3D projects and online challenges (especially the latest Pwnisher challenge, Moving Meditations) as an opportunity to grow.

3D Design, Metaverse, Virtual Reality, XR, Augmented Reality
Learning Cinema 4D Materials, Adobe Aero and Houdini Progress

3D Design, Metaverse, Virtual Reality, XR, Augmented Reality

Play Episode Listen Later Sep 19, 2022 4:58


My YouTube Channel: https://www.youtube.com/channel/UC5q92wiV3hMDPvBgjOFaFhg

Sala 1604
10 motivos para aprender ZBrush - Episódio 257 - Sala1604

Sala 1604

Play Episode Listen Later Jul 28, 2022 31:51


Hello, artists of Brazil!! If you've ever thought about getting started in 3D, I guarantee you've wondered which software would be the best one to study, right? There's Blender, there's Maya, there's 3ds Max, there's Cinema 4D... Each one has its own specifics and applications. How do you know which is best? In this episode of Sala 1604 you'll get to know a bit more about one of the most widely used 3D programs in the industry: ZBrush! FEATURED IN THIS EPISODE: KAUE DAIPRAI Instagram: https://instagram.com/ksdaiprai Artstation: https://www.artstation.com/kauedaiprai GABRIELA ANTONIA ROSA Instagram: https://www.instagram.com/gabrielantoniarosa/ Twitter: https://twitter.com/galantoniarosa LINKS MENTIONED IN THE EPISODE: ☆ Podcast "Como perder o medo do 3D" (How to lose your fear of 3D): https://www.youtube.com/watch?v=4FV6DUPVD1s ☆ Free ZBrush class: https://www.youtube.com/watch?v=V3hyp7BkC-Y

Los Wise Guys Podcast | Games, Comics, Movies,  & more
A Look Inside Dark Mind Productions | LWG Podcast

Los Wise Guys Podcast | Games, Comics, Movies, & more

Play Episode Listen Later Jul 18, 2022 74:27


The guys are joined by a very special guest, award-winning cinematographer Chris Barcia. Watch as Chris breaks down his perspective on movies and the film industry as a whole. He also goes into his filming process and tells amazing stories about filming, coming close to death, and being arrested. Be sure to check out his incredible shorts at youtube.com/c/DarkMindProductions and tune into his YouTube channel on 8/5/22 at 11:30pm EST to see his upcoming short film The Citrine Gaze. You can also listen to the podcast on your favorite platform and visit the Los Wise Guys website below! loswiseguys.com https://linktr.ee/loswiseguys Follow the LWG at: Disco - @emperor.disco Eslam - @lwg_eslam Dan - @lwg_danrosado And be sure to follow Chris Barcia and Dark Mind Productions at: Chris Barcia - @filmingchris1 Dark Mind Productions - @darkmindproductions #lwg #movies #horror #creepy #film #horrorshorts

Ross  Video XPression U
Quick Tips 160 - Importing Cinema4D Scenes Using the 64-bit Editions of XPression

Ross Video XPression U

Play Episode Listen Later Jun 16, 2022 1:31


The 64-bit edition of XPression supports the import of Cinema 4D scenes, including the models, materials, lights, and cameras from the scene. Now XPression users can have access to the power of Cinema 4D and bring those items directly into XPression. Living Live! with Ross Video www.rossvideo.com/XPression-U

(in)sight-reading enlightenment
Audio design/digital art/virtual reality and its development today

(in)sight-reading enlightenment

Play Episode Listen Later Jun 11, 2022 28:22


Did you know that in virtual reality you can create your own worlds, clothes and anything you dream of? In this episode, we talk with young audio designer Tim Shatnyy, who also creates digital art. You'll hear our improvisation with Sebastien on piano, Darina on a Renaissance transverse flute, and Tim on synthesiser, as well as Tim's solo improvisation. We talk about the conception of a music album and its digital realisation, as well as the perception of the latest generation of reality and ways to expand it, escape it, or even undo it. Discover more about Tim: https://www.timshatnyy.com https://www.instagram.com/timshatnyy/?hl=en https://soundcloud.com/timshatnyy You can buy his digital art here: https://foundation.app/@timshatnyy #art #digitalart #posterdesign #graphicindex #thedesignblacklist #eyeondesign #digitalarchive #cinema4D #selectedwork #designfeed #3Dsculpt #dopedddesign #sculpture #metamoderngrotesk #arnoldrenderer #3dillustration Discover more video footage on our Telegram channel, Instagram, Facebook, etc. https://insightreadingenlightenment.carrd.co Write to us if you want to support us: insightreading.enlightenment@gmail.com Darina and Sebastien #harpsichord #insightreadingenlightenment #earlymusicpodcast #flute #fortepiano #baroque #baroquemusic #podcast #romanticmusic #darinaablogina #sebastienmitra #darinaabloginapodcast --- Send in a voice message: https://podcasters.spotify.com/pod/show/insight-reading/message

The XR Magazine
Designing great XR experiences with Simon Frübis

The XR Magazine

Play Episode Listen Later Jun 1, 2022 30:28


In this episode, Simon Frübis shares his experience of 2 years in interaction design, digital marketing, and entrepreneurship, and more than 1 year as a VR/AR prototyper with MRTK. His skill set spans Figma, Unity3D, C#, MRTK (with Oculus Quest 2 and HoloLens 2), haptic design, Cinema 4D, and the Adobe Suite (InDesign, XD, After Effects). He has a strong design-thinking mindset and a master's degree in interaction design, and he is actively exploring XR scroll user interfaces with haptics. He enjoys solving usability challenges for human needs and all of their senses, and he is always eager to learn something new where design and technology meet. These are some of the highlights of this episode: How did he come up with the design states as a VR/AR prototyper? What are the best design tools to use for VR/AR prototyping? What advice can Simon give to someone who is just starting out in this field? What his own workflow is, what his main challenges are, and how he overcame them. You can follow Simon Frübis on: LinkedIn: https://www.linkedin.com/in/simon-fr%... Twitter: https://mobile.twitter.com/fruebis Check out his XR design free tools resource here: http://smfr.eu/xrdesign ------------ Gift Alert! You can download now my new XR Roadmap for Immersive Design including narrative. Click here to download it FREE. P.S. If you've already heard about the podcast and prefer video, you can also head over to the D.O.! YT Channel to enjoy the interviews. Would you like to learn how to create immersive experiences or perhaps incorporate them into your profession or business? Great news, Circuit Stream is the only certified Unity institution to teach you how to do this and they are now an official sponsor of the podcast! So if you'd like to learn more about Circuit Stream, you can head over here: CIRCUIT STREAM! You can always find me on Facebook, Instagram, Twitter, LinkedIn & TikTok as @dianaolynick.

FuturePerfect Podcast
#002 - Team Rolfes: 3D Art, Digital Communities, and Platform Capitalism

FuturePerfect Podcast

Play Episode Listen Later May 6, 2022 64:11


This is episode #002 of the FuturePerfect Podcast where we talk with compelling people breaking new ground in art, media, and entertainment. This podcast is produced by FuturePerfect Studio, an extended reality studio creating immersive experiences for global audiences. Episodes are released every two weeks; visit our website futureperfect.studio for more details.
The text version of this interview has been edited for length and clarity. Find the full audio version above or in your favorite podcast app.
This week we interview Team Rolfes, a digital performance and image studio led by Sam and Andy Rolfes. The studio specializes in figurative animation, VR puppetry, and mixed reality collage. They create works across multiple formats, including livestream improvisational comedy, live motion capture animation on large festival stages and in underground rave bunkers, print design for fashion collections, album covers and music videos. They have collaborated with Lady Gaga, Danny Elfman, Danny L Harle, Nike, Netflix, Adult Swim and performed at music festivals across the world. On June 4th, 2022, they will premiere their live 3D musical 3-2-1 RULE at Carriageworks in Sydney, Australia. The work is being developed together with writer and net artist Jacob Bakkila and artist songwriter Lil Mariko.
I first encountered your work as an online video in 2020 as a part of the Lunchmeat Festival of electronic music and art based in Prague. I think it was called Sam Rolfes 360° AV experience. I watched it on my Oculus headset and the work was so exhilarating, but also disconcerting and humorous at the same time. It was like a fever dream complete with moving walls, objects melting, spaces constantly changing sizes, and yet was extremely beautiful. For me, the work exemplified this intriguing in-betweenness that you embrace: part puppet show, theme park ride, sculpture, live performance, gaming, and installation. And this makes absolute sense because you've been making experiences across media and genres for a very long time.
You were both originally trained in painting and fine art. How did you get from there to the work that you're doing now?
Sam Rolfes: Yes, Andy and I both come from a painting background. Our mom was a painter. She ran a little 3D studio when we were kids. She had these big huge books on Blender and 3ds Max laying around.
Andy Rolfes: It was a long path back to 3D. We played around with 2D a lot more. We read about musculature systems in the 3D books and wondered how in the world people can even set this stuff up.
SR: There was also a lot about wireframes. When we were kids 3D was just kind of boring. It felt like math, and I didn't want to do math, I just wanted to make a cool race car.
AR: Yeah, a lot of math. I remember making a sword in Blender when I was 12. It's a pretty linear shape, but it was the most taxing process. So I went back to 2D. I could just play with a plane and an abstraction and it was more fun.
These 3D tools, along with game engines and other design software, have become some of the most significant toolsets for conceptualizing and building your work. What happened in terms of your training where you suddenly realized you needed to leave painting and watercolor and shift into 3D?
SR: I don't remember how I came across it, but I came across ZBrush, a 3D sculpting program where you can mash things around like digital clay. That was the big aha moment for me.
A lot of times it hides (honestly oftentimes to its detriment) the mathy elements, and we found that it was actually in keeping with our painting background in that it allows for semi-improvisation, but with an impressionistic sculptural object. Andy started playing more with Maya and Blender as well. And we both slowly got into it just because it was fun.
AR: I went through the whole watercolor track and was doing semi-pro photography and developing an interest in photogrammetry. As I was seeing Sam play around with ZBrush, I got into it and jumped back into 3D. I actually went back to 3ds Max. I was putting photogrammetry scans in there and throwing grass around and rendering that out and realized it had gotten way better. And I started bringing in my 2D stuff and playing with ways to collage that in. I played around with that and Cinema 4D before I ended up going back to ZBrush.
SR: This was in tandem with the 2012 to 2016 era of internet art and post internet art. There were a lot of people doing 3D art. They would kind of kludge something together in Maya and make it shiny and spin around. And that stuff still exists to some extent these days, but was increasingly present in Chicago where I was living at the time. I had just moved back from Austin after being there for a year after graduating art school. I was starting to do more show flyers and stuff like that and I was trying to find whatever scene existed in Chicago. You wouldn't know it because none of the people would actually hang out in person, but a lot of interesting things in the glitch scene and post internet scene were coming out of Chicago. I was trying to engage with this new community and was finding our perspective within that. I realized we could take a different approach because of our painting background. All these other people were coming more from a digital art or computer science background. They had an art game program at SAIC where I went to school, but I was so turned off by it because everybody was making these white box gallery experiences and they were all the same. That was one reason why it took me a while to get into Unreal Engine. I was still traumatized by having to virtually walk through all these terribly designed spaces. And then I started doing music videos. Our first one was for this group Amnesia Scanner. And I started using ZBrush as a live visual performance tool and did visuals for shows. I would make characters for every musician performing. There's no real rigging in ZBrush, but I managed to make the characters bounce around like marionettes. From there I got a bit of an understanding of realtime performance.
And then Amnesia Scanner kind of blew up on the internet. We don't reach out to musicians like this, but I just like sent them an email. They're very mysterious and I didn't know where they were based. I sent them an email that was in four different languages that was like, please let's work together. And they responded to me. So I spent two months with an initial dev trying out both Unity and Unreal. And Unreal ended up being better.
I got in contact through a friend of a friend with this guy Eric Anderson, who was running a three-story punk venue in Chicago called The Keep. We met and he had a prototype Oculus Rift. This was back in 2015 or something like that. And I went to this DIY spot and then stayed there for a week and we just banged out this crazy video. I just palmed the prototype Oculus headset to do the camera. There was no sequencer and there was nothing rendered in Unreal.
This was all recorded. I exported it all and took it to my painting mentor's place and uploaded it to his 12-year-old daughter's gaming computer. And it took like 24 hours for it to load on that computer, and then we performed it there and just recorded it straight from the screen. It felt good enough that we kind of just kept running with it for everything after that.
So in terms of music, your past works have a long dialogue with rave culture, hyperpop, and new forms of media that circulate on the internet. Tell us more about that dialogue and how it informs some of your current work.
AR: I was kind of plugged into, or at least aware of, both vaporwave and glitch and everything in between that, like the acerbic visuals and everyone realizing 3D is a lot more approachable. The communities I've engaged with have definitely been varied and scattered. It's a lot of pulling things together and trying to figure out what works. Up until recently not many friends or people I've known have directly engaged with 3D. But I show them what I'm working on and try to connect different communities together and see how we can work together.
SR: And more recently you've been more active in the visual artist communities than I have. I've been more interested in those rave cultures. I have a long career of DJing and producing. I've been in the turntable scene, the glitch hop scene, the witch house scene, and now it's hyperpop. It all ends up being the same. The through line is just experimentalism basically. It's just like a certain amount of interest in a new sound.
Hyperpop is an interesting illustration of this to talk about because it's this weird thing where underground culture was made mainstream and at the same time, at least initially, was not diluted upon becoming mainstream. I guess this has happened all the time, but it's the most recent occurrence that I participated in. Hyperpop is this weird sound that somehow a ton of people know about, and it became a meme and a joke because of course it was gonna be. But watching that dynamic was very interesting. We've had a long history with different music scenes. Both me performing as a DJ, but also us doing stage performances with musicians on big festival stages with mocap (motion capture) VR performances that are kind of accompaniments to their music. We've got an opera and a kind of a 3D musical in the works right now. But where it all started was album covers and then music videos. It was about participating in those communities and finding a way to, as visual artists, be a part of it more than just fans, but actually help shape the ideas and shape where everything is going.
What are the ideas you're shaping? What's the content and the substance of what you're trying to shape right now?
SR: Generally we try and get in and maybe expand the visual dynamic range. With a lot of experimental approaches, especially in the music scenes, it ends up being a lot about vibe or the nerdy tech or kind of esoteric stuff. For us, we can use all these esoteric tech tools, but use them hopefully for a compelling overarching narrative.
And I'm sure we'll talk more about the performative aspects of our work with using digital tools. But in these electronic scenes it ends up losing a certain humanity. A lot of it for us has been trying to reconnect to this live, in-the-moment feeling.
Our work is trying to hit the same subconscious feeling of being in the moment and having all these things happening, rather than some kind of contrived tech-demo construction or something.

AR: Especially nowadays, where people are like, oh yeah, I need to touch grass. We want to somehow bring that back to the digital and think: how can we make this more physical? We're combining that with strong motivations and guiding lights in theater, performance, athletics, heavy physicality. And we're thinking about what we can really do with having our bodies fling around, often literally, and have that cascade and become a deeper narrative that also has its own motivations of speaking to the community or wherever our eyes are fixated at the moment.

Performance in front of a live audience is super central to you guys. Give us a sense of the infrastructure you need to build in order to create one of your dynamic realtime performances. How does it work compositionally, dramaturgically, and technically? What does it take to put together a realtime dynamic performance in front of a live audience?

SR: Right now, one of our projects is the stage adaptation of a short film, this bigger thing 3-2-1 RULE that's going to debut in Australia in a month. That one is going to be significantly more structured and quality-controlled beforehand, rather than being a crazy thing that's incredibly improvisational. Oftentimes each show is purpose-built to a certain extent. Most of our projects inherit worlds and characters and assets from previous projects, but they build on each other. We'll have a collection of scenes that are modular and exist in the same world. Each one is set up for a specific type of camera shot and a specific type of motion capture or VR mechanic.

AR: Before we get into designing the motion, we also have to figure out what the arc of the performance is. What's the energy? What modes want to fit where? Is this going to be a soft moment, or is it going to be more excitable? We chart the long arc and mini arcs of the scene.

SR: Oftentimes we're not able to meet with the musicians until we get to whatever country we're going to. Prior to meeting them, we set up these modular scenes, each with their arc in terms of mechanics and scene dynamic. We have a whole collection of things and plug them together to an extent. Because the performances are so improvisational, it's kind of like acting the part of a good DJ who's watching the audience, watching the musicians, listening, and deciding what's right in the moment.

We work this way when we're making music videos as well, where we build the environment in VR and then kind of feel out where the choreography of a scene is supposed to go. This big Australian debut of 3-2-1 RULE is going to be pretty regimented. We're going to have everything planned, but there's still going to be a fair amount of improvisation, since it's all realtime. I would never want to cut out the potential for those kinds of magical moments to happen.

It sounds like 3-2-1 RULE is a very important transitional project for you, where you're in control of the narrative and you're not in service of some other musicians. Tell me where the title 3-2-1 RULE comes from and give me a sense of what you're producing.

SR: The name comes from this backup strategy in tech where you're supposed to have three backups. I'm gonna get this wrong, but one is local, one is on the cloud, and one is offsite.
The staged work is an adaptation of a short film and will eventually be either a feature film or a playable game. It's one of the major projects for us this year. It's kind of a parody of both the metaverse stuff and the contemporary moment, but also a way to talk about memory and people's relationships and history together on the internet, and what happens when you use the cloud platforms as a prosthetic brain or a prosthetic memory where you're offloading moments together. The work follows these gig-economy workers who respond to listings posted on an app that gathers memories for people in a metaverse space. If someone wants to remember the best day they ever had, or the way their dad danced around when he made breakfast, they would use this app, and the gig-economy workers dive in and play these genre-parody games to unlock the memory for them. The conceit is that AI can obviously go in and scan your brain or scan the internet and grab this stuff, but it could never recreate the senses that really make up the core of what the memory is. So you have these gig-economy workers who kind of chemically collage and assemble these things together for their clients.

The stage adaptation served the dual function of giving us an excuse to start building out everything for this broader narrative project really fast, and of letting us start developing this format that's closer to a musical. The debut in Australia will be with the musician Lil Mariko, but the idea is that we would put this on all over the world, and it could be any musician friend starring in this role. It might be customized for each musician a bit. There are moments where there's narrative, and there are moments where they could just perform their songs. This is kind of our pitch for a new performance format that could be replicated elsewhere and could really bring variety to the music performance world. Because, I mean, I love music shows. I love venues. I love playing them. I love going to them. I'm at them all the time. But I'm sick of music shows, and the format has hardly changed. There exists this potential to unite all these different formats, including visuals, sound, music, and narrative. It takes a little more work, but I think we might be good people to try it out.

You're working with writer and social network artist Jacob Bakkila. What is he bringing to the work?

SR: We initially brought Jacob in on our now-defunct Netflix project we were developing. He has a whole career of performing as bots on the internet, doing genre-parody things and all these satirical things that are really brilliant. The project was going really well, but there was too much red tape and it got canceled. But we were talking afterward about working together, and we had a kernel of the idea for 3-2-1 RULE. He said, okay, I think I can do this, went away for a few days, and came back with the base concept for 3-2-1 RULE. It just threaded the needle between stuff our team had already been working on for our game and other projects. I work directly with Jacob on the broader concepts and the story and where it goes, but he can churn out hilarious writing very quickly. It's a mishmash of online references from every generation, and he's so conversant in that kind of dialogue that he can make it feel genuinely realistic. He's able to sit in this incredibly online space that I feel is very essential to this story.
He just generally knows how to fit everything together in a very nice way, and he was able to bring the emotion to the project.

Do you have a sense of what you want the audience to experience? What do you want them to come away with? What kind of impact do you want to have on them?

SR: Maybe it varies a bit between the live show, the eventual short, and then whatever the final big project is. I want it to be jarring, but funny. I want it to reflect upon our online relationships and what we've given up in terms of community, interpersonal dialogue, memory, and moments together. How much are we sacrificing for platforms?

Would it be safe to say that you obviously have a fraught relationship with these platforms? You've experimented in these spaces, you draw inspiration from these spaces, you post in these spaces, and simultaneously you're frustrated with and critical of these spaces.

SR: We're participating in them because there really is no alternative. I have friends who are making their own distributed, web3-based platforms, like the people doing Channel and people doing other projects, more horizontal lefty things here and there. But they still have to promote them on the platforms, because that is just where all this stuff exists. So much of our stuff, especially if it has any narrative, does have a platform-critical element to it, because I can't think of anything else to comment on. It feels so absurd to be forced to fit this art that we do, which could take so many different forms, into a box that's 1080 by 1080 pixels and lasts a minute. There have always been constraints on art, but with platforms it's not a meritocracy, and the best stuff does not rise to the surface. The platforms themselves do not promote things that are in keeping with the value system of anybody in their right mind. They promote things that will do well on the platform for its own good. I don't think that's a healthy thing for an artistic community, or for an artist, or for anything. I think most people recognize this to an extent. In a sense, critiquing it and putting it in my little skits is just coping. It's acknowledging it, but I only have so much ability to actually do anything about it. It's also just generally frustrating with the moment we're in. The trick is speaking to that moment and then not getting too trapped in the Twitter-style riffing on the discourse of the day. That stuff will do better; it's incentivized, because you will get better metrics, and the platforms want that kind of momentary, ephemeral thing. But if you go back a week later, it doesn't hit the same. So that's also a trap. Having things somehow engage with the contemporary moment, acknowledging where we are right now and what our relation is to these platforms and to the economy and to how they have basically become the air we breathe, and then also figuring out how to make something that lasts longer than 10 minutes, is always a struggle artistically.

In all of our discussion we haven't touched on the literal politics of the day. I mean, we haven't talked about Ukraine, we haven't talked about Russia, we haven't talked about the elections. We haven't talked about any of that. What's your relationship to these events and the work you're doing? Is it something you avoid, something you engage with, or something you don't want to participate in?

SR: All the political discourse, at least between the conservative and liberal spheres, I don't give a s**t about.
My interest is in the working class's relation to its power, in collective bargaining, and in what we can do about it. I have opinions about imperialism, being against it, and what the US should be doing abroad. But a much more tangible thing to engage with is union and platform issues.

AR: It feels more actionable. Stuff that doesn't feel like beating the same drum. We're not trying to be Beeple, where we just do modern-day political cartoons.

SR: That's that momentary-discourse thing I'm talking about, where it's like, oh, I'm going to make an Elon thing. Who cares?

AR: It feels far too ephemeral. And there's a time and place for that, the political art.

SR: And I have done some stuff like that. I mean, I've thrown Zuckerberg into some s**t. But I don't know.

AR: But that's also trying to keep things contemporary and keeping a sense of immediacy. I feel like we usually try to tie things down to more universal issues. Well, sort of universal, because working-class issues are fairly universal outside of maybe the top 1%. We try to speak to the broader issues, and to speak more to the individual themselves, rather than to political issues that come and go all the time, even if they never seem to fully go away.

SR: Making art about the news of the day is itself a symptom of a broader issue, one that is very much not the discourse in the mainstream media, or however you want to phrase it. Not to sound too much like a post-left guy, but it's a liberal trap to make your art about an issue that is being discussed by media you have no control over. It's a liberal trap in that it's a culture-war fabrication that art can change the world. Like, if we make the most moral Disney movie, then everybody will be good. It ignores people's relation to their labor and all these other things. It's like: if we have no more bad villains who do problematic things on TV, then everybody's gonna be okay. And I think a lot of artists end up in that trap, feeling the push to make work about things like this, both because it's incentivized by the platform, and because, again, it's the churn of the daily discourse you're supposed to plug into. And morally they feel like, oh, I have to be saying something. I'm not saying that my stuff is not cope, because there's a left version of this that is just cope too. But it's just like posting on Twitter. It's not doing anything. We've all been trained to be cultural commentators. All we are doing is quote-tweeting people endlessly while the same structural system continues. And I just have no interest in participating in that. It's entertainment at the end of the day. It's entertainment for some people, and my stuff is entertainment for lefty types, and I'm not necessarily accomplishing anything more, but I at least think the topics I'm interested in are maybe more realistically accomplishable.

AR: I usually just look to the actual items. I just made an artwork for the Queer Museum of Digital Art, which is part of the whole web3 sphere. They're trying to fundraise.

SR: Just to clarify, I was not talking about that kind of stuff. I'm not saying that fundraising is bad or anything like that.

AR: I know, I know. For Ukraine or other huge issues, I'm just going to donate or help however I can. If sharing something might help connect one or two other people, I'm aware of my presence as a node within this whole network.
If I'm one of a thousand other people sharing this, but there are three other people in my network who didn't see it, it's cool, if it's actionable. Not if it's just hot takes.

SR: That community building is also way more important than making art about it. Communities can make art and have that steer people in a certain direction. Just to self-roast a little bit: if I made the most perfectly leftist takedown of whatever, that wouldn't accomplish anything either. So making these alternative structures, not to get into dual-power talk, but building community structures that exist outside of these platform-capital-dependent things, I think is the most important thing.

What communities are you working with specifically?

SR: I have yet to start helping them in a way I can really give myself credit for, but Jaded is a new organization. Some people from the Black Socialists in America, Zack Fox, and a bunch of comedians have started this artist co-op and community. They're building a venue, they're going to be funding scripts, and they just debuted a podcast. Black Socialists in America also have all these other projects, like the Dual Power App, which gives people tools for building co-ops and horizontal things and community structures that don't rely on basic finance capital. They are a great example.

And then there's Channel. I did some work for them. They're a web3 venture; I don't want to over-explain their thing, because I would probably do a bad job. They've done a lot of platform-critical work and podcasts, and they're a bunch of lefty artists. But from time to time they would get shadowbanned. And they are still, regardless of how critical they are, dependent on these platforms to a certain extent. They're working to untether that. In the same way that people are tethered to their jobs because they can't get universal healthcare, they have to stay at the job for healthcare. To give themselves a life raft, a way to untether from that toxic situation, the idea is that their followers are on the chain, so they can move to whatever platform. You don't lose followers when you jump somewhere else. It's a first step toward an alternative platform structure, an alternative community structure, that doesn't rely on passing through AWS and Google and this huge stack from just a couple of companies. Both of them, Channel and Jaded, are awesome examples, and we help where we can.

That's great. This really helps fill in a whole other part of your practice that I'm learning more and more about all the time. So I'm super excited to hear you talk about that. We have so many things in common, and there's some really interesting overlap happening between Team Rolfes and FuturePerfect Studio. It's very exciting, and I can't wait to see more of your work and have more conversations with both of you.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit futureperfect.substack.com

D Talks - The Design Podcast
How to become an NFT Artist | D Talks with Vipul Sachdeva

D Talks - The Design Podcast

Play Episode Listen Later May 4, 2022 51:00


Vipul Sachdeva is a 3D designer based out of Delhi. He's been in the design space for the last 5 years, and his work focuses on 3D stylized characters and motion design. Growing up, he was always fascinated by the world of animation and comics like Chacha Chaudhari, Pinki, etc., not just for their entertainment value but for their capacity to communicate messages with simplicity. Their simplistic yet visually appealing style always stood out to him. The cinematic wonder of the Marvel universe then introduced him to a completely different render style, realistic and extremely detailed, and that is what he has always strived to bring to his work using software such as Cinema 4D, Marvelous Designer, and After Effects. He aims to bring to the table the right blend of simplicity and complexity in a manner that resonates and lasts. Timestamps 00:00 - Intro 01:51 - Academic Journey 10:30 - Keeping up with the trends 12:55 - Social Media Content 15:20 - What is NFT? 25:22 - Selling NFTs 29:35 - Pricing your NFT 33:05 - Collector's Perspective 35:41 - NFT Artists/Artwork 38:20 - Mistakes to learn from 40:10 - Does Instagram following count matter? 43:20 - Most Expensive NFT Sold 44:41 - Importance of having a Concept behind a Collection 46:47 - Upcoming Projects ____________________________ Host - Sanjay Reddy https://www.instagram.com/sanjayreddy144/

Mograph Podcast
Ep: 326 Rick Barrett

Mograph Podcast

Play Episode Listen Later May 2, 2022 93:17


Rick Barrett is here to wrap up our NAB shenanigans and talk about the new features in the latest version of Cinema 4D.

The Andrew Price Podcast
#24: Having a VFX Career on Youtube w/ Clint Jones

The Andrew Price Podcast

Play Episode Listen Later Apr 10, 2022 78:18


Clinton Jones is a YouTuber (Pwnisher) with almost a million subscribers, creating tutorials and explainer videos for Cinema 4D, Unreal Engine, and 3D in general. Before that, he was part of the Corridor channel, reacting to VFX and creating short films. Most recently he created a series of community challenges that went viral, where thousands of artists create animations and Clint puts the best into a mega montage that is incredible to watch if you haven't seen it. In this episode we talk about how he got started in VFX and YouTube, why he departed from Corridor, the 2022 Oscar VFX nominees, and why audiences hate CG (all chapter marks in the description). Chapter Marks: 0:00 Intro 1:17 Making of Corridor Crew's VFX React 3:35 Clint's Corridor origin story 09:00 Learning to be a better storyteller 13:30 Post-Rocket Jump 14:38 Sam & Nico invite Clint to Corridor 16:45 What inspires you? 18:26 Free Guy 19:50 Watching the Oscars 21:13 Dune 23:24 VFX in District 9 27:25 VFX in Shang Chi 30:00 Top Gun 31:35 Why do audiences hate CG? 36:10 All movies are just fine 36:35 Poliigon Ad 40:57 Into The Spider Verse 41:48 How long were you at Corridor? 43:15 The Viral Community Challenges 52:43 Why are NFTs so controversial? 59:33 Kanye West and NFTs 01:04:35 Why NFTs let artists do more art

Styleframe Saturdays
Building Creative Processes with Liam Clisham (Ep. 1)

Styleframe Saturdays

Play Episode Listen Later Jan 29, 2022 49:39


➡️ Welcome to our first episode! In episode one, we chat with Liam Clisham about the creative process behind his Favorite Frame. ➡️ Want to actually see Liam's Favorite Frame? Head over to YouTube for the video version of this episode! ➡️ Connect with us on social by using @styleframesat. We're on Instagram, Twitter, Facebook, YouTube, and LinkedIn. ➡️ Like listening to the show? Leave us a review on Spotify and Apple Podcasts. ➡️ Today's show notes: Liam's portfolio website, https://www.five-31.com/ Recur, https://www.recurforever.com/ Cinema 4D, https://www.maxon.net/en/cinema-4d Houdini, https://www.sidefx.com/ ➡️ Music Credits: Late Night Latte by Harrison Amer. Licensed by Premiumbeat.com. ➡️ Styleframe Saturdays is a proud member of the Formerle brand family. --- Support this podcast: https://podcasters.spotify.com/pod/show/styleframesat/support

Floor is Rising
Artists - 3/x - Stuz0r - Everydays since 2016

Floor is Rising

Play Episode Listen Later Nov 5, 2021 24:10


Episode interview with Stuart Lippincott; find his twitter (https://twitter.com/Stuz0r) and SuperRare (https://superrare.com/stuz0r). Time Stamps [0:53 - 4:39] Sabretooth starts by asking Stuart how he got started with NFTs. Stuart joined the NFT world because of the crazy money currently being made. His work is very futuristic, with a utopian or dystopian feel, and pairs with music bands. Kizu asks if there are any influences that have remained consistent in his work. Stuart explains that he imagines a universe where all creatures and people live in the same realm. Music is a large contributing factor to his art. [4:39 - 8:26] Stuart started out using Maya (https://www.autodesk.com/products/maya/overview), then 3ds Max, and now Cinema 4D (https://www.maxon.net/en/cinema-4d). Sabretooth asks whether Stuart sees the tools in terms of what he can do with them, or seeks out tools to create what he wishes to create. Stuart landed on Cinema 4D because it is fairly fast and has an easy user interface. He then started to use Octane Render, which would take a significant amount of time. When Octane came out for Cinema 4D, he challenged himself to learn all there was about the software. Then in late 2015 he came across Beeple's Everydays (https://www.beeple-crap.com/everydays). Beeple (https://twitter.com/beeple) uses Cinema 4D with Octane, and Stuart really started to learn it then. With plugins and addons, Stuart tries to learn as many tools as he can to get his ideas across. [8:25 - 11:29] Kizu discusses how successful artists have a bigger following while others don't have that recognition. Stuart explains how, prior to his first sale, he did not know how cryptocurrency worked. He sold his first piece and continued to sell really well. Then one day the selling stopped. He took a couple of months' break and then posted pieces that had been popular on Instagram and Twitter. These did not sell either, despite his large Instagram following and his Twitter account. He is still trying to figure out his own place in the NFT world. [11:29 - 13:29] To explain this trend in Stuart's sales, Sabretooth gives his opinion on NFTs' speculation premium. Sabretooth says that people are buying at a premium, and that premium assumes the price will go up. When the speculation premium was taken out of the market, sales started to decline and have not recovered at this time. Sabretooth asks Stuart to explain how much of his time is spent on client work versus NFTs. [13:29 - 16:50] From 2016 to this year, Stuart was doing client work and his Everyday Project. In July, he decided that he was done with the Everyday Project. His successes from this past year allowed him to take the rest of the year off from his art, though he has chosen to really focus on learning new tools. Kizu wonders if Stuart is learning a specific new effect or just adding to his toolbox. Stuart explains that he has a long-term goal to create worlds in augmented reality using Unreal. The observer would be able to experience a virtual reality scenario within his art. With there being so many tools available, he is always learning; it can all go into his work and be translated into Unreal. [16:50 - 22:49] Looking ahead to Stuart's future, Kizu asks what his ideal role would be.
Stuart explains that he believes a creative director slash art director role would be most fitting. He enjoys directing the vision, but also helping with the work. Sabretooth has Stuart explain more about the Everyday Project that was made famous by Beeple; a previous show guest, Josh Pierce, also subscribes to the process. For Stuart, it was the benefit of stress relief during his lunch break and gathering skills in Cinema....

Nebuchadnezzar
Bigger, Longer & Uncut (and in Dolby Atmos)

Nebuchadnezzar

Play Episode Listen Later Nov 2, 2021 124:01


Everything keeps getting bigger, and therefore harder to manage. Let's analyze this impossible situation. The first podcast mixed in Dolby Atmos with spatial audio! In the early days of home computing, the capacity and possibilities of systems were tiny compared with today, as was their size. An operating system like MS-DOS fit on a disk of barely 720KB or 1.44MB, depending on the version. The first version of macOS for the original Macintosh fit on one disk and could run with just 128KB of memory. A game like Day of the Tentacle, with an intro that looked like a cartoon, digital voices in the introduction, and a complete soundtrack, fit on five 1.44MB disks, as did Monkey Island 2 and DOOM. Back then, given the limits on both storage and processing, programs, games, and systems took up far less space and were more manageable. For developers, making the most of the available space was a virtue. Little by little came ever more complex systems and, above all, more capacity and more space: 650MB CDs, then 700MB, DVDs of almost 5 gigabytes, and hard drives that went from being measured in megabytes or gigabytes to being measured in terabytes. Everything we use today has grown more complicated and ever bigger and longer, with no cuts in its development: living products that evolve version by version, getting heavier with each release. Gigantic and hard to maintain, because can you imagine what it takes today to coordinate and manage a project like Windows, macOS, Android, or iOS, a big-budget game, or software like Photoshop, DaVinci Resolve, or Cinema 4D? Everything, always: bigger, longer, and uncut. Oliver Nabani Twitter: @olivernabani Twitch: Se Dice Mashain Julio César Fernández Twitter: @jcfmunoz Twitch: Apple Coding Podcast: Apple Coding Training: Apple Coding Academy Consulting: Gabhel Studios

Floor is Rising
Artists - 1/x - Josh Pierce - What is originality in NFT art?

Floor is Rising

Play Episode Listen Later Oct 21, 2021 26:09


Episode interview with Josh Pierce; find his twitter (https://twitter.com/jpierce_art), website (https://cryptoart.io/artist/jpierce), and SuperRare (https://superrare.com/jpierce). [0:51 - 2:46] Josh Pierce has been following Beeple's work (https://twitter.com/beeple) since 2012, when he started to post everyday artwork. They have been longtime friends and fans of each other, since they work with the same software. In November of last year he was contacted by tommyk_eth (https://twitter.com/tommyk_eth) from Nifty Gateway (https://niftygateway.com/profile/jpierce) to post his own artwork. Josh started in December; after other artists dropped art on SuperRare, he posted his piece Genesis. [2:46 - 3:26] Kizu points out how trash art follows the development of a medium, but finds Josh and similar artists to be independent of historical trends. Kizu wonders if Josh sees a diverse audience and market among the scene for all different types of art. [3:58 - 6:07] Josh explains how, around ten years ago, when social media was expanding quickly, there was a movement of artists creating everydays. Joey Camacho (https://twitter.com/rawandrendered) from Raw and Rendered (https://www.rawandrendered.com/) was a pioneer of this movement. The challenge and different technology then started to sculpt artists into different styles and categories. Josh joined in and has been creating art with his own themes and ideas. He believes the fact that it's digital lends itself to blockchain. [6:08 - 11:30] Sabretooth asks Josh about the evolution of the technology he uses, especially Cinema 4D, and wonders whether, with technology constantly being updated, today's art will become ordinary in a few years. Josh has been using Cinema 4D since 2001. It wasn't until the technology upgraded in 2015 with Octane that there was a shift in power. Josh states that this shift was monumental; it allowed him to create the art he had always dreamed about. Art that has significance is what people have never seen before. He believes it is that art that will make the history books and gives something significance. [11:21 - 14:45] Kizu wonders what Josh considers originality in the NFT space, or if it is even possible to achieve. While Josh believes it can be unfair for artists not to get the recognition they deserve, true artists know that it is between the artist, audience, and universe. It is ultimately a greater creative force that comes from a process, and no one really understands it. He defines originality as having to do with the process, and how it can only really happen if it comes from the heart. [15:00 - 17:33] Sabretooth asks about Josh's balance between his NFT career and his working-artist career. Josh is taking this season off from his six-year career with the NFL to continue with his art. He enjoys working with Nifty Gateway and SuperRare. He has a collectors-only drop happening, called Transmutation (https://www.joshpierce.net/transmutation); it is an evolution of his Portals Collection. Sabretooth asks for clarification on how this came about. Josh wanted to do reward pieces based on the previous drop. The idea was inspired by Bored Ape Yacht Club's (https://twitter.com/BoredApeYC) MAYC drop. It is the evolution and struggle of life, and turning it into something joyful. [16:40 - 19:04] Kizu asks Josh how important it is for him to know art history or to engage with his contemporaries in the NFT space.
Josh explains how, in the 20th century, art went from being purely beautiful to being about emotions. One of his favorite artists, Francis Bacon (https://francis-bacon.com/paintings), captured emotions in a raw way, influencing later artists like Basquiat (https://www.basquiat.com/) and even X Copy (https://xcopy.art/). Josh wants to create beautiful landscapes with abstract energy, and that is not being done right now in the art world. [19:08 - 23:11

The Connected Enterprise Podcast
Career 360 Pro Skateboarder Leverages Technology to Land Head of 3D Design Role at Nike

The Connected Enterprise Podcast

Play Episode Listen Later May 12, 2021 35:59


Chad Knight is a 3D Instagram artist whose work went viral when Lindsay Lohan started an internet rumor that his art was a real-life place in Japan. He recently sold a record-breaking 1 million dollars' worth of art on the NFT platform Nifty Gateway. Chad is more than just a freelance artist; he is the Head of 3D Design at Nike, and not so long ago he retired from an exciting career as a professional skateboarder that ran from 1998 to 2011. His work has been featured in Elle Decor, Vogue Italia, Vice, Bored Panda, and many more international publications. He recently debuted his first art collection in the Hamptons. Chad Knight's vibrant digital art moves between the meditative and the frenetic. The artist's personal work seems to exist in alien worlds, with his pieces being made in Cinema 4D. These are places inhabited by enormous, elaborate beings that appear in mid-evolution. The artist posts a new creation each day on his Instagram account as part of an ongoing, prolific effort.

Not Network
014. Jimmy's long lost 3D grandfather FOUND w/ Perry Cooper

Not Network

Play Episode Listen Later Feb 3, 2021 79:29


In this Not Network podcast episode, Jimmy and Matthew talk to a 3D volume-building beast, Perry Cooper. Dabbling in Cinema 4D, Perry bends deformers to his will and creates fun, playful animations that leave you feeling a little bit happier after getting off of the gram. Using the Volume Builder and Volume Mesher, Perry seemingly creates complicated geometry at will, leaving his own imagination to wander over the movement yet to be created. Producing daily renders for almost 5 years now, it's no wonder Perry's imagination has attained BEAST status. We talk about where he came from, and even the important impact of tutorials done correctly. Check it out! Show Notes: https://www.riggedpie.com/not-network/podcast-page/perry-cooper --- Send in a voice message: https://podcasters.spotify.com/pod/show/not-network/message Support this podcast: https://podcasters.spotify.com/pod/show/not-network/support
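For listeners curious to try the workflow the episode describes, here is a minimal sketch of a Volume Builder / Volume Mesher chain, written for Cinema 4D's Python Script Manager (R20 or later). It is an illustrative assumption of a basic setup, not Perry's actual scene; the object constants are real SDK names, and a Volume Builder treats its child objects as inputs.

    import c4d

    def main():
        doc = c4d.documents.GetActiveDocument()

        # Source geometry to be voxelized; a sphere stands in for any mesh.
        sphere = c4d.BaseObject(c4d.Osphere)

        # The Volume Builder converts its child objects into a voxel volume.
        builder = c4d.BaseObject(c4d.Ovolumebuilder)

        # The Volume Mesher turns that volume back into renderable polygons.
        mesher = c4d.BaseObject(c4d.Ovolumemesher)

        # Hierarchy: mesher > builder > sphere, so each node reads the one below it.
        doc.InsertObject(mesher)
        builder.InsertUnder(mesher)
        sphere.InsertUnder(builder)

        c4d.EventAdd()  # notify Cinema 4D that the scene has changed

    if __name__ == '__main__':
        main()

From there, animating the sphere (or swapping in any mesh) and lowering the builder's voxel size gives the blobby, melted-together look the episode talks about.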

The Learn Squared Podcast
Episode 12 - Maxime Truchon

The Learn Squared Podcast

Play Episode Listen Later Dec 7, 2020 99:09


In this episode, we speak with the fantastic Maxime Truchon fresh off his award from The Motion Design Awards. Discover how our Styleframes course with zaoeyo helped him with that and how 2020 has been a tumultuous year of opportunity and growth in his fledgling career.    Follow Maxime http://maximetruchon.ca/ https://www.instagram.com/max.truchon/ https://www.behance.net/maximetruchon   Start your Career at  www.learnsquared.com All First Lessons are FREE!   Your Host https://www.artstation.com/dhanda https://www.instagram.com/dhandatron/  

Go Guerilla Filmcast
Episode 94: Motion Tracking, Uncut Gems, High Life & Mrs. Maisel

Go Guerilla Filmcast

Play Episode Listen Later Apr 5, 2020 47:22


This week we chat about motion tracking in Cinema 4D and the fun that it is! We also watched Uncut Gems, High Life, and The Marvelous Mrs. Maisel. Join us, won't you? Link: https://www.youtube.com/watch?v=Idsiy-ugfdg&t Please visit us on the socials, as we'd love to hear from you! https://www.instagram.com/goguerillafilm www.twitter.com/goguerillafilm goguerillafilm@gmail.com

UX Coffee 设计咖
#71: Creation Is Not Something Practice Makes Perfect (Somei, Independent Motion Designer)

UX Coffee 设计咖

Play Episode Listen Later Jul 22, 2019 37:42


In this episode we sat down with motion designer Sun Shisheng (Somei). He is OPPO's go-to designer for phone launch films: the exterior concept films for the OPPO R7, R9, and R11, and for the later flagship Find X and Reno series, were all produced by him single-handedly. His work has been featured in top industry publications and selected for Cinema 4D's officially curated Featured Projects. In 2016 he spoke at Motion Plus Design, an international motion design conference, becoming the first Chinese designer to be invited. In this episode we talk about how he evolved into one of China's top motion designers, how he copes when design work brings anxiety or even a loss of confidence, and how, as an independent creator, he deals with big clients. Timeline: 02:10 An artistic streak since childhood 03:44 "Even if I don't do art, I want to stay close to it" 06:26 Teaching myself motion design and earning my first money in my sophomore year 11:51 Getting burned by a client in my first year of freelancing 15:31 Freelancing vs. nine-to-five: what does the view look like on the other side of the wall? 18:43 Somei's secret to staying in a creative state 21:59 How does an independent motion designer win big clients like OPPO? 27:26 Creation is something practice can never make perfect 29:29 Learning to use the accidental moments in the creative process 31:16 How does a freelancer deal with clients? 35:47 In the future, I want to make work that relates more to the world Related links: Somei's portfolio: https://vimeo.com/somei Somei's Weibo: https://www.weibo.com/someisheng Somei's talk at Motion Plus Design: https://www.bilibili.com/video/av54296890/ Motion design learning site Video Copilot: https://www.videocopilot.net/ Taiwanese motion design studio Bito: https://bito.tv/ UK motion design studio ManvsMachine: https://mvsm.com/ Motion Plus Design conference: http://motion-plus-design.com/

异能FM X 全球设计故事
Interview with Motion Designer Sun Shisheng (Somei) | 异能电台 x Shanghai Vol. 30

异能FM X 全球设计故事

Play Episode Listen Later Sep 2, 2018 86:08


Guests this episode: Somei (independent motion designer), Leo (motion designer, currently a Visual Engineer at BBDO), and Patrick (visual designer at Frog Design). Somei is an independent motion designer from Shanghai, regarded by video designers across China as a role model and one of the country's most representative C4D power users. He produced the exterior concept films for OPPO's R7, R9, and later series, as well as for the recently released Find X, and has worked with leading Chinese brands such as Xiaomi, OnePlus, Meizu, and Huawei. His work has been featured in the top digital media magazine stash.tv, has repeatedly appeared on leading industry outlets such as Motionographer, Wine After Coffee, and Stash Media, and has been selected for Cinema 4D's officially curated Featured Projects. In this episode, Somei walks us through his journey: an engineering background in college, getting into motion video advertising at a time when motion design wasn't yet an industry in China, and the stories and details behind the phone concept films that followed. He also shares the problems, advantages, and disadvantages he has faced as a freelancer, all of which should give beginners and designers plenty of inspiration. Beyond work, Somei also talks about his philosophy of life: born in the '90s, he's already a dad. If you like Somei and motion design, give this episode a listen! --------------------------------------------------------------------- Thanks to INNOSPACE+ for providing the recording venue for this episode of 异能上海. INNOSPACE+ is a full-service, one-stop startup community that meets entrepreneurs' needs for office space, growth, networking, and daily life. INNOSPACE+ has built one of the most resource-rich, fully functional, and vibrant startup ecosystems in China, providing founders not only with first-class space but, more importantly, with top-tier services and resources to help outstanding entrepreneurs realize their dreams faster. --------------------------------------------------------------------- Poster: 花花 Copy: 晶晶 Editing: 郑芃 Hosts this episode: 郑芃, 晶晶, Olive, 浦浦, 大树

The Perception Podcast
Episode 28-Paul Babb, Maxon President/CEO Chats with Perception Chief Creative John LePore

The Perception Podcast

Play Episode Listen Later Feb 22, 2018 41:11


On this episode of the Perception Podcast, our very own Chief Creative Officer John LePore speaks with Paul Babb. Paul is President and CEO of Maxon - the creators of Cinema 4D. We've known Paul for a very long time and have been using C4D for a very long time. In the famous words of Ed McMahon - "Heeere's Johnny!"