Podcasts about 3d studio max

  • 15 PODCASTS
  • 20 EPISODES
  • 54m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jan 12, 2025 LATEST

POPULARITY

2017–2024

Related Topics:

3D, 2D, TV

Best podcasts about 3d studio max

Latest podcast episodes about 3d studio max

Life of an Architect
Ep 167: How Did We Get Here?

Life of an Architect

Play Episode Listen Later Jan 12, 2025 70:32


It is the start of a New Year, everything still smells fresh, and most things looking forward are theoretically in place for an amazing year … at least I think so. Since it is the first podcast episode of 2025, today's conversation is more of an introspective look into a career and just how badly or well things have gone over the last 30+ years. While this is not a look into my own personal diary, it should provide you with a framework when you decide to look at whatever it is you've got going on as well. Welcome to Episode 167: How Did We Get Here? [Note: If you are reading this via email, click here to access the on-site audio player] I have a guest on today's episode because my typical co-host, Andrew Hawkins, who was supposed to be in Japan during the time we needed to record, fell sick enough that he had to cancel the trip and is currently recovering at home. To that end, I have another good friend of mine sitting in to play point/counterpoint in today's conversation. I have asked Lane Acree, a friend, a neighbor, a Principal and Senior Project Designer at BOKA Powell just like me, and a three-time participant on the podcast, to sit in and hopefully not point out just how dumb I am. Given today's topic, I thought it would be interesting to see how two people who didn't even know each other a few years ago both ended up in the same place, despite the reasons behind the journey being completely different. Bob Borson one fateful Christmas morning ... The Beginning jump to 05:06 When I reflect on how I got started in architecture, it all goes back to the moment my dad gave me a drawing table at age five. From that day on, I knew I wanted to be an architect (he might have been angling for engineer, but that's on him for not being more specific). I never questioned whether I would go to college—it was a given in my house—but I did face doubts that the profession I had decided on might not be the right fit for me once I actually started my freshman year. 
I wasn't as driven as my classmates, and I began to worry that I wasn't cut out for architecture after all. Looking back, it wasn't that I lacked ability; I just wasn't putting in the same level of commitment. My parents had been strict, so when I got to Austin, I had all this freedom and indulged in everything the city had to offer. Eventually, I hit a crisis point at the end of my freshman year and took a year off from design studio during my sophomore year. I continued my other classes, but I needed that break to figure out what I truly wanted. When my junior year began, something clicked. I realized I wasn't actually bad at design—I just needed to put in the work. That realization changed everything. It was a lesson in prioritizing my goals, a skill that still matters to me to this day. Meanwhile, Lane took a different path. He discovered architecture at a young age—around sixth grade—when he witnessed the process of designing his family's home with a draftsman. From that point on, he immersed himself in art classes and drafting throughout high school, and then, once in college, he landed a job at a small architecture office where he spent every summer and holiday break. That real-world experience gave him a big advantage over classmates who never set foot in a firm until after graduation. I find Lane's background intriguing because he gravitated toward the use of computers, even when some of his professors believed technology stifled creativity. He taught himself tools like 3D Studio Max and came out of school with cutting-edge skills at a time when most people were still using the drafting board. By contrast, we didn't even have the option to use computer software while I was in college (despite my being only 10 years older than Lane). My focus was never on starting my own firm ... I just had three goals for myself once I graduated. 
I wanted to make a good living (which meant a yearly salary of $100,000), see one of my buildings get published in a history book,...


Dev Game Club
DGC Ep 400: Bonus Interview with Tony Rowe

Dev Game Club

Play Episode Listen Later Aug 14, 2024 84:43


Welcome to Dev Game Club, where this week we revisit our series on Trespasser: The Lost World with an interview with Tony Rowe, who did QA on the title. Dev Game Club looks at classic video games and plays through them over several episodes, providing commentary. Podcast breakdown: 00:49   Interview 1:13:20 Break 1:13:55 Outro Issues covered: time to take out the prehistoric trash, getting in, doubling up the QA team, the clay model of an island, having to rebuild the island, cutting a more open level, the empty plantation house, Microsoft Hiking Simulator, the bowling shirt, how long games took at the time, rising expectations, developing a software renderer, length of time and risk, entirely procedurally driving the critters, using a hill to escape a dinosaur, everything being a box, exploding physics boxes, choosing procedural animation, saying yes to too many things, a richer first person experience, locking the arm, emergent gameplay, a different context, building a separate demo level, overtime/double time/golden time, lack of friction, the floating plants, taking the blame, programming and managing at the same time, video game history and documenting game development, influences later, making it hard for game stores, dinosaur brains and subtlety, cranking up the anger, the importance of preservation, regressing bugs and test plans. 
Games, people, and influences mentioned or discussed: Star Wars, Call of Duty, Medal of Honor, Jurassic Park, Dreamworks, Electronic Arts, Spark Unlimited, LucasArts, Force Unleashed (series), First Assault, Drexel University, Greg Knight, Interweave, WayForward Technologies, Microshaft: Winblows 98, X-Fools, Star Warped, MYST, PYST, Parroty Interactive, Monopoly, Spielberg, Katzenberg, David Geffen, DOOM, Neverhood, Dark Forces, Skyrim, Indiana Jones and the Staff of Kings, PS3, Microsoft 360, Nintendo Wii, Nintendo DS, PSP, AMD, Quake, 3dfx Voodoo2, Dreamcast, PS2, Seamus Blackley, Looking Glass, Terranova: Strike Force Centauri, Richard Wyckoff, Austin Grossman, Andrew Grant, Tai-Fu, Small Soldiers, Crystal Dynamics, Noah Hughes, Kung Fu Panda, Unreal, Clive Barker's Undying, Fall Guys, 3D Studio MAX, Starfighter, Video Game History Foundation, Phil Salvador, Frank Cifaldi, UNESCO, Dinosaur Train, Terry Izumi, Clint Hocking, Far Cry 2, Half-Life 2, Octodad, Eidos, Spectre, Max Spielberg, Jet Lucas, Assassin's Creed, David Wolinsky, Apple ][, The Sims, Kirk Hamilton, Aaron Evers, Mark Garcia. Next time: TBA! Links: David Wolinsky's Interview with Steven Horowitz Twitch: timlongojr Discord https://t.co/h7jnG9J9lz DevGameClub@gmail.com

Ingenios@s de Sistemas
Episodio 337 - Dos lineas de negocio

Ingenios@s de Sistemas

Play Episode Listen Later Aug 11, 2024 37:44


Hello everyone! I'm Charly Alonso from Tecnolita. As always, I want to remind you that you have access to a monthly subscription to the Tecnolita Artificial Intelligence Academy, a continuous-training academy in artificial intelligence, for just 10 euros a month. Episode Topic: In this episode, I continue sharing experiences about the purchase of the company, or rather, the two companies. Last week we talked about the purchase itself, and today we will dig into how we managed the separation of the companies to optimize operations and the tax burden. Managing the Companies: We decided to split the company in two: Techex Ibérica, focused on distributing technology products, and Techex Comunicaciones y Sistemas, which offered end-to-end solutions. At Techex Ibérica we represented manufacturers such as Autodesk, offering products like 3D Studio Max and other animation and architectural-visualization solutions. We also worked with digital video encoders for professional streaming, which were essential for television broadcasting. Distribution Channel and Relationships: Our distribution channel included companies that initially sold video equipment from the likes of Sony, Panasonic, and JVC, but over time it diversified into the audiovisual world in general. Working at Techex Ibérica was exciting; product demonstrations of 3D Studio Max and Edit were a constant challenge, especially in presentations to important clients, like the ones we gave in Galicia. Professional Anecdotes: I especially remember a demonstration in Galicia where, after a night of drinks with the clients, I had to give a presentation at 9:30 in the morning without having slept at all. Despite everything, the demonstration went well, thanks to accumulated experience. 
Consultative Selling at Techex Comunicaciones y Sistemas: At Techex Comunicaciones y Sistemas, we focused on offering technology solutions such as media observatories, which allowed companies and governments to monitor what was being said about them in the media. This required deep technical knowledge and the ability to integrate different technologies to offer our clients a complete solution. Developing Custom Solutions: We used technologies like FFmpeg and open-source tools to build solutions that automated the capture, storage, and management of multimedia content. These solutions were innovative for their time and demonstrated our ability to adapt to customer needs. Consulting and Client Perception: In Spain, a consulting culture in the audiovisual sector was not well developed. Consulting was often bundled into the sales process. However, I have always believed in the importance of offering independent consulting that analyzes the client's needs and recommends the best solution, even if that means recommending a competitor's product. Perception vs. Perspective: One of the keys in our work was understanding the difference between the client's perception and their perspective. It is essential to put yourself in the client's shoes and understand their perspective in order to offer a solution that is not only technically viable but also the best fit for their specific needs. Sign up for the academy Telegram channel and YouTube channel Ask via WhatsApp +34 620 240 234 Leave me a voice message
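The kind of FFmpeg-based capture automation described above can be sketched in a few lines. This is a hypothetical example, not the actual Techex system: the stream URL, output pattern, and segment length are all assumptions, and the command is only built, not executed, here.

```python
import shlex

def build_capture_cmd(stream_url: str, out_pattern: str, segment_seconds: int = 3600) -> list[str]:
    """Build an ffmpeg command that records a live stream into fixed-length
    segments without re-encoding, suitable for archiving broadcast feeds."""
    return [
        "ffmpeg",
        "-i", stream_url,          # input: the broadcast feed to monitor
        "-c", "copy",              # no re-encode: store the stream as-is
        "-f", "segment",           # split output into timed segments
        "-segment_time", str(segment_seconds),
        "-reset_timestamps", "1",  # each segment file starts at t=0
        out_pattern,               # e.g. capture_%03d.ts
    ]

cmd = build_capture_cmd("http://example.com/feed", "capture_%03d.ts")
print(shlex.join(cmd))
# In a real deployment this would be launched on a schedule,
# e.g. subprocess.run(cmd) from cron or a supervisor process.
```

Copying the codecs rather than re-encoding keeps the capture machine's CPU free, which matters when one box monitors several channels at once.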

Ingenios@s de Sistemas
Episodio 335 - Nuevo cambio de empresa

Ingenios@s de Sistemas

Play Episode Listen Later Jul 28, 2024 23:08


Hello everyone! I'm Charly Alonso from Teno, and I want to remind you that you have access to a monthly subscription to the Tecnolitas Artificial Intelligence Academy for just 10 euros a month. Don't miss it! Episode topic: We continue our series of summer episodes about my professional career and how I have evolved in the technology field. Recap of the previous episode: Last week we talked about my joining AUV Medios, where, together with Chema, we transformed the company from an audiovisual equipment rental business into an audiovisual solutions company. Transition to Texas Ibérica: While working at AUV Medios, the opportunity arose to join Texas Ibérica, the official distributor for Spain and Portugal of Autodesk and its products, such as 3D Studio Max. My role focused on the technical side, and I also handled non-linear editing with Discreet Logic products such as Edit. Evolution in the audiovisual sector: At Texas Ibérica, I specialized in 3D Studio Max and Edit, traveling regularly to London and Switzerland for training and then training the distribution channel in Spain. I got to know interesting projects and designed data-flow integrations for television and video game production companies. Highlights and technical challenges: I worked with video editing workstations and special-effects software such as Combustion, Smoke, and Flame, essential for producing visual effects in film and advertising. I also managed distributor training and support at technology trade shows. Changing roles and professional growth: Autodesk bought Discreet Logic, expanding my role to cover products from both manufacturers. Eventually, I became technical director of Texas Ibérica, leading a team of collaborators and managing the sale and support of products through a specialized distribution channel. Business transformation and new challenges: The owners of Texas Ibérica decided to sell the company. 
I bought part of the company, becoming a partner and owner. I led content-engineering projects and kept innovating in professional multimedia. Personal experiences: I was recently on vacation in the Basque Country, enjoying spectacular scenery in Oñati, San Sebastián, Zarautz, and the French Basque Country. It was a wonderful, relaxing experience that I recommend to everyone. Funny anecdote: A useful tip: don't pour gel or shampoo into a jacuzzi. I learned the hard way when the jacuzzi started filling up with endless foam. It was a fun experience, but I had to drain and refill the jacuzzi. Episode close: In the next episode, I'll share more about my career and experiences in the world of technology. Remember that you can ask questions and get your projects going through our podcast page, or subscribe to the academy to get the information you need. Follow us on our YouTube channel and on Telegram for more visual and specific content. See you next time! Sign up for the academy Telegram channel and YouTube channel Ask via WhatsApp +34 620 240 234 Leave me a voice message

Pojačalo
EP 252: Dane Blačić, kreativni tehnolog - Pojačalo podcast

Pojačalo

Play Episode Listen Later Feb 4, 2024 112:23


"After 20 years, I know my worth without anyone else's validation." In episode 252 of Pojačalo, Ivan Minić's guest is Dane Blačić, a creative technologist who reveals how broadly the skill of creating visual solutions applies in technology, from video games to world championships, elections around the world, broadcasting in all its forms, and almost every industry and field that needs visual solutions. Dane talks about how he started out in 3D design as a boy who begged his parents to buy him a book on 3D Studio Max 4.5, and how he went on to work on huge global events such as world championships, AR Digital Twin projects, and award-winning work, including an Emmy Award for Technical Excellence for outstanding coverage of the 2022 US elections. Topics in this episode: - Intro - Start of the conversation - When I grow up, I'll be... - In the early internet era - First jobs - The learning curve - What has changed - Where the future lies - Favorite projects - The process behind big projects - A serious shortage of people - Augmented reality. The Pojačalo podcast would not be possible without our outstanding partners: - Epson, the world's leading manufacturer of projectors and printers for all purposes: https://www.epson.rs/sr_RS - Orion telekom, provider of the fastest internet infrastructure in Serbia, with over 30 years of experience: https://oriontelekom.rs Support us on BuyMeACoffee: https://bit.ly/3uSBmoa Read the transcript of this episode: https://bit.ly/3w4euaU Visit our site and join our mailing list: http://bit.ly/2LUKSBG Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o Follow Pojačalo on social media: Facebook: http://bit.ly/2FfwqCR Twitter: http://bit.ly/2CVZoGr Instagram: http://bit.ly/2RzGHjN

Progettazione BIM
EP 192 - V-Ray su Revit o su 3D Studio Max? Quale è migliore

Progettazione BIM

Play Episode Listen Later Jan 5, 2024 9:47


Free Resources: FREE REVIT-BIM COURSE: https://bit.ly/34OJRVS FREE AUTOCAD COURSE: https://bit.ly/3fNrgjk FREE RENDERING COURSE: https://bit.ly/3uOt4Nd ACCATASTA QUIZ - FIND OUT WHAT KIND OF CADASTRAL FILER YOU ARE: https://bit.ly/3IxzOHb BIM QUIZ - FIND OUT WHICH BIM YOU ARE: https://bit.ly/3q12O3E CONSTRUCTION-SITE TIPS: https://bit.ly/3vQc4da QUANTITY SURVEYING TIPS: https://bit.ly/3srh7ja DOCFA TIPS: https://bit.ly/3KzfpCc BUILDING-PERMIT TIPS: http://bit.ly/3zdcQSt THE 8 GOLDEN RULES FOR A GOOD RENDER: https://bit.ly/3zUgrnk BOOK A FREE PHONE CONSULTATION: https://bit.ly/3vQuEQ4 ========================================================== To follow ADARA Architettura: YOUTUBE: Subscribe and turn on notifications https://bit.ly/36AMlav WEBSITE: https://bit.ly/2X4BbrG BLOG: https://bit.ly/36w3GS1 FB PAGE: https://bit.ly/2ywKQ0q SECRET FB GROUP "Progettazione Competitiva": https://bit.ly/2X6rn0h ========================================================== Podcast: SPOTIFY: https://spoti.fi/2LYKdzM OVERCAST: https://bit.ly/3d6FFU1 APPLE PODCAST: https://apple.co/3d365pG GOOGLE PODCAST: https://bit.ly/2LWqBwq ANCHOR: https://bit.ly/2Adi7OB ========================================================== #AdaraArchitetturaCentroAutodesk #Università 

Progettazione BIM
EP 153 - Come velocizzare i render su 3D Studio Max

Progettazione BIM

Play Episode Listen Later Mar 31, 2023 13:07


========================================================== Free Resources: FREE REVIT-BIM COURSE: https://bit.ly/34OJRVS FREE AUTOCAD COURSE: https://bit.ly/3fNrgjk FREE RENDERING COURSE: https://bit.ly/3uOt4Nd ACCATASTA QUIZ - FIND OUT WHAT KIND OF CADASTRAL FILER YOU ARE: https://bit.ly/3IxzOHb BIM QUIZ - FIND OUT WHICH BIM YOU ARE: https://bit.ly/3q12O3E CONSTRUCTION-SITE TIPS: https://bit.ly/3vQc4da QUANTITY SURVEYING TIPS: https://bit.ly/3srh7ja DOCFA TIPS: https://bit.ly/3KzfpCc THE 8 GOLDEN RULES FOR A GOOD RENDER: https://bit.ly/3zUgrnk BOOK A FREE PHONE CONSULTATION: https://bit.ly/3vQuEQ4 ========================================================== To follow ADARA Architettura: YOUTUBE: Subscribe and turn on notifications https://bit.ly/36AMlav WEBSITE: https://bit.ly/2X4BbrG BLOG: https://bit.ly/36w3GS1 FB PAGE: https://bit.ly/2ywKQ0q SECRET FB GROUP "Progettazione Competitiva": https://bit.ly/2X6rn0h ========================================================== Podcast: SPOTIFY: https://spoti.fi/2LYKdzM OVERCAST: https://bit.ly/3d6FFU1 APPLE PODCAST: https://apple.co/3d365pG GOOGLE PODCAST: https://bit.ly/2LWqBwq ANCHOR: https://bit.ly/2Adi7OB ========================================================== #AdaraArchitetturaCentroAutodesk #Università

Sixteen:Nine
Gavin Smith, Voxon

Sixteen:Nine

Play Episode Listen Later Mar 15, 2023 46:20


The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT When I was at the big ISE pro AV trade show a few weeks ago, I yet  again saw several products that were billed as holograms, even though they didn't even loosely fit the technical definition. I am always paying attention to news and social media posts that use that terminology, and once in a while, I come across something that actually does start to align with the true definition of holograms and holography. Like Voxon, which operates out of Adelaide, Australia. Started years ago as a beer drinking and tinkering maker project in a garage, Voxon now has a physical product for sale that generates a visual with depth that viewers can walk around and see from different angles. That product is mainly being bought by universities and R&D teams at companies to play with and learn, but the long game for Voxon is to produce or be the engine for other products that really do live up to the mainstream, Hollywood-driven notion of holograms. I had a great chat with co-founder and CEO Gavin Smith. Subscribe to this podcast: iTunes * Google Play * RSS TRANSCRIPT Gavin, thank you very much for joining me. I know you're up in Scotland, but you are based in Adelaide, Australia, correct? Gavin Smith: Yes, that's right. I'm originally from Scotland. I grew up here, spent the first part of my life in the north of Scotland in Elgin, and then I went to university in Paisley, Glasgow and then eventually, after working for 10 years in the banking sector, I immigrated to Australia and I've lived in Adelaide for the last 14 years.  That's quite a climate shift!  Gavin Smith: Yes, it is a climate shift. I was speaking to my wife the day before, and it was about 40 degrees there, just now they're having a heat wave, whereas up in Elgin here, it's about 1 degree at the moment. Yeah. I'm thinking, why are you there in February? But on the other hand, why would you wanna be in Adelaide if it's 40 Celsius?  
Gavin Smith: I quite like the cold. I prefer to be in this temperature right now than 40 degrees, that's for sure.  Oh, I just spent 45 minutes with my snow machine clearing 25 centimeters of snow off my driveway, so I wouldn't mind being in Adelaide today.   Gavin Smith: Thankfully I can have the best of both worlds. I'm heading back there in about a week and a half's time.  I was intrigued by your company. I saw a couple of LinkedIn posts with embedded videos and thought that's interesting and I wanted to speak more. So can you tell me what Voxon does?  Gavin Smith: Yes, sure. So Voxon is a company that started in about 2012-2013, and it came out of two joint research projects. One was me and my friend Will, based in Adelaide, we had a Thursday Night Lab Session, as we called it, where we went to the shed and we drank a few beers and we tried to invent things. It was a bit Weird Science-esque. So this wasn't exactly a lab?  Gavin Smith: It was a shed. Let's face it, with a beer fridge and there was a lot of machinery, which was in various stages of repair. We used to get hard rubbish off the side of the road in Adelaide and take it apart and see what we could make.  It was just amateur invention hour. But at the start of that project, we built fairly rudimentary machines, CNC machines, and we took apart laser scanners and were just inquisitive about how they work from a mechanical point of view. But that then turned into more of a, let's see how far we can push ourselves and learn new stuff, and we'd been inspired by sci-fi, Star Wars, all those sorts of things. So we said, let's try and make the sort of 3D display that we'd seen in the movies, and those science fiction movies always had the same type of display, and that wasn't a screen, that wasn't a headset. It was always some sort of floating image that you could walk around and you could look at from any direction, and the common name for that in popular media was a holographic display. 
That's what people called it. So that's what we set out to build, and we very quickly figured out that this type of display had to be something to do with projecting images or dots onto some sort of surface that moved, and that's because in order to render these little dots that make up the image, inside a space that had physical dimensions, you couldn't make the lights just appear in air. We figured you might be able to do some sort of gas or some sort of lasers and things like that. But the way we approached it was starting off by just shaking business cards back and forwards and shining lasers on them, and then that made a line because of persistence of vision.  I always think that Neanderthal man invented the volumetric display because they probably waved burning embers around on sticks at nighttime and drew those patterns in the air, and those patterns really only existed because of the persistence of vision and the extrusion of light through a volume of space, and so that's what we decided to do, and we realized if you could draw a line, then if you could control the laser and turn it off and on again, you could draw a dot. And so we did that by cutting the laser beam with a rotating CD that was stuck on a high-speed drill with some sticky tape on it. We chopped the laser into little bits, and by controlling the speed of the laser, we ended up having a single dot, which we referred to as a voxel (that's the term we found when we Googled it: a dot in space is referred to as a voxel), and then we extrapolated from there and said if we're building these images out of little pixels of light, or voxels, we need more and more of these dots, and when you do the math you quickly realize that you need millions of dots of light per volume to make an image, and that's difficult. 
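The "millions of dots" claim is easy to check: voxel count is just the product of the per-axis resolutions, so even modest numbers multiply out fast. A quick illustrative calculation (the resolutions here are assumptions for the sake of the arithmetic, not Voxon's actual specifications):

```python
# Voxels needed for one volumetric image at a given per-axis resolution.
# The figures are illustrative, not actual Voxon specs.
def voxel_count(x: int, y: int, z_slices: int) -> int:
    return x * y * z_slices

# A 256x256 projected image swept through 200 depth slices:
n = voxel_count(256, 256, 200)
print(f"{n:,} voxels per volume")        # 13,107,200 - already over 13 million

# Refreshed 15 times a second (one volume per rotation), the projector must
# deliver on the order of 200 million voxels per second:
print(f"{n * 15:,} voxels per second")
```

This is why the team ended up needing an ultra-high-frame-rate projector rather than a conventional one: the voxel throughput, not the per-frame resolution, is the bottleneck.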
And really that started us down the road of experimenting with video projectors, with lasers with all sorts of things and more and more advanced moving surfaces, and eventually, we made a small helical display using a vacuum-formed helix that we basically made in Will's wife's kitchen when she was out, in the oven, and yeah, we created a very small image of an elephant. You might call it a hologram at the time. That's what we called it at the time, but it was a volumetric swept surface image. The terminology I'll go into a bit more detail, but at the time it was just a hologram to us, and we thought this was amazing and we'd never seen it before. So we put a video of it on YouTube and some guys in America who were unbeknown to us doing the same project got in contact with us and push came to shove, we decided to join forces and form Voxon, and that was back in 2013.  So when you created this little elephant, was that like a big ‘aha' moment? Like, “Oh my God, we figured this out”? Gavin Smith: Yes, very much so. We believed at the time, we were the first people to do this. In fact, we weren't. But it was the first time we'd seen this type of image, and it was literally spine tingly amazing, to see a truly three-dimensional object that you could look down from, above, from the sides, from any angle, and it filled a space the same way as you or I fill a space in the physical world, you could measure its length that's spread, that's height and even its volume in gallons or liters. It had a tangible existence in the physical world and not on a screen as other 3D images tend to do. At this point, was this a stationary object?  Gavin Smith: Yes, at this point the elephant was stationary and the way I'd created the elephant was we'd figured out, in order to make this elephant, we first needed to have the swept surface moving. 
So that was the helical screen, which was spinning at about 900 RPM on a very small electric motor, and then we had a video projector that we'd managed to get going at about 1,200 frames per second, and in order to create the images, which were cross sections, helical cross sections of an elephant, that was all done offline. So the way I approached that was, we used software called 3D Studio Max, which is a design software, and in that, I modeled a helix and an elephant, and I then intersected the helix with the elephant in the software, rotated the helix digitally, and then I rendered out the resultant cross-section, the boolean operation of one on the other, and this is like taking a drill and drilling a hole into the ground and looking at just a helical core sample. So really it was like a CT scan of this elephant, but just a slice at a time, and then I rendered those images to a file. I wrote some software to convert it to a new video format that we had to invent to compress all that data into this high-speed image stream, and then projected that onto the helix. Now, of course, the timing of the images and the rotation of the helix were not in sync, and so much like an old CRT screen where the vertical shift is not dialed in, the elephant would drift out the top of the display and come back in the bottom, and at that point, we knew that this was all about a combination of mathematics, optics, precision, and timing. And to make it interactive, we'd have to write a real-time computer program capable of generating these images in real-time, and that was the next part of the puzzle. This was a working prototype, basically.  Gavin Smith: This was a working prototype, yeah.  How big was it?  Gavin Smith: The helix was very small. It was about five centimeters in diameter, about an inch and a half in diameter, and about an inch tall. The projector that we used was a Pico projector at the time, and it was about half the size of a pack of cards. 
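The drifting elephant falls straight out of the timing arithmetic: at the figures mentioned in the interview (a helix at roughly 900 RPM and a projector at roughly 1,200 frames per second), each rotation consumes a fixed number of frames, and any mismatch between the assumed and actual rotation rate accumulates as drift every turn. A minimal sketch of that relationship:

```python
# Frames the projector draws during one rotation of the swept surface.
def frames_per_revolution(fps: float, rpm: float) -> float:
    revs_per_second = rpm / 60.0
    return fps / revs_per_second

# Figures from the interview: ~1,200 fps projector, helix at ~900 RPM.
print(frames_per_revolution(1200, 900))   # 80 frames drawn per rotation

# If the true motor speed is slightly off (say 901 RPM), the slice that lands
# at angle zero shifts a little on every turn - the "drifting elephant":
error = frames_per_revolution(1200, 901) - frames_per_revolution(1200, 900)
print(f"{error:+.4f} frames of drift per revolution")
```

Locking the two together, for example by triggering the projector from a rotation sensor on the motor shaft, removes the drift; the offline prototype had no such feedback, which is why the image slowly scrolled.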
This tiny little thing that we got off the internet from Texas Instruments, and you could focus it at about one centimeter away. So all those little pixels were infinitesimally small, so it was a very high-resolution display, and very small, and we realized that to get that number of frames per second, we'd have to take advantage of one of the most incredible pieces of engineering ever conceived, in my opinion, and that is the DLP chip from Texas Instruments, invented by Larry Hornbeck, who sadly passed away several years ago, and that is an array of mirrors that is grown on a chip using photolithography, the same process by which you create microchips, and that array of mirrors contains upwards of a million mirrors arranged in a two-dimensional array, and they can tilt on and off physically about 30,000 times a second. And that's called a MEMS, a micro-electromechanical system, or in optical terms, a spatial light modulator. So it's something that turns the light on and off at ultra-high speed, and those on-off cycles are what give us our Z-resolution on the display. So that's the slices that make up the display. Wow. So where are you at now with the company, now that you've formed it and you've grown it? What's happened since that very first prototype elephant? Gavin Smith: Following that, we realized that my programming skills were finite. I'd spent 10 years as a COBOL programmer in banking, and I wasn't up to the task of writing what was needed, which was a low-level graphics engine. This didn't need a mainframe, no, and we couldn't afford a mainframe even if we wanted one. So we looked on the internet to see who we could find in terms of programming to join the company, and there were two programmers who stood out. They were referred to as the top two programmers in the world: John Carmack of Oculus, and Ken Silverman, who wrote the graphics engine for Duke Nukem back in the 90s. So we contacted Ken.
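The offline pipeline Gavin describes (model a helix, intersect it with the elephant, render one cross-section per projector frame) can be sketched in a few lines of Python. This is an illustrative reconstruction, not Voxon's actual code: the voxel grid, the single-turn helix parametrization, and all function names are assumptions.

```python
import numpy as np

def helix_height(x, y, angle, pitch=1.0):
    """Height of a single-turn helical surface at (x, y) for a given
    rotation angle; the surface sweeps the full pitch once per revolution."""
    theta = (np.arctan2(y, x) - angle) % (2 * np.pi)
    return pitch * theta / (2 * np.pi)

def render_slices(volume, n_frames, pitch=1.0):
    """Sample a voxel grid (nx, ny, nz) along the helical surface once per
    projector frame -- a CT-style sweep, one cross-section per frame."""
    nx, ny, nz = volume.shape
    X, Y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny),
                       indexing="ij")
    frames = []
    for i in range(n_frames):
        angle = 2 * np.pi * i / n_frames
        z = helix_height(X, Y, angle, pitch)                 # surface height map
        zi = np.clip((z / pitch * (nz - 1)).astype(int), 0, nz - 1)
        # Look up the voxel value where the helix passes through each column.
        frames.append(volume[np.arange(nx)[:, None],
                             np.arange(ny)[None, :], zi])
    return frames
```

Played back in sync with the physical rotation, each frame lights only the voxels the spinning surface is passing through at that instant; the drift Gavin mentions is exactly what happens when the digital angle and the motor's true phase fall out of step.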
John wasn't available, so we contacted Ken and demoed to him at Brown University in Rhode Island, where he was working as basically a computer programming teacher alongside his dad, who was the Dean of Engineering there, and Ken really liked what we were doing, and his understanding of mathematics and voxels and 3D rendering really made him think this was something he wanted to be involved in. So he joined our company as a founder and chief computer scientist, and he has led the development of the core rendering engine, which we call the Voxon Photonic Engine, and that's really our core IP. It's the ability to take any 3D graphics from a third-party source, from Unity, from a C program or something else, and turn it into a high-speed projected image, which can be processed in such a way as to de-warp the images when they're projected, so they're the right size. We use dithering in real time to make color possible, which is similar to newsprint, CMY newsprint in the newspaper, and this all basically allows us to project images onto any type of moving surface now and do it in real time and make applications that are much bigger and extensible, so we can plug it into other programs or have people write their own programs for our displays. So you've emerged from being an R&D effort in the shed to a real company with working prototypes, and now you're an operating company with a product.  Gavin Smith: I like to say we've emerged, but I'd very much say we're still crossing the chasm, so to speak, in terms of the technology landscape. After that initial prototype, we spent many years banging our heads together, trying to work as a team in America, and eventually, Will and I decided to raise some money in Australia and set up the company there. We raised about a million and a half Australian dollars.
It was about a million US dollars back in 2017, and that was enough to employ some extra engineers and business development, and an experienced COO, and start working on our first product, which was the VX1. Now, the VX1 was a different type of display. We decided not to do the helix back then, and we decided to make a different type of display, and that was a reciprocating display, and so we invented a way of moving a screen up and down very efficiently using resonance. It's the same mechanical property, I guess, that all objects have, and that is, at a certain frequency, they start vibrating if there's a driving vibration force. So the Tacoma Narrows Bridge falling down when the wind blew at the right speed was an example of resonance destroying something. But an opera singer breaking a glass at the right pitch is another example of something that vibrates due to a driving force, and so we found out that if we built a screen which was mounted on springs of a very particular weight, and the springs had a very particular spring constant, we could vibrate that subsystem and the screen would vibrate up and down very efficiently and very fast, fast enough that you couldn't see the screen. So that's what the VX1 became, and onto the back of that screen, we project images, and those images form a swept volume, and the VX1 had a volume of about 18x18x8 cm, I think it's about 7 inches square by about 3 inches tall, and we have a single projector mounted inside of that, and a computer and a ton of electronics keeps it all in sync, and we built a software API for it and a library of programs that come built into it. So it's off the shelf: you turn it on and it works. And so we built that back in 2017, and over the last five years, it's evolved into something which is very reliable, and now you can't tell them apart when they're manufactured; at the start, each one might have looked different with hot glue and duct tape and all the rest of it.
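The resonant drive Gavin describes follows the textbook spring-mass relation f = (1/2 pi) * sqrt(k/m). A quick sketch of that arithmetic; the 50 g screen mass and 30 Hz target below are made-up illustrative values, not the VX1's real specifications.

```python
import math

def resonant_frequency(k, m):
    """Natural frequency in Hz of a mass m (kg) on springs with combined
    stiffness k (N/m): f = sqrt(k / m) / (2 * pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

def stiffness_for(f, m):
    """Combined spring stiffness (N/m) needed so a screen of mass m (kg)
    resonates at f Hz (the formula above, inverted for k)."""
    return m * (2 * math.pi * f) ** 2

# Hypothetical example: a 50 g screen assembly tuned to resonate at 30 Hz
# needs springs totalling roughly 1,780 N/m of combined stiffness.
k = stiffness_for(30.0, 0.05)
```

Driving the screen at exactly its natural frequency is what makes the motion cheap: the springs store and return the kinetic energy each cycle, and the drive only tops up the losses.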
But now we have a complete digital workflow. We outsource most of the manufacture of the parts, and we do final assembly, software, QC, and packaging, and then ship them out. We've sold probably about 120 VX1s globally since 2017, and those have gone out to companies all around the world, like Sony, MIT, Harvard, CMU, Unity, BAE Systems, Verizon, Ericsson, a lot of companies, and they've bought them and they're generally going into explorative use cases.  Yeah, I was going to say, it sounds like they're going into labs as opposed to stores.  Gavin Smith: Yeah, they're not going into stores. The VX1 is really an evaluation system. It's not prime-time ready for running all day long, and the reason for that is it has a vibration component to it, and also the refresh rate of the VX1 is actually variable within the volume. It's hard to explain, but the apparent volume refresh rate is 30 hertz in the middle and 15 hertz at the poles, and so it has a little bit of flicker. But in a dark environment, it's really spellbinding, and it's actually used in museums. There are some in Germany in a science museum there. It's been used in an art exhibition in Paris, where the art was created by David Levine and the MIT Media Lab, and it's frequently used in universities, and it pops up in all sorts of trade shows, and it's always a talking point and it always gathers a crowd around it, and what we like to say about the volumetric display from a marketing point of view, or really as a description of what it is: it's really about creating a digital campfire. That's the kind of user experience. It's gathering people around something intimately in a way that they can still have eye contact and maintain a conversation, and each person has their own perspective and view of the 3D data.  The scale you're describing is still quite small, and that seems to be what I've experienced when I've seen demonstrations at the SID trade show of light field displays.
They're all like the size of a soda bottle at most.  Is that a function of just the technology? You can't just make these things big? Gavin Smith: You can make them bigger, and we have since that point. The biggest display that we've made so far was one that we just delivered to BAE Systems in Frimley, near London, and for that one, we've gone back to the helical display, and it's 46 centimeters in diameter and 8 centimeters deep. So that's about nine times the volume of the VX1. So that's a much bigger display.  Now, with a swept volume, you can go as big as you'd like within the realms of physics, and what I mean by that is, with a rotating display, you can make the display as big as something that can rotate at a speed that's fast enough to make the medium kind of disappear. So if you think about propellers and fans, for example, I've seen pedestal fans that are a meter in diameter running faster than we run our display, and with rotating displays, it's easier to do because you have conservation of momentum and you have inertia which drives the display around, and you can enclose the rotating volume as well, so that you're not generating airflow as a fan does.  So for example, if you have a propeller-shaped blade encased in a cylindrical enclosure, and that enclosure is spinning, then you don't get the air resistance you get with a fan, and the display that we made for BAE Systems is effectively silent and flicker-free, because we're running at exactly 30 hertz throughout the volume, which means you don't get flicker. But reciprocating displays, ones that go up and down, scaling them is more of a challenge, because you're having to push the air out of the way up and down, and as the screen moving up and down gets bigger, if you're projecting from behind, for example, you also have to start considering things like the flexing of the substrate that you're projecting onto.
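The "exactly 30 hertz throughout the volume" figure comes down to simple division: the number of depth slices per volume refresh is the projector's binary frame rate divided by the sweep (rotation) rate. The 4,000 fps figure below matches the projector speed quoted later in the interview for the Space Invaders machine, but pairing it with a 30 Hz sweep here is an illustrative assumption.

```python
def z_slices(projector_fps, sweep_hz):
    """Distinct cross-sections ("depth slices") per volume refresh:
    binary projector frames per second divided by sweeps per second."""
    return int(projector_fps // sweep_hz)

# e.g. a 4,000 fps DLP stream swept at 30 Hz yields about 133 depth slices,
# and every point in the volume is revisited exactly 30 times a second.
slices = z_slices(4000, 30)
```

This is also why a reciprocating display whose effective rate varies between 30 Hz in the middle and 15 Hz at the poles shows a little flicker, while a constant-rate rotation does not.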
For a front-projection display, where you project down from the top, we can go bigger, because you can make a very lightweight, thicker screen out of exotic materials, and those are materials that are very light but very stiff. Things like aerogels and foamed metals, and very lightweight honeycomb structures. That way you can go bigger, but we may need to move into the realms of using reduced-atmosphere displays, partial vacuums, and things like that to reduce the resistance, or using materials that are air-permeable, such as meshes that move up and down very quickly. And we have done experiments with those and found that we can go a lot bigger.  However, with the current projection systems that we're using, you then have to increase the brightness, because the brightness of the image is also stretched out through a volume. If you imagine a home cinema projector projecting 3,000 or 4,000 lumens, you have to consider that each of the images that it's projecting is pretty much evenly lit in terms of all the pixels that you're projecting. Whereas what we are doing is projecting these thousands of images, and we're only illuminating the cross-section of every object. So we're maybe only using 1% of the available brightness of the projector at any one time, unless you project a solid slice all the way across. You're really building up this construct, and the way I explain it to people is that it's very similar to 3D printing. If you look at how a 3D printer works, we are doing exactly the same thing, except we are printing using light instead of PLA, and we're printing thousands and thousands of times faster.  In digital signage, the thing that always gets people nervous is moving parts, and that directly affects reliability and longevity. How do you address that?
Gavin Smith: So the VX1 is a good example of moving parts in a display that isn't yet ready for long-running use, and when I say long-running, we do have it in exhibitions, but we have recently engineered it in such a way that the parts that may break, or will break, are the four springs that drive the machine, and those have been engineered to resonate at a particular frequency. Now, after several hundred million extensions of those springs, they can fatigue, and they will fatigue-break, and that's something that we're working on, and that might be a month or three weeks of running 24/7, and so we've made those springs user-replaceable. You can change them in two or three minutes for a fresh set. So it's almost like the mechanical profile of something like an inkjet printer, where you have to change the cartridge every so often. And we find with mechanical stuff, people accept mechanical things in their lives as long as the maintenance/utility ratio is at a level they can accept, like bicycles, cars, and things like that. You maintain them as long as their utility outweighs the inconvenience of the repair. Now, for projection equipment and things like that in digital signage, there are a lot of two-dimensional technologies that are ultra-reliable on those fronts: big LED panels, 2D video projectors, and just lighting. You can turn them on and leave them and you should be okay. So with our rotating displays, and we have another rotating display that we're working on which we can't discuss just now because it's still under NDA, part of the reason we're going down that rabbit hole, or down that design path, is that we can make rotating displays which are very reliable; they're effectively like a record player.
You turn it on and it spins around, and you could leave it and come back in three weeks and it would still be spinning around, and also a rotating display, if properly manufactured within tolerances, won't cause vibration, and vibration is really the thing that can cause the issues, because vibration can lead to fatigue and failure in electrical and electronic components, small cracks in circuits, and things like that. So from our point of view, we're going towards rotating mechanics, because that ultimately allows us to make things which are reliable enough to be used in a wide range of industries, including digital signage, advertising, medical imaging, gaming, and many more. In my world, there are all kinds of companies who are saying that they have holographic products of some kind or another. As somebody who's doing something that sounds very much like a hologram, or close to what we thought of when we all saw Star Wars, what do you think of those things?  Gavin Smith: I don't like to be a troll, first of all, on LinkedIn, so I try to shy away from saying, look, that's rubbish. But what I try to do is politely point out how things work when it's not clear from someone's post how something might work, or where it's misleading. Now, if you look at the term hologram, it comes from the Greek hólos and grammḗ, which means the whole message, and in a way, I tend to think that actual holograms, which are created using lasers, laser interference patterns, light beams, and things like that, don't represent the whole message. Because if you take your credit card out, which is one of the few places you will see a hologram, you'll notice that you can't look down on the hologram from above, and you can't turn the card over and look at it from the back. They are a limited view of something, and so the term hologram has become, in popular fiction and popular media, really a catchall for anything that is sci-fi 3D related, right?
And it's misused. Everyone calls it a hologram, and our staff sometimes call it a hologram. I like to say it's not a hologram, because it has a lot more features than a hologram. Holograms have some really interesting properties, one of which is that you can cut a hologram into 10 little pieces and it turns into 10 individual little holograms, and that's a really interesting thing. But holograms, from a 3D point of view, don't exist in signage anywhere. They simply don't. The terminology used to describe things that you see in signage and popular media is completely misused, and I like to go through the technologies and categorize them. First of all, there are volumetric displays, of which we're the only company in the world making a commercial one. There's one other company, Aerial Burton, based in Japan, that makes a volumetric display, but it's a very high-tech scientific prototype that uses lasers to explode the air and has very low resolution. And then you've got autostereoscopic 3D displays, and they broadly fit into the category of lenticular displays, which are, as you probably know, LCD panels with a plastic lens array on them that allows you to see a left and a right image, and those left and right images can give you a stereoscopic view. I would call them stereoscopic displays, because they're not 3D. You can't look at them from any direction, and they don't physically occupy three-dimensional Euclidean space, which is what the real world is, and those types of displays come in different formats. So you get some with just horizontal parallax, which means you can move your head left and right and see a number of distinct views. You've got some where you can move up and down as well, and also get a little bit of vertical parallax, and there's probably five or six companies doing those sorts of displays. You've got Looking Glass, Light Field Lab, Acer, and Sodium, so that area can grow.
The physical size of those displays can get bigger, but the bigger they get, the harder it is to move further away, because your pupil distance means it's harder to get a 3D view, and also with any display like that, the 3D image that you see, because it's the result of you seeing two independent images with your left and right eye, can never leave the bounds or the window of the display, and that's something that's misused a lot in advertising: they show a 2D monitor with the image leaping out beyond the border of the monitor, and that just can't happen. That breaks the laws of physics. So that's the autostereoscopic 3D landscape, and it's hard to say "that's an autostereoscopic 3D display," because people zone out and they go, is it a hologram? And no, it's not.  The other types of 3D that are popular just now are, obviously, the glasses-based displays, AR, VR, mixed reality, and we don't really mind about those, because you have to put something on your head, and that's a different thing really. Those offer you an immersive experience where you go down a rabbit hole and you're in another world, and that's not what we are about. And then you've got the fake 3D displays, which are not 3D stereoscopically but appear that way, and that's where I get slightly annoyed, though I understand there are people making types of signage, I guess you would say, that are perfectly suitable for a scenario, and those are things like Pepper's ghost, which is when you reflect a 2D image off a big piece of glass or plexiglass, and the famous one is the Tupac hologram at Coachella. I met the guy behind it and spoke to him.
He's a really lovely guy, and I had a good chat with him about that, and he knows full well that it's an illusion, but it's the illusion that Disneyland has been using for many years, and it's a perfectly good illusion for a seated studio audience, because they see someone on stage, and they're doing it now with, I think, the ABBA show in London, which is a similar type of setup.  They call them holograms, but it's a 2D picture that's far enough away that you can be made to believe that it's three-dimensional, and it might exist at different levels, like a diorama. You could have a stack of images, on fly screens or whatever, that appear to be layered, but ultimately they are 2D. And then the one that's come out recently, which causes probably the most confusion for people, is the anamorphic projection on large billboards. Everyone's seen these displays on LinkedIn and YouTube, and they tend to appear on large curved billboards in parts of China, where the rental of the billboards is sufficiently cheap that you can put these big images up there, film them from one particular spot in 2D, and then put that on LinkedIn and have people comment on it and say, wow, that's an amazing hologram. Even though a) they haven't seen it in real life, and b) it's not a hologram, and it's not even three-dimensional. It's a perspective-based 2D trick, and so one of our challenges is expectation management, and that is, people see large-scale fake 2D images and fake 3D images, and then they conclude that it must be possible and they want to buy one, and then when they see ours they go, oh, it's much smaller than I imagined, and you feel like saying, it's real. It's actually based on science, and you can walk around it. And that's the challenge we're at just now.
Trying to move away from this feeling that you have to have the biggest display in the world for it to be valid. A lot of the business for us, and a lot of the inquiries we get, are from the likes of the Middle East, where they want to build very big, very impressive, very bright, very colorful displays, and they say, we want a hologram that will fit in a football stadium and fly around in the sky, and you have to say, well, that's great, but that's also impossible using anything that's even imaginable today, let alone physically achievable, and so yeah, we are very much a case of trying to be as honest as we can about the limitations, but also about the opportunities, because regardless of the fact that our technology is relatively small compared to large-screen billboards, we have the ability to create sci-fi-inspired interactive displays that you can put in personal spaces, in museums, in galleries, in shopping centers, and up close, under scrutiny, they really do look like something you might see in a Marvel movie, and that's the kind of relationship we're trying to find with other companies as well. There are other types of display as well. You probably talked to Daniel about some of his displays, which are levitating grains of dust and things like that, and the challenge I have with them is, yes, you can make a 3D image, but you have to look at how long it takes to make that 3D image, and they're really more akin to painting with light. It's long-exposure photography. You have to manipulate something and move it around over a long period of time to build a single image, and scaling those types of displays is impossible.
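The scaling problem with displays that draw one dot at a time is easy to quantify. A sketch with rough numbers: the point-scan rate is the 1,000-2,000 dots per second quoted for laser plasma displays in the next answer, while the million-voxel volume at 30 Hz is an illustrative assumption.

```python
def required_dot_rate(voxels_per_volume, refresh_hz):
    """Dots that must be addressed per second to refresh a whole volume."""
    return voxels_per_volume * refresh_hz

point_scan_rate = 2_000                    # optimistic laser point-scan rate
needed = required_dot_rate(1_000_000, 30)  # 30 million dots/s for 1M voxels at 30 Hz
shortfall = needed / point_scan_rate       # roughly four orders of magnitude apart
```

That gap is why point-scanned volumetric images stay sparse, while projector-based swept volumes, which address whole slices in parallel, can show dense, detailed objects.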
It's the same with laser-based displays. Whenever you're moving a single dot around, you run out of resolution extraordinarily fast, because it's a linear thing, and even with Aerial Burton exploding the air with a laser, they can only do about 1,000 or 2,000 dots every second, and that breaks down to being able to draw maybe a very simple two-dimensional shape, whereas to draw a detailed image, an elephant or anything like that that we've displayed in the past, requires upwards of 30 or 40 million dots a second; each volume contains millions of dots.  Where do you see this going in, let's say, five years from now? And are you at that point selling products, or are you licensing the technology to larger display manufacturers? Or something else? Gavin Smith: So at the moment, what we're doing is looking for projects that we can scale, and the technology can be applied to a range of different industries, as you can imagine with any new display technology. You could use it for CT scans, you could use it for advertising, for point of sale, for a whole lot of different things. But you have to choose those projects early on, when the technology is immature, and that is the low-hanging fruit, if you want to use that term. Our low-hanging fruit at the moment, we believe, is in the entertainment industry, digital out-of-home entertainment to be specific, which is the likes of video gaming and entertainment venues. So in 2018, we were at the Tokyo Game Show with one of our machines, and we were situated next to Taito, the company that made Space Invaders, and their board members, their senior members, came across and played with our technology, and they really liked it.
And so we entered into a conversation with them, and over several years, we have built a Space Invaders arcade machine called Next Dimension, and that's using our rotating volumetric display with three projectors, each running at 4,000 frames per second, and a large rotating volume, and we've written a new Space Invaders arcade game, and Taito has granted us the license to bring that to market. In order to do that, we're now doing commercial testing and technical testing, which involves taking the technology into venues, playtesting it, and getting feedback from the venues on the suitability of the game and the profitability of it as a product. So with that game, our plan is to follow in the footsteps of the previous Space Invaders game, which was called Frenzy, made by Raw Thrills. It sold 3,000 or 4,000 units globally. So if you could do that, it would be a profitable first venture in terms of bringing the technology to market, and at the moment, we're looking to raise some capital. We need to raise $2-3 million USD to do the design for manufacture for that and build the first batch of machines, which would be rolled out globally.  Now, that's really seen for us as a launch of the technology, using the IP of Space Invaders as a carrier, a launch vehicle for the technology, but once launched, and once our technology is widely known and understood, what we then plan to do is build our own revenue-generating model and technology platform that can be deployed to venues around the world, who can use this as a kind of entertainment device where you can run different IP on it from different vendors and do a sort of profit share with the venue owners. So a cinema, Chuck E. Cheese, Dave & Buster's, those types of venues, as well as bowling alleys, VR arcades, and all those types of entertainment venues that are currently starting to grow in strength, largely because people are now looking for entertainment experiences, not necessarily just staying at home.
COVID obviously threw a curveball our way as well. When our Space Invaders machine was sent to Japan for testing, COVID had just happened, so it went into internal testing within Taito, and then Square Enix, Taito's parent company, decreed that Taito would no longer manufacture arcade machines but would license their IP only, so that kind of threw a spanner in the works, and they've come back to us and said, we love the game, but we want you to bring it to market, not us. So that's one thing we're working on just now. There's a video of Space Invaders: Next Dimension on YouTube that you can look at, and it's a really fun experience, because it's a four-player game. We've added the volumetric nature. You can fly up and down during sub-games. You can bump your next-door neighbor with your spaceship and get a power-up. It really is, for us, a way of saying, look, this is a new way, a new palette with which to make new gaming experiences, and the future is really up to the imaginations of people writing software.  All right. That was super interesting. I learned a lot there and, as is often the case, some of it I even understood. Gavin Smith: That's great. I'm glad you understood. It is a hard thing to wrap your head around, especially for us trying to demonstrate the nature of the technology in 2D YouTube videos and LinkedIn videos. You really have to see it with your own eyes to understand it, and that's why, this week, I was over for a meeting with BAE Systems, but I took the opportunity to spend several days in London at a film studio in Soho; the owners very graciously let me have a demonstration room there, and I spent two days last week demonstrating the product to ten or so companies who came in to see the technology, and it's only then that they really start to get their creative juices flowing, and that's where POC projects kick off.
So what we're looking for just now are companies that have imaginative people and a need to create some new interactive media that can be symbiotic with their existing VR and AR metaverse-type stuff, but really something that's designed for people up close: personal, intimate experiences.  If people want to get in touch, where do they find you online?  Gavin Smith: So we have a website, which is just www.voxon.co. Voxon Photonics is our Australian company name, and you can find us on LinkedIn. Actually, my own personal LinkedIn is generally where I post most stuff. That's Gavin Smith on LinkedIn; you can look me up there. And then we have the Voxon Photonics LinkedIn page, and we're on Twitter and Facebook and YouTube as well. We have a lot of videos on YouTube. That's a good place to start. But if you want to get in touch, contact us via voxon.co. Drop us an email and we'll be happy to have a meeting and a video call.  All right, Gavin, thank you so much for spending some time with me.  Gavin Smith: My pleasure. Thanks very much for having me.

Dev Game Club
DGC Ep 259: Prince of Persia Bonus Interview with Remi Lacoste!

Dev Game Club

Play Episode Listen Later May 5, 2021 74:33


Welcome to Dev Game Club, where this week we present a bonus interview with Remi Lacoste, who reflects on what it took to make the camera of Prince of Persia: The Sands of Time, which was far more authored than previous third-person platformers and action adventures. Dev Game Club looks at classic video games and plays through them over several episodes, providing commentary.

Podcast breakdown: 0:50 Interview 1:04:29 Break 1:05:00 Outro

Issues covered: trial by fire with the last talk of the week, "If the job is well-done, people won't realize how much work there is," being the bad goalie, exploring a new medium for storytelling, working with level designers, developing an aesthetic, setting up the alternate camera, providing the player more spatial context, expressing yourself as an artist, getting views you couldn't have gotten otherwise, building a relationship with level design, anticipating problems, discovering the rules as they went, attempting to preserve the feeling of the 2D game, a 3D navigation puzzle, helping guide the player, camera-relative steering, finding the exact moment to cut, camera behaviors, splines, placing thousands of trigger volumes, preventing panning for jumping between walls, moving the camera to see better, helping the player better understand a space, placing shadows well on the wall to help the player understand the timing of jumps, creating exclusion lists to prevent bugs, maintaining controller and player facing continuity, changing camera at a knowable time, avoiding a problem that's hard to train/tutorialize, the fragility of the player-character control mapping, watching someone else play your game, having flybys and not enjoying watching them, audience problems vs play problems, using camera as a crutch for weak level design, having to show the player something they couldn't see via cuts and trying to avoid that, finding the way to frame the destination while you're activating it, blending back when you cut, collaborating with an excellent team, a major milestone, the game's tone and mechanics, the timing with Ubisoft taking off, a darker tone for the sequels, everything needing to work together, pushing the GDC talk again

Games, people, and influences mentioned or discussed: Assassin's Creed (series), Behaviour Interactive, WET, Crystal Dynamics, Tomb Raider (2013 reboot series), The Avengers, The Initiative, Microsoft, NES, Patrice Desilets, Philippe Morin, Half-Life, Rainbow Six (series), MYST (series), Donald Duck: Quack Attack, Crash Bandicoot, Rayman, Mario (series), Resident Evil, Devil May Cry, 3D Studio MAX, Alex Drouin, David Châteauneuf, Raphaël Lacoste, Splinter Cell, Michel Ancel, Beyond Good and Evil, Final Fantasy VI, Death Stranding, The Last Story, Mistwalker Studios, Kirk Hamilton, Aaron Evers, Mark Garcia.

Links: Creating an Emotionally Engaging Camera for Tomb Raider

Next time: More of Final Fantasy VI!

Twitch: brettdouville or timlongojr, Instagram: timlongojr, Twitter: @timlongojr and @devgameclub DevGameClub@gmail.com

CG Garage
Episode 300 - Gary Yost - 3ds Max Development Team Leader & WisdomVR Project Founder

CG Garage

Play Episode Listen Later Nov 9, 2020 81:57


CG Garage’s 300th podcast features a true superstar of CG: 3D Studio and 3ds Max team leader Gary Yost. After working for an Atari magazine in the mid-1980s, Gary became involved in software, eventually overseeing the development of 3D Studio from its early days as a DOS package to its limelight-stealing Windows release as 3D Studio MAX 1.0. He talks about how his five-person team (including one barely out of his teens) worked from home in the early '90s to create this pioneering, game-changing 3D software. Today, Gary is returning to his artistic instincts with a high-tech short film about the COVID-19 pandemic, using superpowered 360-degree cameras, 3ds Max, V-Ray and Chaos Cloud to produce an immersive experience. It’s fascinating to hear about Gary’s experiences in computing and entertainment over the past few decades. He’s a rare character who can combine creative energy and technical know-how with a knack for choosing the right people for the right roles.

7 BASES DO SUCESSO
EP.69 - 7 BASES DO SUCESSO COM JOÃO PICO

7 BASES DO SUCESSO

Play Episode Listen Later Nov 6, 2020 97:55


João Pico is my guest on the "7 Bases do Sucesso com..." program, and there's no better way to end the day than with this super series and another excellent professional: a Lisbon native, 49 years old, with a master's degree in Audiovisuals and Multimedia. He has 25 years of experience in video production, music composition, and the development of audiovisual projects, formatting television programs, and production, promotion, and distribution, where he applies his specialized knowledge of Web Video Marketing. From 1998 to 2018 he was a senior editor at SportTV, where he won 7 image-editing awards. Before SportTV, between 1994 and 1998, he worked at TVI as a video editor. He also completed the music course at the Hot Jazz Club, trainer training at Cenjor, and courses in Photography at ARCO, Logomedia, and 3D Studio Max. He is the producer of an online interview project.

Future of Coding
#41 - The Aesthetics of Programming Tools: Jack Rusher

Future of Coding

Play Episode Listen Later Jul 26, 2019 100:55


Ivan Reese guest hosts. I've been intimidated by Jack Rusher from the first blush. I mean, he's wearing a high-collared fur coat and black sunglasses in his Twitter pic, and his bio includes "Bell Labs Researcher". So when tasked with choosing a subject for my first interview, I immediately reached out to him, leaning in to my nervousness. His reply included the detail that he's "generally hostile to the form" of podcasting. Terrifying. When we talked, it was about Lisp — several flavours of Scheme and Racket, Common Lisp, Lisp machines, Black, Clojure, parens of all stripes. It was also about aesthetics, and graphic design, the relative ignorance of typical programming tools to the capability of the visual cortex, and how to better tap it. This podcast's streak of discussions about Coq, miniKanren, TLA+, and Alloy continues, with the addition of QuickCheck and the like. Jack presents his work on a literate editor for Clojure called Maria.cloud, an environment that makes a number of unusual and interesting choices both in the design and implementation, reaching for an ideal blend of features that afford both instant beginner enthusiasm and unrestricted expert use. We pay our respects to the phenomenal red carpet that video games roll out to new players, inviting them in to the model and mechanics of the game with an apparent ease and apt ability that should be the envy of programming toolsmiths like us. The show ends with Jack sharing an excellent collection of plugs, ranging from academic papers by the relatively obscure Stéphane Conversy, to the aesthetically-lush programming tools pouring out of Hundredrabbits's Devine Lu Linvega. I am no longer terrified of Jack's persona. Rather, I am now humbled by his towering expertise and the wildly varied accomplishments of his career, and it was a thrill to get to tour them in this interview. Best quote of the show: "A kind of grotesque capitulation to sameness." Damn, Jack! Links Jack Rusher is our esteemed guest. 
He is on Twitter, Instagram, and SoundCloud. Applied Science is his consultancy, and Maria.cloud is their beautifully designed literate Clojure editor. Ivan Reese hosts. He's on Twitter, works on educational media, is making a visual programming tool, and plays 100 instruments — badly. He started life with HyperCard and now loves Max/MSP. Repl.it is our Sponsor. Email jobs@repl.it if you'd like to work on the future of coding. Complex Event Processing is a bit of technology Jack helped commercialize. ClojureVerse is where a discussion of Luna led to the Visual Programming Codex, based on the History of Lisp Parens by Shaun Lebron. QuickCheck, miniKanren, Datalog, Black Scheme, and Oleg Kiselyov are touched on. Out of the Tar Pit has its mandatory mention, and then Chez Scheme saves the day. I wanted to link to the Maru project but the author, Ian Piumata's website seems to be down and I could find no other canonical reference. There's some discussion on Hacker News and such. If you know of a good link, I'd love a PR. Scheme Bricks and Media Molecule's Dreams are interesting touchstones on the road to future visual programming languages. Ivan has an affinity for Pure Data and Max/MSP and vvvv. When talking about tools for beginners versus experts, Rich Hickey's Design, Composition, and Performance is invoked — and poor Shostakovich. Jack's main is Maria.cloud, named in honour of Maria Montessori. SICP gets a nod. Maria has proven useful at Clojure Bridge. Matt Hubert [Twitter] created the Cells abstraction that Maria was eventually built atop — it's similar to ObservableHQ. Video games like Steel Battalion, The Witness, and Dead Space have strong opinions about how much, or how little, visual interface to expose to the player. Complex 3D tools like Maya and 3D Studio Max are GUI inspirations for Ivan, where Jack and Matt prefer simplicity, so much so that Matt wrote When I Sit Down At My Editor, I Feel Relaxed. 
Dave Liepmann is the third leg of the stool in Applied Science, Jack's consultancy. Maria originally had a deployment feature like Glitch. There's a great talk about Maria by the Applied Science trio, containing a mini-talk called Maria for experts by Jack. Pharo is an inspiring modern Smalltalk. Fructure is a wildly cool new structured editor, and its designer Andrew Blinn is fantastic on Twitter. Extempore and Temporal Recursion by Andrew Sorensen offer some interesting foundations for future visual programming tools. Sonic Pi and Overtone are lovely audio tools by Sam Aaron, widely praised and deservedly so, and everyone should back Sam's Patreon. A visual perception account of programming languages: finding the natural science in the art and Unifying Textual and Visual: A Theoretical Account of the Visual Perception of Programming Languages are obscure but beautiful papers by Stéphane Conversy. Aesthetic Programming is one of Ivan's favourites, and the author Paul Fishwick just so happened to teach Jack's graphics programming class at Uni. Orca is a mind-bending textual-visual-musical hybrid programming tool by Hundredrabbits, who are Devine Lu Linvega and Rekka Bell. Notwithstanding that they live on a sailboat(!), they do an amazing job of presenting their work and everyone in our community should take stock of how they accomplish that. Ableton Push and Ableton Live are practically state-issued music tools in Berlin. (Not to mention — Ivan edited this podcast in Live, natch.) thi.ng and @thi.ng/umbrella are Jurassic-scale libraries by Karsten Schmidt, who wrote blog posts about Clojure's Reducers in TypeScript. Finally, Nextjournal are doing great work with their multi-lingual online scientific notebook environment. The transcript for this episode was sponsored by Repl.it and can be found at https://futureofcoding.org/episodes/041#full-transcript

Google Cloud Platform Podcast

Day two of NEXT was another day full of interesting interviews! Melanie and Mark sat down for quick chats with Haben Girma about accessibility in tech and Paresh Kharya to talk about NVIDIA. Next, we touched base with Amruta Gulanikar and Simon Zeltser to learn more about Windows, SQL Server, and .NET workloads on Google Cloud. The interviews wrap up with Henry Hsu & Isaac Wong of Holberton. Haben Girma The first Deafblind person to graduate from Harvard Law School, Haben Girma advocates for equal opportunities for people with disabilities. President Obama named her a White House Champion of Change. She received the Helen Keller Achievement Award and a spot on Forbes 30 Under 30. Haben travels the world consulting and public speaking, teaching clients the benefits of fully accessible products and services. She’s a talented storyteller who helps people frame difference as an asset. She resisted society’s low expectations, choosing to create her own pioneering story. Haben is working on a book that will be published by Hachette in 2019. Paresh Kharya Paresh Kharya is Group Product Marketing Manager for data center products at NVIDIA, responsible for product marketing of NVIDIA’s Tesla accelerated computing platform. Previously, Paresh held a variety of business roles in the high-tech industry, including group product manager at Adobe and business development manager at Tech Mahindra. Paresh has an MBA from the Indian Institute of Management and a bachelor's in computer science and engineering from the National Institute of Technology, India. Amruta Gulanikar & Simon Zeltser Prior to joining Google, Amruta spent 5+ years as a PM in the Office division at Microsoft working on many different products. Just before she left, she worked on launching a new service and supporting apps - "O365 Planner," which offers people a simple and visual way to organize teamwork. 
At Google, Amruta owns Windows on GCE which includes support for premium OS & Microsoft Server product images, platform improvements to support Windows workloads on GCE. Simon Zeltser is a Developer Programs Engineer at Google, working with .NET and Windows on Google Cloud Platform. Henry Hsu & Isaac Wong Henry Hsu is a software engineer trained at Holberton School. He has experience with C, C++, Python, Ruby/Rails, JavaScript, HTML/CSS, MySQL/Postgres, Unity, Game Maker Studio, Linux, Photoshop, 3D Studio Max, systems design, algorithms, and devops. Isaac Wong attends the Holberton School. He has a degree in horticulture from Texas A&M. Interviews Edge TPU site Cloud IoT Edge site Cloud Armor site Titan Security Key site Building on our cloud security leadership to help keep businesses protected blog Google Cloud Container Registry site Haben Girma’s website site Haben Girma’s presentation at NEXT video San Francisco Lighthouse for the Blind site National Federation of the Blind site National Association of the Deaf site NVIDIA site NVIDIA and Google Cloud Platform site Google Cloud Platform Podcast Episode 119 podcast Velostrata site GKE site Google App Engine site Stackdriver Debugger site Windows on Google Cloud Platform site SQL Server on Google Cloud Platform site .NET on Google Cloud Platform site Holberton School site Unity site GKE On-Prem site TensorFlow site Where can you find us next? We’ll both be at Cloud NEXT in Moscone West on the first floor, so come by and say hi! We have chocolate!

End Credits - The Behind the Scenes in Entertainment Podcast

Rob Amenta is a Toronto-based artist/animator, mostly a self-taught artist, though he did attend OCAD as well as the Toronto School of Art and Design. His main creative tool these days is 3D Studio Max (a 3D rendering program, for those not familiar). Rob is currently trying to get his TV pilot, "The Nuklear Family," funded through Kickstarter. The Nuklear Family started as a 2D project (and we loved the style of that), but he has since moved it to a CG-style series. We chat with Rob about his series, the music behind it, and how he plans to get his vision out to the world.

Montreal Sauce
You Need Poutine Bros

Montreal Sauce

Play Episode Listen Later Jun 24, 2015 60:00


This is the second half of our conversation with artist & developer, Jen Montes. We discuss being multilingual, Puerto Rico, Python, GitHub, and poutine. Jen currently works on the TV show Archer. Chris tries to show off, saying he learned Spanish from watching Home Alone in Mexico. Jen wonders what the difference is between European French and Quebec French. Jacob, who is in the chatroom, chimes in to share his experience in Quebec. Jen grew up in Puerto Rico, which reminds Chris of a recent Last Week Tonight video on voting rights in U.S. Territories. The Insular Cases are the awkward Supreme Court documents that surmise people in territories under the control of America do not receive all the rights guaranteed in the constitution. Paul is reminded of the videos by CGP Grey, American Empire and The Difference between the United Kingdom, Great Britain and England Explained. Paul explores Jen’s GitHub page and we talk Python. Jacob is looking forward to ES6 Harmony, the sixth edition of JavaScript. Jen is interested in the way Tent and the IndieWeb movement are working. She hopes to build a personal site that interacts with other networks, but with all her data in one central place. One project Jen is working on is a file browser for the nifty, Dropbox-like Syncthing. Remembering PaintShop Pro and 3D Studio Max, which are still made! Jen tells us that you can still get The Print Shop! That’s more exciting than baseball. Jeena’s site is a great example of IndieWeb ideas in the real world. He explains it in a blog post, "IndieWeb: Join us!" Known is a service using the IndieWeb principles. Without programming knowledge you can jump right in and begin publishing your own site. Jen is not a tremendous TV show junkie, but she does appreciate watching shows as a social activity. She watches The Walking Dead, Game of Thrones, Unbreakable Kimmy Schmidt and of course Archer with groups of friends. We hope you’ve enjoyed this episode with Jen Montes. 
It’s fairly obvious that we had a good time. Up next, a topical show on privacy. Support Montreal Sauce on Patreon

EdGamer
EdGamer Edvisor: Top 5 (ish) Digital Creation Tools

EdGamer

Play Episode Listen Later Feb 19, 2013 14:53


EdGamer Edvisor is a compacted but highly focused version of EdGamer that gives the listener a quick list of tools or reviews to aid learning in the classroom. This episode highlights our top 5 (ish) digital creation tools for the classroom. Enjoy!

- The right computer lab – Macs, monitors & Wacom tablets
- Adobe Photoshop/CS Suite & Adobe Lightroom
- Final Cut Pro / iMovie
- 3D Studio Max / Maya / Autodesk Suite / ZBrush / Sculptris / Pixologic / Cheetah 3D
- Unity Game Engine
- Sound editing software – GarageBand / Avid’s Pro Tools
- Journaling program – Corel / Pento / Evernote
- Hosting program – JIBE / web design

Your thoughts?… Read the rest

Polskie Detroit
PD244-2010-05-10

Polskie Detroit

Play Episode Listen Later May 10, 2010


Today I talk with Szymon Masiak about computers, special effects for films, the company CafeFX, Grzegorz Jonkajtys, computer markets, Robert Rodriguez, 3D Studio Max, the game Jack Orlando, and Apple Motion.

Video StudentGuy
#136 Something from Nothing

Video StudentGuy

Play Episode Listen Later Sep 10, 2009 28:06


Listening to this show and trying to reach from what I know to what the least informed person knows is very difficult. It's hard to record a session of me talking off the cuff, just using notes, because I end up using a lot of shorthand without explaining things. I need a glossary.

I don't have one, but I have included some brief explanations of some of the video codewords I've mentioned in passing in this show. If you follow the links you'll find more thorough descriptions. It can get pretty thick, but it really helps to know these things.

This episode begins with a recent revelation about my goals and future direction as a filmmaking professional. I've also included details I've gleaned from blogs, podcasts and presentations at a recent meeting of the Boston FCP User's Group.

This has been a busy summer of media events. I've attended Podcamp Boston 4, Podcasters Across Borders in Kingston, Ontario, the Boston Media Makers get-together (which meets in Jamaica Plain the first Sunday of every month) and the annual Avid Summer BBQ.

I wanted to attend Podcamp Montreal, which takes place in a couple weeks, but I think I need to stay at home. There is also a Podcamp New Hampshire, taking place in Portsmouth in November.

It was at the Boston Final Cut Pro User's Group meeting in August that I saw Philip Hodgetts present an overview of new features for FCP 7. I've also included information I've gleaned from Apple's FCS site and the Digital Production Buzz. You can also find video tutorials online that demonstrate the new features in all applications within the suite.

One of the warm-up presentations that I thought was particularly noteworthy was for Mocha for FCP from Imagineer Systems. Check out the link above for videos that explain image tracking and rotoscoping in a way that will quickly make sense. Here's my abbreviated version: imagine you're seeing a movie and there's a scene at a football stadium. 
There's a huge video screen that shows instant replays and short commercials. The people who made that movie didn't record the information on that screen when they shot the footage of the stadium; they inserted their own footage on the screen in the editing suite. Maybe an advertisement for a product that they're getting paid to place in the movie.

In a still image you can select the space inside the frame of the jumbotron, remove it, and insert whatever image you choose. In a moving image, if the camera is moving, the shape and position of the screen in the frame change in every frame.

You accomplish this difficult task by marking places in the frame that are always visible (bright white points, usually) and then making sure they remain in place as the camera pans across or zooms out. Now that you've got the location of the anchor points, you create a mask that fits inside the screen area of the jumbotron and then make sure that mask is linked to those anchor points that are being tracked. That is called rotoscoping. Then you drop in your video and make it look like it was always there.

For sure, that's a gross simplification, but I hope it gets the idea across.

The new version of the Final Cut Suite (no number, but it should be #3) includes new feature updates for all the products (except for DVD Studio Pro), but Philip was there to cover just Final Cut Pro 7.

Because I clumsily referred to ProRes 422 in passing, give me a moment to explain what an intermediate codec is and what the 4:2:2 color space and 4444 refer to.

A color space is the limited range of color that can be viewed from the entire spectrum of color. Humans can see a wide swath of color between ultraviolet and infrared (violet to red). Some insects and animals can see beyond that range. Mechanical devices, like monitors and cameras, capture and display color in a variety of color spaces, depending on the device. 
HSB (Hue, Saturation and Brightness) is one space; RGB (Red, Green and Blue) is another.

Video cameras generally use a color space called YCC, which is roughly RGB. The Y is the luma quality and the two Cs are the chroma, or color, qualities. Those are the three values in a camera that shoots 4:2:2. Our eyes are more sensitive to luma than chroma, so in a 4:2:2 color space there's twice as much luma, or light, as there is color. Web and DVD video use a 4:2:0 space, and the DV standard uses 4:1:1.

Think of the three areas of information captured by a 4:2:2 camera as distinct channels of light or color, like channels in Photoshop. There is a fourth channel of visual information which cameras won't capture because it is created in post-production: the alpha channel. Alpha channels, used in Photoshop, After Effects and Final Cut Pro, are an additional layer of information that can be used to remove areas of the frame so that something else can be seen through them. Or an alpha channel can act as a selection area around a moving object in the frame so an effect or filter can be applied to it.

Hold that thought for a moment and let me move on to codecs. Among other things, a codec is a software program that compresses a moving digital image. There are a variety of codecs that compress video as a camera records it and decompress it as a DVD player plays it. It's a lossy process, which means digital information is lost when it's compressed. The greater the compression, the more minutes of video can be squeezed into a gigabyte of storage space.

Still with me?

There are a lot of codecs out there, and the variety is necessary because of how you're using them. Camera codecs need to compress data a certain way to retain the most information that fits on the storage medium: tape, drive or solid-state card. Cameras are capturing video for one purpose only: to store it. 
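As a back-of-the-envelope check on those subsampling ratios, the J:a:b notation can be turned into bytes per pixel. This is only an illustrative sketch (plain Python, assuming 8 bits per sample; the function name is mine, not something from the show):

```python
# Bytes per pixel for common chroma subsampling schemes, assuming
# 8 bits per sample. In J:a:b notation, a 4-pixel-wide, 2-row block
# carries 8 luma (Y) samples, plus 'a' chroma pairs on the first row
# and 'b' chroma pairs on the second.
def bytes_per_pixel(a, b):
    luma_samples = 8              # one Y sample per pixel in the 4x2 block
    chroma_samples = 2 * (a + b)  # each pair is one Cb and one Cr sample
    return (luma_samples + chroma_samples) / 8.0  # 8 pixels per block

schemes = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0), "4:1:1": (1, 1)}
for name, (a, b) in schemes.items():
    print(name, bytes_per_pixel(a, b))  # 3.0, 2.0, 1.5, 1.5
```

So 4:2:2 comes to two-thirds of the 4:4:4 data rate, while 4:2:0 and 4:1:1 each cut it to half, which is why cameras and delivery formats lean on them so heavily.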
You aren't using the footage or cutting it into pieces, so the camera can squeeze it really tight.

Video in a codec designed for camera capture, particularly HDV, is not a pleasure to cut. It's doable, but it has problems that I'm not going to get into. If you're producing a feature-length movie or TV video, you want to work in a codec that will give you more freedom to edit. That's what an intermediate codec is. One codec for capture, another for playback (sometimes the same one), and one in between for the edit. ProRes is an intermediate codec. You capture the video from the camera as you normally do, then select the footage in FCP and convert it to a ProRes codec.

Hang in there, I'm coming to the end.

When you convert footage captured by a camera using a codec in the 4:1:1 or 4:2:0 color space to, say, ProRes 4:2:2, does that mean you're getting better quality out of the footage you shot? No! Footage captured in every DV or HDV camera is being compressed on the fly using whatever color space the camera uses. That compression, being lossy, discards anything that doesn't fit. So when you convert it to a high-resolution color space, it's got a bunch of space it isn't using. When you shake it, you can hear it rattle.

So why would you convert it to the 4:2:2 space, or for that matter, 4:4:4? One reason is that these other codecs have other characteristics that make it more efficient for your editing software to edit, render and export the video. More importantly though, and this IS the reason you would use ProRes 4444 (4:4:4:4), anything else you add (a still image, motion graphics from After Effects or Motion, or 3D animation from Maya or 3D Studio Max) will be added at its full, most likely higher, color and image resolution.

These additional elements, even something as simple as title text, have to be massaged by various filters and often moved in and out of other programs to make them feel as real as the video footage. 
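The blending described above boils down, per pixel and per channel, to the classic "over" operation: the alpha channel decides how much of the added element covers the footage underneath. A minimal sketch (plain Python, with values normalized to the 0-to-1 range; a real compositor runs this across whole frames, but the arithmetic is the same):

```python
# The "over" compositing operation: blend a foreground element onto
# background footage using the foreground's alpha channel. Values are
# normalized to 0..1; a real compositor applies this to every pixel of
# every channel, which is why keeping added elements at full 4:4:4:4
# resolution matters.
def over(fg, bg, alpha):
    return fg * alpha + bg * (1.0 - alpha)

print(over(0.8, 0.2, 1.0))  # fully opaque: the foreground wins -> 0.8
print(over(0.8, 0.2, 0.0))  # fully transparent: background shows -> 0.2
print(over(0.8, 0.2, 0.5))  # half alpha, e.g. a soft title edge -> 0.5
```

Every filter pass that "massages" an element is ultimately feeding arithmetic like this, so the more color and pixel depth the element carries, the cleaner the result.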
Having a space that allows you to work with the maximum amount of color resolution and pixel depth offers the kind of control the people with the big bucks are looking for. You and I are just lucky that we don't have to have big bucks to get into this party.

As hard as that was to read (and I congratulate you if you got this far), it was no picnic figuring out how to say it. And be careful: don't use this in your research paper. I'm not going to put links to all this stuff. I've put out the bare bones. If you need to know more, you can look it up for yourself. I hope it's been useful.

You can find pricing for educational software at Journey Ed and Academic Superstore. I've used them both and they're fast. Mike Jones at Digital Basin has a good review of the suite upgrade, including what's missing. And check out the Film School Drinking Game, which I found in the same article. It's an education in itself. Finally, if you're on the fence about getting the Snow Leopard update, Leo Laporte and his gang of usual suspects provide a definitive thrashing of the pros and cons in #156 of MacBreak Weekly.