Podcasts about WebGL

A JavaScript API for rendering hardware-accelerated 2D and 3D graphics in web browsers, based on OpenGL ES
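In practice, those bindings are an OpenGL ES 2.0-style API exposed on a `<canvas>` element, with shaders written in GLSL and handed to the context as strings. A minimal sketch of the bootstrapping (illustrative only; the GL calls are guarded because they require a browser):

```javascript
// Shaders are plain GLSL strings compiled by the WebGL context at runtime.
const vertexShaderSource = `
attribute vec2 a_position;
void main() {
  gl_Position = vec4(a_position, 0.0, 1.0);
}`;

const fragmentShaderSource = `
precision mediump float;
void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.5, 1.0); // solid pink
}`;

// Browser-only bootstrapping, guarded so the sketch loads anywhere:
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl'); // null if WebGL is unsupported
  if (gl) {
    const vs = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vs, vertexShaderSource);
    gl.compileShader(vs);
    // ...compile the fragment shader, link both into a program, then draw.
  }
}
```

The full compile-link-draw sequence this sketch elides is walked through on webglfundamentals.org, linked in one of the episodes below.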

  • 127 PODCASTS
  • 184 EPISODES
  • 49m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jun 3, 2025 LATEST

POPULARITY

(popularity by year: 2017–2024)


Best podcasts about WebGL

Latest podcast episodes about WebGL

Player: Engage
From Sound to Studio – Building Devhouse with Jim Welch

Jun 3, 2025 · 46:40


In this episode, Greg sits down with Jim Welch, a composer turned developer turned studio founder whose creative journey spans Dragon Ball Z trailers to launching a party game in the spirit of Jackbox. We explore Jim's path from audio to entrepreneurship, the founding of The Devhouse Agency, and why his team is now investing in original IP. From creative leadership to WebGL distribution strategy, this episode is packed with insights for anyone building in games, immersive tech, or creative services.

UfoStation podcast
UFO Station. Special Episode #2

Apr 28, 2025 · 79:05


Topic: Misha is writing a game engine
Guest: Mikhail Remmele, personal channel: https://t.me/misha_pishet_dvizhok

Episode contents:
00:00:39 Meet the guest
00:02:02 Why frontend wasn't enough
00:05:02 Why not use existing solutions
00:09:24 Tauri
00:11:24 itch.io
00:17:34 Game Jam
00:21:32 AI in game development
00:25:06 How his game engine is built
00:34:16 Moving from 2D to 3D games
00:41:22 The barrier to entry for 3D
00:45:56 Does it really matter how engines work under the hood
00:50:02 How to become popular in gamedev?
00:57:24 Is crunch the industry norm?
01:02:56 The difficulty of getting noticed
01:07:55 You can make money without ever shipping a game
01:12:10 Games we recommend

Episode resources:
— Dacha (https://github.com/michailRemmele/dacha), a data-driven game engine for web browsers
— Bevy engine (https://bevyengine.org/), a data-driven game engine written in Rust
— Tauri (https://v2.tauri.app/), a framework for building small, fast apps for all major platforms
— Everything you need to know about ECS (https://habr.com/ru/articles/665276/)
— webglfundamentals.org, the fundamentals for understanding WebGL and shaders
— www.realtimerendering.com, a resource with materials on rendering

Books:
— Game Engine Architecture (https://www.piter.com/collection/programmirovanie-igr/product/arhitektura-igrovogo-dvizhka-programmirovanie-i-vnutrennee-ustroystvo-tretie-izdanie)
— Blood, Sweat, and Pixels (https://www.bookvoed.ru/product/krov-pot-i-pikseli-obratnaya-storona-industrii-videoigr-2-e-izdanie-3568325)
— Press Reset (https://www.bookvoed.ru/product/nazhmi-reset-kak-igrovaya-industriya-rushit-karery-i-daet-vtoroy-shans-6015886)
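The data-driven design used by both engines linked above (Dacha and Bevy) is typically built on the Entity-Component-System (ECS) pattern: entities are plain ids, components are pure data, and systems are functions that iterate over every entity holding a given set of components. A minimal sketch in plain JavaScript (hypothetical names; this is not the Dacha or Bevy API):

```javascript
// Minimal ECS sketch: entities are ids, components are plain data,
// systems are functions over entities that have the required components.
class World {
  constructor() {
    this.nextId = 0;
    this.components = new Map(); // component name -> Map(entityId -> data)
  }
  createEntity() {
    return this.nextId++;
  }
  addComponent(entity, name, data) {
    if (!this.components.has(name)) this.components.set(name, new Map());
    this.components.get(name).set(entity, data);
  }
  // Yield [entity, ...data] for entities that have every listed component.
  *query(...names) {
    const stores = names.map((n) => this.components.get(n) || new Map());
    for (const entity of stores[0].keys()) {
      if (stores.every((s) => s.has(entity))) {
        yield [entity, ...stores.map((s) => s.get(entity))];
      }
    }
  }
}

// A movement "system": pure data in, updated data out.
function movementSystem(world, dt) {
  for (const [, pos, vel] of world.query('position', 'velocity')) {
    pos.x += vel.x * dt;
    pos.y += vel.y * dt;
  }
}

const world = new World();
const player = world.createEntity();
world.addComponent(player, 'position', { x: 0, y: 0 });
world.addComponent(player, 'velocity', { x: 10, y: 5 });
movementSystem(world, 0.5); // advance half a second
```

The point of the pattern, as discussed in the episode, is that behavior lives in systems rather than in object hierarchies, so adding a component to an entity is enough to opt it into new behavior.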

Lenny's Podcast: Product | Growth | Career
Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz)

Mar 13, 2025 · 88:51


Eric Simons is the founder and CEO of StackBlitz, the company behind Bolt, the #1 web-based AI coding agent and one of the fastest-growing products in history. After nearly shutting down, StackBlitz launched Bolt on Twitter and exploded from zero to $40 million ARR and 1 million monthly active users in about five months.

What you'll learn:
1. How Bolt reached nearly $40M ARR and 3 million registered users in just five months with a team of only 15 to 20 people
2. How Bolt leverages WebContainer technology, a browser-based operating system developed over seven years, to create a dramatically faster, more reliable AI coding experience than competitors
3. Why Anthropic's 3.5 Sonnet model was the critical breakthrough that made AI-generated code production-ready and unlocked the entire text-to-app market
4. Why PMs may be better positioned than engineers in the AI era
5. How AI will dramatically reshape company org charts
6. Eric's wild founder story (including squatting at AOL's HQ) and how scrappiness fueled his innovation

Brought to you by:
• Eppo—Run reliable, impactful experiments
• Fundrise Flagship Fund—Invest in $1.1 billion of real estate
• OneSchema—Import CSV data 10x faster

Find the transcript at: https://www.lennysnewsletter.com/p/inside-bolt-eric-simons

Where to find Eric Simons:
• X: https://x.com/ericsimons40
• LinkedIn: https://www.linkedin.com/in/eric-simons-a464a664/
• Email: Eric@stackblitz.com

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Eric Simons and StackBlitz
(04:46) Unprecedented growth and user adoption
(10:40) Demo: Building a Spotify clone with Bolt
(15:28) Expanding to native mobile apps with Expo
(19:09) The journey and technology behind WebContainer
(25:03) Lessons learned and future outlook
(29:15) Post-launch analysis
(34:15) Growing fast with a small team
(41:00) Prioritization at Bolt
(45:51) Tooling and PRDs
(48:42) Integration and use cases of Bolt
(52:24) Limitations of Bolt
(54:24) The role of PMs and developers in the AI era
(59:56) Skills for the future
(01:14:18) Upcoming features of Bolt
(01:20:17) How to get the most out of Bolt
(01:23:00) Eric's journey and final thoughts

Referenced:
• Bolt: https://bolt.new/
• Cursor: https://www.cursor.com/
• Wix: https://www.wix.com/
• Squarespace: https://www.squarespace.com/
• Dylan Field on LinkedIn: https://www.linkedin.com/in/dylanfield/
• Evan Wallace's website: https://madebyevan.com/
• WebGL: https://en.wikipedia.org/wiki/WebGL
• WebAssembly: https://webassembly.org/
• CloudNine: https://cloudnine.com/
• Canva: https://www.canva.com/
• StackBlitz: https://stackblitz.com/
• Lessons from 1,000+ YC startups: Resilience, tar pit ideas, pivoting, more | Dalton Caldwell (Y Combinator, Managing Director): https://www.lennysnewsletter.com/p/lessons-from-1000-yc-startups
• Y Combinator: https://www.ycombinator.com/
• Anthropic: https://www.anthropic.com/
• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/
• Linear: https://linear.app/
• Notion: https://www.notion.com/
• Salesforce: https://www.salesforce.com/
• Atlassian: https://www.atlassian.com/
• Photoshop: https://www.adobe.com/products/photoshop/
• Figma: https://www.figma.com/
• Greenfield projects: https://en.wikipedia.org/wiki/Greenfield_project
• Gartner: https://www.gartner.com/
• OpenAI researcher on why soft skills are the future of work | Karina Nguyen (Research at OpenAI, ex-Anthropic): https://www.lennysnewsletter.com/p/why-soft-skills-are-the-future-of-work-karina-nguyen
• Albert Pai on LinkedIn: https://www.linkedin.com/in/albertpai/
• Bolt's post on X about “Bolt Builders”: https://x.com/boltdotnew/status/1887546089294995943
• Sonnet: https://www.anthropic.com/claude/sonnet
• ChatGPT: https://chatgpt.com/
• Breaking the Rules: The Young Entrepreneur Who Squatted at AOL: https://www.inc.com/john-mcdermott/eric-simons-interview-young-entrepreneur-squatted-at-aol.html
• Imagine K12: http://www.imaginek12.com/
• Geoff Ralston on LinkedIn: https://www.linkedin.com/in/geoffralston/
• AOL: https://www.aol.com/
• Bolt on X: https://x.com/boltdotnew

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Webflail
Ep 112 | Ending a never ending e-commerce project (or not) | with Maria Karavá

Jan 7, 2025 · 61:31


I'm your host, Jack, your failure connoisseur, and today I'm thrilled to welcome the brilliant Webflow developer and mentor, Maria Karavá, to the Webflail Podcast. Maria is a powerhouse in the creative industry, known for her expertise in animation, WebGL, and immersive website experiences, all while championing accessibility and structured design. Based in Greece, Maria is a mentor at Flux Academy, guiding budding Webflow designers and developers to success. Her journey is filled with exploration, having worked across roles like 3D generalist, pre-press designer, and motionographer. Maria's versatility and passion for problem-solving shine through in her projects and mentorship style.

In this episode, we'll delve into Maria's failures, each a stepping stone to her thriving career. We'll discuss:
• Tackling her sister's never-ending e-commerce site.
• Realizing UI/UX wasn't her path after a challenging job stint.
• Applying to Flux Academy before she was ready, and how it prepared her for success later.

Get ready for an insightful conversation packed with wisdom, laughter, and a little bit of Greek flair. Maria, welcome to Webflail!

00:00 Introduction and Initial Thoughts on Work Stress
00:26 Welcome to the Webflail Podcast
01:29 Maria's Journey and Achievements
02:48 Discussing Failure and Resilience
06:44 Freelancing Challenges and Strategies
10:19 Maria's Career Path and Choices
13:49 Mental Health and Work-Life Balance
17:39 The Never-Ending E-Commerce Project
19:26 Working with Family: Pros and Cons
24:32 Final Thoughts on Client Relationships
29:26 Navigating Friendships in Freelancing
31:23 The UI/UX Designer Experience
34:55 Transitioning to Development
39:18 The Importance of Basics in Web Development
45:35 Applying for a Job at Flux Academy
55:07 Embracing Failure and Learning

WEBFLAIL FREEBIES
10 Step Process To Land Your First Webflow Clients: The Ultimate Guide: https://www.webflail.com/resources/10-step-process-to-land-your-first-webflow-clients-the-ultimate-guide

LINKS FOR Maria

The New Stack Podcast
MapLibre: How a Fork Became a Thriving Open Source Project

Dec 5, 2024 · 25:50


When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox's 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox's move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.

In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors.

Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.

Learn more from The New Stack about forking open source projects:
• Why Do Open Source Projects Fork?
• OpenSearch: How the Project Went From Fork to Foundation

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
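Mapbox GL JS 1.x and MapLibre GL both consume the same open style specification: a JSON document that declares sources (where vector tiles come from) and layers (how their features are painted). A minimal sketch of such a style; the tile URL here is a placeholder, not a real endpoint:

```javascript
// Minimal map style in the open Mapbox/MapLibre style-spec format.
const style = {
  version: 8, // current style-spec version
  sources: {
    openmaptiles: {
      type: 'vector',
      tiles: ['https://example.com/tiles/{z}/{x}/{y}.pbf'], // placeholder URL
      maxzoom: 14,
    },
  },
  layers: [
    // Layers are drawn in order: background first, then water on top.
    {
      id: 'background',
      type: 'background',
      paint: { 'background-color': '#e0dfdf' },
    },
    {
      id: 'water',
      type: 'fill',
      source: 'openmaptiles',
      'source-layer': 'water',
      paint: { 'fill-color': '#a0c8f0' },
    },
  ],
};

// In the browser this object is passed straight to the renderer, e.g.
//   new maplibregl.Map({ container: 'map', style });
```

Because the style is plain data rather than code, this is what makes the "dynamic customization" mentioned above possible: swapping colors, layers, or tile sources is a JSON edit, not an engine change.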

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023, and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Fine-tuning, Prompt Caching, and Model Distillation.

However, the larger venue and more spread-out schedule did allow a lot more hallway conversations with attendees, as well as more community presentations, including our recent guest Alistair Pullen of Cosine, and deeper dives from OpenAI, including our recent guest Michelle Pokrass of the API team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full, lightly edited Q&A with Sam Altman.

Show notes and related resources (some of these are used in the final audio episode below):
* Simon Willison Live Blog
* swyx live tweets and videos
* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session
* Fireside Chat Q&A with Sam Altman

Timestamps:
* [00:00:00] Intro by Suno.ai
* [00:01:23] NotebookLM Recap of DevDay
* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling
* [00:19:16] Olivier Godement, Head of Product, OpenAI
* [00:36:57] Romain Huet, Head of DX, OpenAI
* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison
* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)
* [01:18:31] Sam Altman + Kevin Weil Q&A
* [02:03:07] NotebookLM Recap of Podcast

Transcript

[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT-4o in flight. Fine tune the future, data in sight. Schema sync up, outputs precise. Distill the models, efficiency splice.
[00:00:33] AI Charlie: Happy October. This is your AI co-host, Charlie. One of our longest-standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there, stitched in with short samples of conversations with key players, just to help you feel like you were there.
[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral NotebookLM Deep Dive crew, my new AI podcast nemesis, give you a seven-minute recap of everything that was announced.
[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.

[00:01:23] NotebookLM Recap of DevDay

[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about OpenAI's Dev Day 2024.
[00:01:32] NotebookLM 2: Yeah, lots to dig into there.
[00:01:34] NotebookLM 2: Seems
[00:01:34] NotebookLM: like you're really interested in what's new with AI.
[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.
[00:01:43] NotebookLM: It is. And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.
[00:01:51] NotebookLM: Perfect. Like, for example, this Realtime API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.
[00:01:59] NotebookLM 2: It could be huge. The Realtime API could completely change how we, like, interact with AI.
Like, imagine if your voice assistant could actually handle it if you interrupted it.
[00:02:08] NotebookLM: Or, like, have an actual conversation.
[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.
[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.
[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.
[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.
[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?
[00:02:42] NotebookLM 2: So imagine giving the AI access to this whole toolbox, right?
[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.
[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, it can go and find the information it needs, like a human travel agent would.
[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.
[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, besides their tech, there's been some news about, like, internal changes, too.
[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?
[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?
[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?
[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.
[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.
[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.
[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.
[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.
[00:04:27] NotebookLM 2: Exactly. And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.
[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?
[00:04:41] NotebookLM 2: That's the idea.
[00:04:41] NotebookLM: And they're doing it with images now, too, right?
[00:04:44] NotebookLM: Fine tuning with vision is what they called it.
[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.
[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.
[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? And then it could potentially spot things that even a trained doctor might miss.
[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?
[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.
[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.
[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.
[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.
[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.
[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.
[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.
[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?
[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.
[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.
[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.
[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.
[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.
[00:06:10] NotebookLM: So they're making AI more accessible. That's great.
[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.
[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.
[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic.
Like, from what I read, it's not just a bigger, better language model.
[00:06:28] NotebookLM 2: Right. It's a different approach.
[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.
[00:06:33] NotebookLM 2: It's trained differently.
[00:06:34] NotebookLM 2: They used reinforcement learning with O1.
[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.
[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.
[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?
[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.
[00:06:56] NotebookLM 2: And coding, too. Complex coding. Things that even GPT 4 struggled with.
[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.
[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.
[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.
[00:07:21] NotebookLM 2: Exactly.
[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?
[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.
[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that? We
[00:07:41] NotebookLM 2: did. That's the thing where the AI's output is formatted in a way that's easy to use.
[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you. It's good that they're thinking about that stuff.
[00:07:53] NotebookLM 2: It's about making these tools usable.
[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, and Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.
[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?
[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.
[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?
[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.
[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?
[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just making AI better, constantly improving it, not worrying about putting it in a box.
[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.
[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.
[00:08:43] NotebookLM: Yeah, that's important.
[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.
[00:08:48] NotebookLM: Good! Because this stuff is powerful.
[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.
[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.
[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it.
At least, that's what we try to do here.
[00:09:05] AI Charlie: Absolutely.
[00:09:06] NotebookLM: Thanks for taking the deep dive with us.
[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Biggio.
[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.

[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling

[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. We'll get those strawberries delivered for you.
[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered
[00:10:14] swyx: strawberries.
[00:10:15] Ilan: Wait,
[00:10:16] swyx: how much
[00:10:16] Ilan: would that be? I think that'll be around, like, 1,415.92.
[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.
[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.
[00:10:42] Alex Volkov: Okay,
[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?
[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.
[00:11:09] swyx: Hi, Ilan, welcome to Latent Space. Oh, thank you.
I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was the building on demo like? What was the story behind the demo?
[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.
[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.
[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.
[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.
[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.
[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.
[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.
[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. What is in there?
[00:13:03] swyx: What should people know?
[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.
[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.
[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.
[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.
[00:13:54] swyx: I'm a developer, I'm about to do this on random people.
[00:13:57] swyx: What laws am I about to break?
[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. Definitely individuals are more sensitive than businesses.
[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.
[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.
[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.
[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?
[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.
[00:15:11] swyx: And then after every function call was also invoked invoking like a generation. And so that is another difference here. It's like it's inherently almost like in a loop, be just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.
[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step one, step two, do that. What if I want to change the system prompt halfway through the conversation?
[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played without two too much. Yeah,
[00:15:47] swyx: yeah.
[00:15:48] swyx: But, I know you can.
[00:15:49] swyx: Yeah, yeah. Yeah. Awesome.
I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that these are the features that currently exist, that we've demoed on ChatGPT. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty good, like, pretty reliable, like, a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well.
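The "command architecture" described above — speech in, no audio out, every turn forced into a function call — could be configured roughly like this. The `modalities` and `tool_choice` field names mirror the Realtime API session config as announced; confirm them against the current reference before relying on them.

```python
import json

# Sketch of speech-to-function UI control: suppress audio responses and
# require a function call on every turn, so the model reliably drives
# the interface instead of chatting back.
ui_control_session = {
    "type": "session.update",
    "session": {
        "modalities": ["text"],     # no audio responses
        "tool_choice": "required",  # every turn must emit a function call
        "tools": [
            {
                "type": "function",
                "name": "set_map_location",  # hypothetical app function
                "description": "Pan the on-screen map to a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
}

print(json.dumps(ui_control_session, indent=2))
```

The app then listens for function-call events on the socket and executes them locally — the model never speaks, it only issues commands.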
Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: Prompt caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, Anthropic's caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.[00:18:19] swyx: 5, and got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed.
I'm sure that was part of you and Roman, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire API team for like debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The Latent Space crew then talked to Olivier Godement, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godement. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London.
To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: O1, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the hole in 2022, when like, you know, like, people were trying to build a ChatGPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case can be enabled.
And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had prompt caching, you had the real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. They're all fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigations worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency?
We did a bunch of tests, like, on average, like, human beings, like, you know, take, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make: prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Stripe, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? Because when you do evals and fine tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that pit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily.
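The "zero code change" caching described above still depends on requests sharing an identical prefix, since caches key on the start of the prompt. The sketch below shows only that ordering discipline — static content first, per-request content last — with a hypothetical system prompt as a placeholder.

```python
# Prompt caching is automatic, but cache hits require requests to share a
# byte-identical prefix. Keep the long, unchanging parts (system prompt,
# tool definitions) first, and put volatile user content at the end.
STATIC_SYSTEM_PROMPT = (
    "You are a support agent for Acme Corp. "  # hypothetical, long, unchanging
    "Follow the refund policy strictly. Answer concisely."
)

def build_messages(user_question: str) -> list[dict]:
    """Build a message list whose cacheable prefix is identical across calls."""
    return [
        {"role": "system", "content": STATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

a = build_messages("How do I reset my password?")
b = build_messages("Where is my invoice?")

# Everything before the final user message is shared, so the prefix can
# be served from cache on the second request.
assert a[:-1] == b[:-1]
```

Anything that varies per request — timestamps, user IDs, retrieved documents — belongs after the stable prefix, or it silently defeats the cache.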
For vision, let's say I'm, like, trying to, one of your examples was Grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to two, like, text and images, will have, like, more, you know, testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right?
Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like, off the shelf, like GPT-4o essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have an[00:26:15] Olivier Godement: automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. But I think, like, that evaluation and distillation product are essentially a good first step in that direction. It's like, hey, if you're bought into that direction, you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples Like, your bill will get increased, like, you know, pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one.
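The "log your completion data" flow mentioned above is opt-in per request. A sketch of what that request body could look like is below; the `store` and `metadata` parameters follow the stored-completions feature announced alongside distillation, but verify the names against the current API reference.

```python
import json

# Opt in to storing this completion so it can later feed evals or be
# distilled into a smaller model. Nothing is stored unless you ask.
request_body = {
    "model": "gpt-4o",
    "store": True,                       # opt in: keep this completion
    "metadata": {"app": "support-bot"},  # hypothetical tag for filtering later
    "messages": [
        {"role": "system", "content": "Answer in one sentence."},
        {"role": "user", "content": "What is prompt caching?"},
    ],
}

# This dict would be POSTed to the chat completions endpoint.
print(json.dumps(request_body, indent=2))
```

Stored completions can then be filtered by metadata in the dashboard and turned into eval datasets or fine-tuning data — the automation loop Olivier describes.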
Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is sort of a two birds, one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLMs, building with LLMs, is like, Yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually eval, understand, like, all the corner cases, And you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust, which I think is one of your partners. Mm-Hmm. They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: Yours is kind of like a low code experience, right, for evals.
Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like, evals. But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, caching, or like, it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whenever I talk to developers, I feel like good models are just half of the story to build a good app. There's a ton more you need to do. Evaluation is the perfect example. Like, you know, you can have the best model in the world. If you're in the dark, like, you know, it's really hard to gain the confidence and so our philosophy is[00:30:11] Olivier Godement: The whole like software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that OpenAI can build everything. Like there is just too much to build, frankly.
And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, and evaluation. Because we think that it actually makes sense to have like in one spot, like, you know, all of that. Like, there is some sort of virtuous circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked to, well, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense.
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly voice first and make it feel, like, you know, an actual good, like, you know, product experience.
So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The model spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script, I want to say, like, vulgar words, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal or against our terms of service. But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the direction. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now is you, it is whatever you decide. And then it's, that's it.
That, that, that would be the main reason I don't use OpenAI voice is because of[00:34:42] Olivier Godement: it's over policed. Over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the OpenAI direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, Investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, Where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10, 000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so, is O1 smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback.
Essentially, what we would love developers to do, I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that Usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI, on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate appreciate you guys putting this on.
I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that, you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a GPT app to get the plan with O1, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for You know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah.
I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like ChatGPT side by side. But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up Xcode, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right?
And I think what's great about O1 is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is mini or preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If, for your kind of brainstorming or your kind of science part, you need some broader knowledge, then reaching for O1 Preview is better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice?
And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service? Like center, that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. 
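The client-side flow Romain describes (open a WebSocket, configure the session, stream function-call results back so the model can act on them) can be sketched roughly as below. This is a hedged sketch, not OpenAI's sample code: the event names (`session.update`, `conversation.item.create`, `response.create`) follow the beta Realtime API as announced and may change, and the helper function names and model string are our own illustrative assumptions.

```javascript
// Sketch of a Realtime API client over a raw WebSocket (beta event
// shapes; subject to change). Helper names here are hypothetical.

// Build the event that configures the session after connecting.
function sessionUpdateEvent(instructions, tools) {
  return { type: "session.update", session: { instructions, tools } };
}

// Build the event that returns a tool/function-call result to the model.
function functionCallOutputEvent(callId, result) {
  return {
    type: "conversation.item.create",
    item: {
      type: "function_call_output",
      call_id: callId,
      output: JSON.stringify(result),
    },
  };
}

// Guarded connection sketch. Browser WebSocket cannot set auth headers,
// so a server-side client (or an ephemeral-token scheme) is assumed;
// the auth wiring is deliberately omitted here.
function connect(onEvent) {
  const url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview";
  const ws = new WebSocket(url);
  ws.addEventListener("open", () => {
    ws.send(JSON.stringify(sessionUpdateEvent("You are a travel guide.", [])));
    ws.send(JSON.stringify({ type: "response.create" }));
  });
  ws.addEventListener("message", (msg) => onEvent(JSON.parse(msg.data)));
  return ws;
}
```

When a function-call event arrives, the client would run the tool (for example, placing the Twilio call mentioned in the demo), send `functionCallOutputEvent(...)`, and follow it with another `response.create` so the model can speak the result.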
But we think that for instance, if you want to have something very robust on your client side, maybe you want to have web RTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like LiveKit and Agora if you want to, if you want to use them. And I'm sure we'll have many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the ChatGPT app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and referenced clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LiveKit and Agora, so they also have their own, like ways to help you get started that plugs natively with the real time API. So depending on the use case, people can, can decide what to use. If you're working on something that's completely client or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API end point or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is in the spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching, the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback, from high-level feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us. To shape the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps that have you know, you, you want high, like like low latency, low cost, but high performance, high quality on the use case distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from 4o to 4o mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case, inspired for an email client. Yeah, this was really good. Cool man! Thanks so much for having me.
Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced. She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day returning guest Michelle and then returning guest co host. Fourth time?[00:48:03] Alessio: Yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using like GPT 4, I wrote me the Javascript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Claude and ChatGPT. Yeah.
It's interesting.[00:48:39] swyx: Yeah. I've converted to cursor and o1 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code co pilot, so Yep, same here. Team co pilot. Co pilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and may the best fork win.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hackathon for the API team.
And some folks built, like, a little hack that you could use to control Vim with voice mode, so, like, you would tell it, like, "write a file," and it would, you know, know all the Vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to. That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably

The React Native Show Podcast
How to Get Started With 3D in React | React Universe On Air: Coffee Talk #21

Play Episode Listen Later Sep 24, 2024 65:33


Why settle for flat graphics, when you can see the React Universe in 3D? In this Coffee Talk, Jakub invited three very special guests to dig deep into the topic of 3D rendering in React and React Native:

Hacker News Recap
August 5th, 2024 | Google loses antitrust suit over search deals on phones

Play Episode Listen Later Aug 10, 2024 13:07


This is a recap of the top 10 posts on Hacker News on August 5th, 2024. This podcast was generated by wondercraft.ai

(00:35): Google loses antitrust suit over search deals on phones
Original post: https://news.ycombinator.com/item?id=41164240&utm_source=wondercraft_ai

(01:33): Stop Killing Games
Original post: https://news.ycombinator.com/item?id=41159063&utm_source=wondercraft_ai

(02:44): Andy Warhol's lost Amiga art found
Original post: https://news.ycombinator.com/item?id=41162311&utm_source=wondercraft_ai

(04:00): A new type of neural network is more interpretable
Original post: https://news.ycombinator.com/item?id=41162676&utm_source=wondercraft_ai

(05:23): Apple Intelligence beta flagged a phishing email as "Priority"
Original post: https://news.ycombinator.com/item?id=41159680&utm_source=wondercraft_ai

(06:29): Can we stop the decline of monarch butterflies and other pollinators?
Original post: https://news.ycombinator.com/item?id=41165273&utm_source=wondercraft_ai

(07:40): A cryptographically secure bootloader for RISC-V in Rust
Original post: https://news.ycombinator.com/item?id=41161580&utm_source=wondercraft_ai

(08:59): WebGL visual programming editor cables.gl is now open source
Original post: https://news.ycombinator.com/item?id=41162036&utm_source=wondercraft_ai

(10:09): A 20-something dethroned dotcom CEO that went to work the counter at McDonald's (2000)
Original post: https://news.ycombinator.com/item?id=41161947&utm_source=wondercraft_ai

(11:16): Debugging a rustc segfault on Illumos
Original post: https://news.ycombinator.com/item?id=41164885&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Zalet
Maja Nedeljković Batić

Play Episode Listen Later Jul 8, 2024 97:26 Transcription Available


Welcome to the Zalet Podcast, a podcast about digital product design! We have a new guest! In this episode we hosted Maja Nedeljković Batić. Maja is an engineer, artist, and creative developer. She currently works at Linear. We talked with her about the intersection of art and frontend engineering, collaborating with designers, independent projects, and speaking at conferences.

Building the Open Metaverse
The Future of Shopping: 3D Experiences W/ Neha Singh of Obsess.vr

Play Episode Listen Later Jun 25, 2024 49:49


In this podcast episode, Neha Singh, Founder and CEO of Obsess.vr, delves into her journey and the innovative strides her company is making in the world of e-commerce. Singh, whose background uniquely blends computer science from MIT and fashion studies from FIT, has channeled her dual expertise into founding Obsess.vr. The platform transforms traditional online shopping into immersive 3D virtual store experiences, enhancing how consumers engage with brands.

Singh's career began with roles at Google and a luxury e-commerce startup, eventually leading her to Vogue, where she launched various digital products. This diverse experience provided her with insights into both the tech and fashion industries, laying the foundation for Obsess.vr. Founded in 2016, Obsess.vr partners with top-tier brands like Dior, L'Oreal, and Crate & Barrel, creating virtual shopping environments that replicate the discovery-driven shopping of physical stores.

Singh discusses the shortcomings of current e-commerce interfaces, which often reduce products to small thumbnails on a white background. This setup fails to convey the brand story and the experiential aspect of shopping. Obsess.vr addresses this by leveraging real-time 3D technology to create branded, interactive virtual stores. These environments are accessible via web links and integrate seamlessly with existing e-commerce platforms, making it easy for consumers to engage with them on any device.

The platform's use of WebGL and Three.js ensures fast load times and high-quality rendering, essential for keeping users engaged. Singh emphasizes the importance of creating both photorealistic and fantastical virtual environments, depending on the brand's vision. Obsess.vr also incorporates gamification elements, such as scavenger hunts and custom games, which significantly boost user engagement and conversion rates.

A key aspect of Obsess.vr's strategy is its focus on accessibility and integration. The platform integrates with major e-commerce systems like Salesforce and Shopify, allowing for seamless product data synchronization and enhanced customer experiences. Singh also highlights the importance of making these virtual experiences accessible through various channels, including social media, email, and even QR codes in physical stores.

Singh envisions a future where the internet is inherently 3D, with virtual shopping becoming a standard part of the online experience. Obsess.vr aims to be at the forefront of this transformation, providing brands with tools to create and manage immersive digital experiences. The company is also exploring the use of generative AI to streamline the creation of 3D models and environments, further reducing production time and costs.

The podcast also touches on innovative projects like the virtual store for K18 haircare, which educates users about hair structure through interactive 3D experiences, and the use of AI in extending historical ad campaigns into immersive environments for Elizabeth Arden. Singh's approach demonstrates how brands can use virtual spaces not just for selling products, but for creating memorable, educational experiences that build long-term customer relationships.

In conclusion, Neha Singh's Obsess.vr is pioneering the next evolution of e-commerce, blending cutting-edge technology with creative brand storytelling. By making virtual shopping engaging, accessible, and integrated, Obsess.vr is set to revolutionize how consumers interact with brands online, making 3D virtual stores an essential component of the future internet.

Have any comments or questions? Email the show Feedback@Buildingtheopenmetaverse.org

Want more information? Visit our website www.buildingtheopenmetaverse.org

And make sure you follow us on Linkedin for all of our show updates
https://www.linkedin.com/company/buildingtheopenmetaverse

Building the Open Metaverse is a podcast hosted by Patrick Cozzi (Cesium) and Marc Petit that invites a broad range of technical experts to share their insights on how the community is building the metaverse together.

#BuildingTheOpenMetaversePodcast #MetaversePodcast #Metaverse

Building the Open Metaverse
Revolutionizing Game Development: W/ Will Eastcott of PlayCanvas

Play Episode Listen Later Jun 11, 2024 38:48


In this episode of "Building the Open Metaverse," hosts Marc Petit and Patrick Cozzi welcome Will Eastcott, the co-founder and CEO of PlayCanvas, a pioneering open-source game engine. Eastcott's journey in the gaming industry began in the 1980s when he was captivated by the 3D space trading game "Elite," which fueled his passion for computers and video games. He pursued computing at Imperial College London and gained early industry experience at a VR company, working with advanced Silicon Graphics workstations. His career took off at Criterion Software, where he contributed to developing RenderWare and worked on notable games like Grand Theft Auto and Call of Duty.

Eastcott's transition to web-based game development was sparked by the release of the WebGL specification in 2010, which he saw as a significant opportunity for interactive graphics. He founded PlayCanvas in 2011, focusing on creating a web-native game engine that leveraged WebGL. Despite initial challenges, such as limited WebGL support on iOS, PlayCanvas flourished, becoming open-source in 2014. This move fostered a global community of developers and solidified PlayCanvas's role in democratizing game development.

The conversation delves into the strategic acquisition of PlayCanvas by Snap Inc. in 2017, which allowed Eastcott and his team to work on Snap Games, a platform serving millions of users. Eastcott shares insights into the unique aspects of PlayCanvas, including its lightweight runtime, collaborative browser-based platform, and commitment to open standards like glTF and WebXR. He emphasizes the importance of WebGPU in achieving significant performance improvements and explores the potential of AI and machine learning in revolutionizing content creation.

Eastcott highlights the development of Super Splat, a tool for optimizing 3D Gaussian splat scenes, demonstrating how AI can streamline the creation of photorealistic content without extensive coding. He also discusses the future of web gaming, pointing out the need for improved payment systems, discovery mechanisms, and better support from browser vendors to enhance the web gaming experience.

The episode concludes with Eastcott offering advice to aspiring game developers, encouraging them to leverage the vast audience and creative freedom provided by the web. He also gives a shout-out to Ken Russell and the National Center for Computing History in Cambridge, UK, acknowledging their contributions to the industry. Eastcott's journey and insights provide a compelling narrative on the evolution of web-based game development and the transformative potential of emerging technologies.

Have any comments or questions? Email the show Feedback@Buildingtheopenmetaverse.org

Want more information? Visit our website www.buildingtheopenmetaverse.org

And make sure you follow us on Linkedin for all of our show updates
https://www.linkedin.com/company/buildingtheopenmetaverse

Building the Open Metaverse is a podcast hosted by Patrick Cozzi (Cesium) and Marc Petit that invites a broad range of technical experts to share their insights on how the community is building the metaverse together.

#BuildingTheOpenMetaversePodcast #MetaversePodcast #Metaverse

One More Scoop
One More Scoop with CJ Looi

Play Episode Listen Later Jun 9, 2024 77:16


CJ Looi is the co-founder and CEO of PixCap. There, PixCap's mission is to make 3D accessible to everyone, starting with graphic design. 3D opens up many possibilities for designers, including mockups, realistic visuals and animations. Yet, the complexity of accessing 3D technology remains a challenge. PixCap overcomes this by offering a vast library of 3D templates on the browser through WebGL, enabling brands and designers to customize mockups, icons and 3D animations at scale.

Topics:
[00:00:00] - Personal Growth and Family Dynamics
[00:04:55] - Strict Upbringing and Video Game Influence
[00:11:42] - Study Abroad and Startup Challenges
[00:18:52] - Inspired Journey to Entrepreneurship and Tech
[00:24:09] - Career Transition From Banking to Startups
[00:31:19] - Transitioning to Entrepreneurship Through Internships
[00:38:38] - Journey Through Tech Careers
[00:48:35] - AI Motion Capture Technology Pivot
[00:53:41] - Evolution of 3D Design Software
[01:03:55] - Evolution of PixCap's 3D Tools
[01:08:58] - The Challenges and Fulfillment of Startups
[01:14:58] - The Three C's

Like the show? Subscribe to the BackScoop newsletter to stay updated with the latest news in Southeast Asian startups in minutes: backscoop.com

Visit BackScoop's social media pages and show your support!
BackScoop (Linkedin): https://www.linkedin.com/company/backscoop/
BackScoop (Twitter): https://twitter.com/BackScoopHQ
BackScoop (Facebook): https://www.facebook.com/BackScoop
BackScoop (Instagram): https://www.instagram.com/backscoop

Visit Amanda Cua's social media pages:
Amanda (Linkedin): https://ph.linkedin.com/in/amanda-cua
Amanda (Twitter): https://twitter.com/itsAmandaCua

Visit Cj Looi's social media pages:
Cj (Linkedin): https://www.linkedin.com/in/cj-looi/
PixCap (Website): https://pixcap.com/index.html

Hosted on Acast. See acast.com/privacy for more information.

Hacker News Recap
January 16th, 2024 | OpenAI drops ban on military tools to partner with The Pentagon

Play Episode Listen Later Jan 17, 2024 22:41


This is a recap of the top 10 posts on Hacker News on January 16th, 2024. This podcast was generated by wondercraft.ai

(00:34): US developers can offer non-app store purchasing, Apple still collect commission
Original post: https://news.ycombinator.com/item?id=39020365&utm_source=wondercraft_ai

(02:52): 6174
Original post: https://news.ycombinator.com/item?id=39018769&utm_source=wondercraft_ai

(05:29): TinyML: Ultra-low power machine learning
Original post: https://news.ycombinator.com/item?id=39014866&utm_source=wondercraft_ai

(07:40): OpenAI drops ban on military tools to partner with The Pentagon
Original post: https://news.ycombinator.com/item?id=39020778&utm_source=wondercraft_ai

(09:48): Fluorite lenses: Corrective capabilities beyond ordinary optical glass
Original post: https://news.ycombinator.com/item?id=39020258&utm_source=wondercraft_ai

(12:08): Post-mortem for last week's incident at Kagi
Original post: https://news.ycombinator.com/item?id=39019119&utm_source=wondercraft_ai

(13:58): Stable Code 3B: Coding on the Edge
Original post: https://news.ycombinator.com/item?id=39019532&utm_source=wondercraft_ai

(15:53): Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search [pdf]
Original post: https://news.ycombinator.com/item?id=39013497&utm_source=wondercraft_ai

(17:57): OutRun – Open-source, privacy oriented, outdoor fitness tracker
Original post: https://news.ycombinator.com/item?id=39014652&utm_source=wondercraft_ai

(20:22): Web AI Model Testing: WebGPU, WebGL, and Headless Chrome
Original post: https://news.ycombinator.com/item?id=39017607&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

JS Party
From WebGL to WebGPU

Play Episode Listen Later Dec 7, 2023 58:53


Gregg Tavares (author of WebGL/WebGPU Fundamentals) joins Jerod & Amal to give us a tour of these low-level technologies that are pushing the web forward into the world of video games, machine learning & other exciting rich applications.

Changelog Master Feed
From WebGL to WebGPU (JS Party #304)

Play Episode Listen Later Dec 7, 2023 58:53


Gregg Tavares (author of WebGL/WebGPU Fundamentals) joins Jerod & Amal to give us a tour of these low-level technologies that are pushing the web forward into the world of video games, machine learning & other exciting rich applications.

Building the Open Metaverse
Web Standards for the Win W/ Ken Russell & Corentin Wallez

Play Episode Listen Later Nov 7, 2023 36:52


In this episode of the Building the Open Metaverse podcast, Ken Russell and Corentin Wallez from the Google Chrome graphics team discuss using web browsers and technologies like WebGPU, WebGL, and WebAssembly to build an open and accessible metaverse.

They explain how new browser capabilities like WebGPU's compute shaders and multi-threading support can enable complex 3D experiences on par with console and mobile games. Russell and Wallez examine performance considerations like streaming assets and reducing security overhead. An open question is supporting multi-user experiences across origins while maintaining security.

The guests are optimistic that an open metaverse can be built using web principles like transparency and permissionless innovation. They see opportunities for blending languages like Rust, C++, JavaScript, and TypeScript. A key benefit of web tech is portability across devices. Russell and Wallez encourage industry collaboration on ethical guidelines and standards.

Have any comments or questions? Email the show Feedback@Buildingtheopenmetaverse.org

Want more information? Visit our website www.buildingtheopenmetaverse.org

And make sure you follow us on Linkedin for all of our show updates
https://www.linkedin.com/company/buildingtheopenmetaverse/

Building the Open Metaverse is a podcast hosted by Patrick Cozzi (Cesium) and Marc Petit (Epic Games) that invites a broad range of technical experts to share their insights on how the community is building the metaverse together.

#BuildingTheOpenMetaversePodcast #MetaversePodcast #Metaverse
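The compute-shader capability Russell and Wallez highlight can be illustrated with a small sketch: a WGSL kernel that doubles every float in a storage buffer, plus the dispatch-size arithmetic. This is a hypothetical example, not code from the episode; the API calls follow the W3C WebGPU specification, and the GPU path only runs where `navigator.gpu` exists (for example, recent Chrome).

```javascript
// Sketch: a WGSL compute shader doubling a buffer of f32 values.
// Illustrative example per the WebGPU spec; names are our own.

const WORKGROUP_SIZE = 64;

const shaderCode = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    if (id.x < arrayLength(&data)) {
      data[id.x] = data[id.x] * 2.0;
    }
  }
`;

// One workgroup covers WORKGROUP_SIZE invocations; round up so every
// element is touched (the shader bounds-checks the overshoot).
function workgroupCount(elements, groupSize = WORKGROUP_SIZE) {
  return Math.ceil(elements / groupSize);
}

// Guarded: returns null where WebGPU is unavailable.
async function runDouble(input) {
  if (typeof navigator === "undefined" || !navigator.gpu) return null;
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const module = device.createShaderModule({ code: shaderCode });
  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(input);
  buffer.unmap();
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(workgroupCount(input.length));
  pass.end();
  device.queue.submit([encoder.finish()]);
  return buffer; // read-back via a mapped staging buffer omitted
}
```

The same kernel structure (bind a storage buffer, bounds-check against `arrayLength`, dispatch ceil(n / workgroup size) groups) generalizes to the heavier workloads discussed in the episode.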

The top AI news from the past week, every ThursdAI

ThursdAI October 26th. Timestamps and full transcript for your convenience:

[00:00:00] Intro and brief updates
[00:02:00] Interview with Bo Weng, author of Jina Embeddings V2
[00:33:40] Hugging Face open sourcing a fast Text Embeddings
[00:36:52] Data Provenance Initiative at dataprovenance.org
[00:39:27] LocalLLama effort to compare 39 open source LLMs +
[00:53:13] Gradio Interview with Abubakar, Xenova, Yuichiro
[00:56:13] Gradio effects on the open source LLM ecosystem
[01:02:23] Gradio local URL via Gradio Proxy
[01:07:10] Local inference on device with Gradio-Lite
[01:14:02] Transformers.js integration with Gradio-Lite
[01:28:00] Recap and bye bye

Hey everyone, welcome to ThursdAI, this is Alex Volkov, I'm very happy to bring you another weekly installment of

Android Developers Backstage
Episode 200: WebGPU

Play Episode Listen Later Aug 15, 2023 50:26


In this episode, Chet and Romain speak with Ken Russell and Corentin Wallez from the WebGPU team. WebGPU is a new API that brings modern GPU rendering and compute functionality to web and other platforms (including Android!). We talk about the genesis and capabilities of WebGPU, WGSL (WebGPU's new shading language), the state of WebGL (the predecessor API for web GPU rendering), and lots of other fun related graphics topics.

Ken, Romain, and Chet (not pictured: Corentin, who is on the monitor behind the photographer)

Links:
Samples (and its github repo)
Google I/O Codelab
Google I/O presentation
Introducing WebGPU (and associated blog post)
Series of articles teaching WebGPU and WGSL
Series of articles of WebGPU Best Practices
Draft specs for WebGPU and WGSL
Dawn from Google/Chromium
wgpu from Firefox

Romain: @romainguy, romainguy@threads, romainguy@androiddev.social
Tor: tor.norbye@threads and tornorbye@androiddev.social
Chet: @chethaase, chet.haase@threads, and chethaase@androiddev.social
Ken: @gfxprogrammer
Corentin: @DaKangz and @DaKangz@mastodon.gamedev.place

Catch more from ADB → https://goo.gle/adb-podcast
Subscribe to Android Developers YouTube → https://goo.gle/AndroidDevs

Netcetera by Myosin.xyz
Live from Myosin Gather "Metaverse & Immersive Media" w/ Everyrealm, Dollhouse, and Mona

Play Episode Listen Later Jun 8, 2023 48:48


Welcome to the official recap of "Myosin Gathers #1: The State of Metaverse & Immersive Media." If you missed out on this groundbreaking webinar, don't worry. We've got you covered with all the highlights and key takeaways.

In 2023, the metaverse has become more than just a buzzword, it's a rapidly evolving reality. This webinar delved into the current state of the metaverse and explored how brands should approach it as a vital component of their digital and marketing strategies. But what exactly is the metaverse and how does immersive media fit into the picture?

Our esteemed panel of experts from Mona, EveryRealm, and Decentraland gathered in the Mona Space, a metaverse of its own, to provide deep insights and thought-provoking discussions. Together, they explored the potential of the metaverse, its impact on mainstream users, and its significance in the realm of web3. This webinar was a valuable opportunity for brands, mainstream users, and web3 enthusiasts to gain valuable insights from industry leaders and shape their understanding of the metaverse. The discussions were both thought-provoking and inspiring, opening up new possibilities for future collaborations and ventures.

Time stamps:
00:02:27 The metaverse is immersive media.
00:06:40 Metaverse adoption faces behavior hurdles.
00:10:44 The future of brands in metaverse.
00:16:18 Brands should enhance mixed-reality experiences.
00:21:45 Web3 and decentralization are future.
00:27:12 Community building is essential.
00:31:21 Metaverse community and hardware evolution.
00:36:21 Using computer's GPU for WebGL experiences.
00:41:28 Interoperability key to metaverse.
00:46:30 Metaverse is better with friends.

Caro quetscht aus
Caro quetscht aus unterwegs | Folge 1– Software und Games meets Darstellende Künste

Caro quetscht aus

Play Episode Listen Later Jun 6, 2023 59:39


On location at the event "Game Developing meets Performing Arts". With this opening of the third season, we are changing the concept: "Caro quetscht aus" leaves the recording studio, goes ON THE ROAD, and lets you listen in on the networking meetups of Dresden's cultural and creative industries. Look forward to short talks by practitioners from a range of sub-sectors, as well as short interviews with professionals from the audience. In episode 1 of the third season, we meet professionals from software and games development as well as theater, dance, performance, and media art at LEVEL 44, Dresden's first eSports and gaming bar. In two impulse talks and four short interviews, we dive into virtually animated spaces and hear how they can be used for experimentation as well as for earning money: How are 3D configurators and virtual showrooms used in communication agencies? How do the application areas and goals of developing with WebGL and Unity differ across the creative industries? What playful, experimental possibilities do sensor-based VR environments offer? Look forward to SEVERIN TARANKO of queo xr and his impulse talk on "The potential of virtual showrooms for marketing & sales", as well as KATHARINA GROß of the collective neue raeume and her impulse talk on "Collective collaboration and artistic programming". This event takes place as part of the CROSS INNOVATION MADE IN SAXONY project and is co-financed with tax funds on the basis of the budget approved by the Saxon state parliament.

Continuous Delivery
Game programming nell'era di JavaScript e di WebGPU

Continuous Delivery

Play Episode Listen Later May 10, 2023 64:56


In this episode we explore the differences between developing a video game and developing a web app. We focus, for example, on how video games are distributed as native binaries while web apps are deployed, and how that affects their architecture and performance. We also discuss the most useful tools for game development, including authoring tools and "no-code" and "low-code" solutions. We then dig into the technical aspects crucial to game development, such as update loops, frame rate, and resource management. We also take a quick look at WebGPU and WebGL, and at how these technologies are changing, or could change, how video games are built and distributed. With: Edoardo Dusi, Paolo Mainardi and Paolo Pustorino /* Newsletter & Telegram */ https://landing.sparkfabrik.com/continuous-delivery-newsletter https://t.me/continuous_delivery /* Links */ https://godotengine.org/ https://unity.com/ https://gamemaker.io/en https://illiteratecodegames.itch.io/powers-in-the-basement

Science Faction Podcast
Episode 449: Pigeons, Podcasts, Putting

Science Faction Podcast

Play Episode Listen Later Apr 19, 2023 79:16


This episode contains: Welcome to the Ben Lawless podcast. Lawless: The Ben Story. Y'all seen the movie Lawless? Ben hasn't yet. Was it good? Devon's not here tonight, because of... dinner? With his wife? We still would love a tour of the Mighty Coconut offices. They're doing great work. Steven pitches a series of Walkabout Minigolf courses to Mighty Coconut. Wouldn't you love to play minigolf through the backlots for scifi movies? We pitch minigolf courses based on alien invasions and Buck Rogers. Did you know that Mighty Coconut made Pigeon: Impossible? Which Spies in Disguise is based on? What if there was a minigolf course based on Pigeon: Impossible? Steven is pivoting from podcasts to putting, and pigeons. Ben needs a vacation from his vacation, amirite? Ben's son wanted to watch Speed Racer 2. Ben's heart broke to tell him the truth. Emile Hirsch wants to do another Speed Racer movie! Get the Wachowski's onboard. http://screenrant.com/emile-hirsch-speed-racer-return-hopes-response/ Ben is SO STOKED about the Apple TV+ Speed Racer show helmed by J.J. Abrams. Let's make the Speed Racer Extended Universe a thing. Today in the Weird Wide Web: Chrome will support WebGPU! What the hell does that mean? WebGL is cool and all, but WebGPU is going to revolutionize graphics on the web. Remember how Apple told Adobe "No Thank You" about putting Flash on iPhone? Ben does. Steven wonders why the name Mozilla sounds so familiar... Hint: he uses Firefox. Sometime in May, WebGPU is coming to Chrome! Ben is so excited. Ben is telling you: Arc is the best web browser ever. Steven and Ben argue whether Bing is a browser. Ben has worked in web development for the last 14 years. Feels like 40. Ben called it years ago: Microsoft finally stopped making their own browser engine. https://arstechnica.com/gadgets/2023/04/chrome-113-will-enable-webgpu-a-modern-low-overhead-graphics-api-for-the-web/  Symphony of Scent: Making sense of scents: Deciphering our sense of smell. 
Remember the five senses? They're cool, right? What is smell? When do things start and stop smelling? Steven wonders. Scientists have created the first 3D picture of how an odor molecule activates an odorant receptor. "We need to see it so we can science it!" - Steven. Smells are somehow like hitting keys on a piano to produce a chord. Devon could explain it. https://www.sciencedaily.com/releases/2023/03/230315132416.htm  Science Fiction: The Big Door Prize on Apple TV+ is really good. Is it scifi? Ben thinks so. If you were in The Big Door Prize, would you want to hear your "life potential?" Getting big Tales from the Loop vibes from The Big Door Prize. Are there similarities between The Big Door Prize and Machine of Death?  Steven really enjoyed the ending of Shrinking on Apple TV+. The Bad Batch's third season will be its final. We finally talk about Star Wars Celebration this episode. THREE NEW STAR WARS MOVIES?!?!?! WHAT?!?!?! The Filoni film is going to wrap up the Mandoverse? Hate that term btw. So... Grogu trained with Luke for 2 years?!  We are digging the trailer for Star Wars Visions Season 2. The claymation looks incredible. Ben finally watched the first Hotel Transylvania. Cool flick! We spoil Star Trek: Picard's 3x09 Vōx because you have to spoil it to talk about it. Now we know why Ro Laren didn't use the transporters!   Pre-pod Patreon-only: Ben monologued about sleep for 49 minutes. 

Linux Action News
Linux Action News 288

Linux Action News

Play Episode Listen Later Apr 13, 2023 14:24


A classic gadget gets a Linux-powered new lease on life, the next project getting Rusty, great news for Btrfs users, and more.

IFTTD - If This Then Dev
#209 - Création et créativité dans le web - Anthony Lopez

IFTTD - If This Then Dev

Play Episode Listen Later Apr 12, 2023 65:15


"It's the desire to master your craft at the highest level." This week's D.E.V. is Anthony Lopez, of Antho parle web. Anthony joins us to talk about the job of creative developer. He looks at how a growing number of developers are turning to creative technology and generative art in their work. He also mentions technologies like WebGL and SVG, and how the artistic or craft side can blend with code. Links mentioned during the show: Episode 107 IFTTD - Christophe Galati, Demoscene française: Flopine, ThreeJS Journey (by Bruno Simon), Matt DesLaurier's online course, Newsletter Quoi de neuf les devs, Merci-Michel, Immersive Garden, MakeMePulse, Bruno SIMON, Hervé Studio, Utsubo, Matt DesLaurier, Makio64, Florian Zumbrunn, Le voyage azarien **Let's continue the discussion** @ifthisthendev, @bibear, @antho_parle_web, LinkedIn, Anthony Lopez's LinkedIn, Discord ** More dev content ** Find all our episodes on our site. We are also on Instagram, TikTok, Youtube, Twitch ** If This Then Dev Job Board ** If you feel like changing jobs, check out the If This Then Dev job board! And if you want to recruit people who listen to IFTTD, it goes without saying that the IFTTD job board is the place to be! This job board is made with My Little Team! ** The IFTTD Shop!!! ** Show your appreciation for this podcast with lovingly made goodies from the shop, or clearly declare your allegiance to team tab or team space. ** Join the next recording! ** Join us every Monday at 19:00 (and not only then) to watch the episode being recorded live and ask your questions during the episode :) We are live on Youtube, Twitch, LinkedIn and Twitter

Offset Podcast
#6 Bruno Simon - Qu'est-ce que le développement créatif ?

Offset Podcast

Play Episode Listen Later Apr 5, 2023 55:18


Bruno Simon is one of the most talented French creative developers of the moment. Among other things, he won the award for best website in the world for his 3D portfolio, in which you can explore his universe with a little car. In this episode, we take a detailed look at what creative development is, go over Bruno's background, and discuss the course he created to teach Three.JS, a tool for putting 3D on a website. Finally, we wrap up with a big advice section for those who want to go freelance. Timeline: 1:15: What creative development is 5:29: The kinds of projects he works on 6:42: His education 11:32: His professional background 12:07: How he discovered WebGL 15:43: How he trained himself 24:10: How he juggles all his projects 27:11: How he keeps up with the field 37:08: The importance of not overloading yourself as a freelancer 41:48: His freelance rates 44:49: The importance of building a network 45:05: Can web experiences replace video games? To discover and follow Bruno: Twitter: https://twitter.com/bruno_simon https://bruno-simon.com/ Thanks for listening to this sixth episode! Don't hesitate to leave us a good rating on your favorite listening platform; it would help us a lot. You can also follow us on our social networks (@offset_records) so you don't miss any episodes. This podcast is presented by Enzo Cailleton: https://enzocailleton.com @enzocailleton and is produced by Studio WAWWW, a creative web agency. To learn more: https://wawww.studio @studiowawww Thanks again for listening. Ciao

Kodsnack
Kodsnack 512 - Enrich the graphics, with Denis Radin

Kodsnack

Play Episode Listen Later Feb 14, 2023 40:19


Recorded at the Øredev 2022 developer conference, Fredrik chats with Denis Radin about React, Webgpu, standards development, coding standards, and a lot more. We start way back, with early React development - while React was still in beta, on amazingly bad hardware. A project where focus was actually on optimization and education instead of throwing hardware at solving the performance problem. We discuss AI art generation a bit, and how it affects our world. Denis then gets into how Webgpu is different from Webgl, mostly a lot better for a lot more use cases. What's holding back really cool graphical things in the browser now? Getting paid! Denis tells us about the development of the Webgpu standard, a unique standard which filled a gap major players all wanted filling. What if we applied NASA coding guidelines to Javascript? Denis did it to show that Javascript can be taken as seriously as C or other low-level languages, if we just want to. Do we web developers have more to internalize when it comes to pride in craftsmanship? But examples are out there if we just know to look for them. What does Denis think of React's evolution? Finally, fullstack frameworks are coming and exciting. They are a revolution for Denis' side projects already! Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links Øredev Denis Denis helps organize React conferences in Amsterdam Denis' presentation at Øredev 2022 Denis' blog post on WebGPU Thick clients Webgpu Webgl Canvas Opengl Metal Directx Vulkan NASA coding standards (for C) Denis' talk about applying the NASA coding standards High-performance Javascript Angular Solid.js Alpine.js Svelte React native React-three-fiber - React renderer for three.js Next.js Blitz.js Ruby on rails Titles Amazingly shitty hardware The performance and scalability wasn't there Let's use this pipeline Enrich the graphics How do you monetize? A standard that fills a gap Javascript developer: no Change the perception This is engineering Innovate by simplicity A fullstack developer with a couple of commands

Kodsnack in English
Kodsnack 512 - Enrich the graphics, with Denis Radin

Kodsnack in English

Play Episode Listen Later Feb 14, 2023 40:18


Recorded at the Øredev 2022 developer conference, Fredrik chats with Denis Radin about React, Webgpu, standards development, coding standards, and a lot more. We start way back, with early React development - while React was still in beta, on amazingly bad hardware. A project where focus was actually on optimization and education instead of throwing hardware at solving the performance problem. We discuss AI art generation a bit, and how it affects our world. Denis then gets into how Webgpu is different from Webgl, mostly a lot better for a lot more use cases. What's holding back really cool graphical things in the browser now? Getting paid! Denis tells us about the development of the Webgpu standard, a unique standard which filled a gap major players all wanted filling. What if we applied NASA coding guidelines to Javascript? Denis did it to show that Javascript can be taken as seriously as C or other low-level languages, if we just want to. Do we web developers have more to internalize when it comes to pride in craftsmanship? But examples are out there if we just know to look for them. What does Denis think of React's evolution? Finally, fullstack frameworks are coming and exciting. They are a revolution for Denis' side projects already! Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links Øredev Denis Denis helps organize React conferences in Amsterdam Denis' presentation at Øredev 2022 Denis' blog post on WebGPU Thick clients Webgpu Webgl Canvas Opengl Metal Directx Vulkan NASA coding standards (for C) Denis' talk about applying the NASA coding standards High-performance Javascript Angular Solid.js Alpine.js Svelte React native React-three-fiber - React renderer for three.js Next.js Blitz.js Ruby on rails Titles Amazingly shitty hardware The performance and scalability wasn’t there Let’s use this pipeline Enrich the graphics How do you monetize? A standard that fills a gap Javascript developer: no Change the perception This is engineering Innovate by simplicity A fullstack developer with a couple of commands

Whiskey Web and Whatnot
Advent of Whiskey: State of JS, ChatGPT, and Browser APIs

Whiskey Web and Whatnot

Play Episode Listen Later Dec 17, 2022 29:39


The State of JS is one of the best ways to get your voice heard and learn about the Javascript ecosystem. The React developers are likely in the majority of survey respondents, but every developer within the Javascript community should consider taking it to get an accurate representation. One of the topics covered in the State of JS survey is browser APIs like WebGL, Web Animations, and Service Workers. Chuck and Robbie test their knowledge to see how much they know about them, and Robbie recounts his terrible experience with service workers in Ember. In this episode, Chuck and Robbie talk about the importance of the State of JS survey, the features of different browser APIs, and the viral ChatGPT chatbot from OpenAI. Key Takeaways [00:41] - Number Six Whiskey: Mars Shinshu Iwai 45% Japanese Blended Whisky. [06:12] - Number Seven Whiskey: J. Rieger & Co. Blended 46%. [10:35] - Number Eight Whiskey: Two Stacks The Blenders Cut Cask Strength [17:00] - The features of the new browser APIs. [20:55] - The popularity of ChatGPT on Twitter. Quotes [20:07] - “Well, it's good that you realized that Ember is dead, and you've moved on to other technologies.” ~ Chuck Carpenter [20:43] - “Cache is just hard. That has been said in software for longer than we've been around or at least the internet.” ~ Chuck Carpenter [20:55] - “The only way that I've ever used WebSockets was to replace polling for your API.” ~ Robbie Wagner Links Flaviar's "Depths of Whiskey" Advent Calendar Mars Shinshu Iwai 45 Japanese Blended Whisky J. Rieger & Co. Kansas City Whiskey Two Stacks The Blenders Cut Cask Strength Whiskey Jameson Irish Whiskey Sharpie Allbirds Guinness  Svelte  Vue State of JS Google Chrome Safari Ember Nuxt OpenAI Twitter WebGL Connect with our hosts Robbie Wagner Chuck Carpenter Ship Shape Subscribe and stay in touch Apple Podcasts Spotify Google Podcasts Whiskey Web and Whatnot Top-Tier, Full-Stack Software Consultants This show is brought to you by Ship Shape. 
Ship Shape's software consultants solve complex software and app development problems with top-tier coding expertise, superior service, and speed. In a sea of choices, our senior-level development crew rises above the rest by delivering the best solutions for fintech, cybersecurity, and other fast-growing industries. Check us out at shipshape.io.

Into the Metaverse
S2E34: Into The Metaverse with Neil Trevett - VP Developer Ecosystems at NVIDIA & Chair of Metaverse Standards Forum

Into the Metaverse

Play Episode Listen Later Nov 30, 2022 40:18


Neil Trevett is the Vice President of Developer Ecosystems at NVIDIA, President of the Khronos Group, and Chair of the Metaverse Standards Forum. He joined me in a conversation in which we cover in detail why standards are vital for the metaverse to emerge and what could happen in their absence. https://www.intometamedia.com/ At NVIDIA, Neil is responsible for enabling and encouraging advanced applications to use GPU acceleration. At the Khronos Group, Neil created and chaired the working group that defined the industry standard for 3D graphics on mobile devices. At Khronos he also chairs a working group for portable, parallel heterogeneous computing, and helped initiate the WebGL standard that is bringing interactive 3D graphics to the Web. He's now working to help formulate standards for vision and neural network inferencing. Previously, as Vice President of 3Dlabs, Neil was at the forefront of the silicon revolution bringing interactive 3D to the PC, and he established the embedded graphics division of 3Dlabs to bring advanced visual processing to a wide range of non-PC platforms. Neil was elected President of the Web3D Consortium, dedicated to creating open standards for communicating real-time 3D on the Internet, for eight consecutive years.

Svelte Radio
3D, WebGL and AI

Svelte Radio

Play Episode Listen Later Nov 24, 2022 48:04


Sponsor: Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration. Founded by the creators of Next.js, Vercel has zero-configuration support for 35+ frontend frameworks, including SvelteKit. We enable the world's largest brands, like Under Armour, eBay, and Nintendo, to iterate faster and create quality software. Try out Vercel today to experience the easiest way to use Svelte. Description: We talk Threlte and AI... a lot of AI! And Svelte of course. Music: Intro music by Braden Wiggins a.k.a. Fractal (braden@fractal-hq.com) Notes: Threlte Discussion Website Svelte Sirens stream Reddit Thread Remotion Remotion but in Svelte SvelteCMS Sveltosis What's new in Svelte: November Optional Params use:enhance - new update function AppWrite Console is written in Svelte Unpopular Opinions: Swyx: All search is AI, all If statements are AI. Picks: Brittney: DevRel Social Programming Stream Repo Nov 30th Stream on Form Actions Swyx: Hey Github and Codeium Beta Kevin: Stacklands

The Zeitgeist
Jonas Hahn - Solana Summer Camp Hackathon Winner, Ep 12

The Zeitgeist

Play Episode Listen Later Nov 8, 2022 16:04


Jonas Hahn, an established game developer who won the Phantom Track in the Solana Summer Camp Hackathon, joins Brian Friel to talk about web3 gaming and his work combining Unity and Solana with his project SolPlay. Show Notes: 00:48 What is SolPlay? 02:16 What are Deep Links? 03:41 What first attracted you to Solana? 05:13 What is the state of gaming on Solana? 06:17 Exciting things in crypto 07:56 Tooling or developer infrastructure to make a breakthrough game 10:04 WebGL for building 11:08 Future projects in the space 12:12 Is mobile the future of crypto gaming? 13:08 Advice to devs new to Web3 14:25 A builder he admires. Links: Unity-Solana SDK that provides a single interface for interacting with Phantom deeplinks and extension from a Unity environment. Example game built with Phantom, Unity, and WebGL. Video tutorials for building Unity games with Phantom. Full Transcript: Brian (00:05):Hey everyone and welcome to the Zeitgeist, the show where we highlight the founders, developers and designers who are pushing the Web3 space forward. I'm Brian Friel, developer relations at Phantom, and I'm super excited to introduce our guest today, Jonas Hahn. Jonas is an established game developer who has won the most recent Phantom Track in the Solana Summer Camp Hackathon for his work combining Unity and Solana with his project SolPlay. Jonas, welcome to the show.Jonas (00:33):Hello. Super nice. Thank you for having me.Brian (00:35):We're super excited to chat with you today. We love the work that you've been doing, pioneering game development between Unity and Solana. I'd love to start off, what is SolPlay? What have you been working on these past few months?Jonas (00:48):Yeah, SolPlay is just a name I came up with when I started working on Solana games. Because everything needs a name and I heard everything in the ecosystem should be called something with Sol, so I called it SolPlay. 
And yeah, what's behind it is I started working on games with Solana. I started when I joined the Hacker House in Prague. And I already had a little app on my phone and I had a wallet, and first I wanted to make a Solana Pay where you can send Solana to other people. So I used the Solan Art Wallet and implemented that, but then Solana Pay came out and I kind of stopped this and went back to games.(01:26):And then in Prague, I met a few other people and we started building a game, it's called World of Qro, and for that we built a connection where we communicate with JavaScript directly from Unity. So we sent messages to the React app, and then at some point I wanted to have it on mobile as well, and that workflow didn't work. And then I was investigating how it could be done, and then I wanted to build a deep link connection to a wallet and was trying to build my own wallet. And then Phantom finally came out with their Deeplinks and I was super excited about it and immediately went on it, and then I even noticed that there was a prize for it, so it was even more exciting.Brian (02:03):Yeah, yeah it's good timing, right? Then for those who don't know, can you walk people through maybe high level, what are Deeplinks and how has this enabled you to build a connection? Really one of the first Unity games that's on Solana.Jonas (02:16):The good thing about Deeplinks is like usually you have a browser extension or something, but of course on mobile you don't have that. But on mobile, different apps can communicate between each other using application links. And what the Deeplink does, it creates a secure connection using the 25-1-9 thing.Brian (02:36):Yeah, yeah. X25519 key pair. Yeah, that's not as important. But yeah.Jonas (02:41):So it has a secure connection and then if you want to do a transaction in the game, for example, send Solana somewhere or mint an NFT, you just create the transaction in the app and then you send it over to Phantom. 
Phantom signs it for you, and then you get the signature back. And it's, of course, a big benefit that people who play the game don't need to write down the key phrase or something, but they can immediately use the wallet that they already have. So I think no one can be bothered writing down these words, and I think it's very accessible, a good feature.Brian (03:11):I couldn't agree more. We're super excited about its potential to break crypto out of this pure browser environment. We know with extensions now you can be doing this in mobile applications, you can be doing these gaming applications. You mentioned that you got started with this originally at Hacker Houses, you visited Prague, you kind of got into the Solana ecosystem. What was it that first attracted you to Solana? You're a game developer yourself, you spent a lot of time in Unity. What is it that first caught your eye about Solana in particular?Jonas (03:42):So the thing is, I tried making crypto games 10 years ago already. But it just wasn't possible, I had a game where people could play against each other in a tower defense game. And then I tried something with PayPal and something with Bitcoin, but nothing worked. And then two years ago I started trying a game called Township from Gala Games, and they were using Ethereum and I got a little fountain, like an NFT. That was my first interaction with these NFTs. And then I tried to sell it later, because it was worth 0.5 Eth. But then I noticed I had to mint it from this side chain, and then it was only Ethereum and I paid $60 to get it off there. And then I had to pay another $60 to list it on OpenSea, and all that stuff was very tedious.(04:24):And then I was looking at other blockchains and at some point I noticed that there's Solana, where transactions don't cost anything and a transaction doesn't take minutes or seconds, but is almost immediate, at least if you just wait for it to be confirmed. 
And then for a game, it just is necessary that transactions go fast, otherwise it just doesn't make sense. You can't wait a minute for a transaction and that's why Solana is the obvious choice. It doesn't cost anything and is super fast. Short answer.Brian (04:54):Yeah, I couldn't agree more. I think there's a really natural pairing there. If you could summarize for developers who maybe are familiar with building Unity games but maybe aren't as familiar with Solana, what is kind of the state of gaming in your opinion on Solana? Both maybe from the games that are out there, but also as a developer, you know what tools you have at your disposal right now to start building?Jonas (05:14):It's super interesting because everything is still super early. There are a few games which raised a lot of money like last year or two years ago. And they are building their own things, but there's a lot of games coming up from indie developers and small teams and everyone is trying to build their own things. And that's why I also want to try to make it easier for people. I started making some YouTube videos where I explain Solana stuff and I started now working on the Unity SDK and also the Phantom Deeplinks. They are also now in the Unity SDK from the people from Garbles who build it and lots of stuff is coming, many little games and I am in contact with some and trying to support them.Brian (05:57):That's awesome. What would you say is the big selling point for game developers who are interested in Web3 in general. SolPlay maybe we can take that as an example. You're able to use your NFTs in the game, you're able to save your high scores to a blockchain, but what is it that excites you the most about combining both the gaming world and the crypto world?Jonas (06:17):Yeah, I've been in the gaming world for a long time already. 
It's a very competitive market and I'm super excited about the whole blockchain thing because now finally you can build a game again which isn't reliant on some service that is online. You have your own backend, you don't need the backend anymore. I mean, you still use Solana as a backend. You don't need a payment provider anymore because all you need to do is you just make a transaction which sends you some SOL or the token or whatever you want.(06:45):And what's also super exciting, also for games companies I think, is that they can save the 30% fees in the stores. On web that's one thing, but of course everyone wants to be on mobile because everyone uses mobile phones, and these 30% fees are very important in the very competitive mobile games market which we currently have. And what I'm also excited about is that it will probably spawn whole new kinds of games. I think the big game hasn't been spawned yet, the big crypto game, but it will probably be there somewhere and will probably be something new that we haven't seen before.Brian (07:17):Yeah, I agree. It's such an interesting paradigm, having this open backend that other people can plug into and you have tokens that potentially could be mutable and your interactions on the chain also maybe impact what your experience in the game is. I couldn't agree more that I think we don't quite know what the end state of this is, but I like those two examples you gave, especially the one around the 30% payment tax that most developers in crypto know all too well, especially if you're building mobile games. You mentioned that the space is early. We're still waiting for that kind of real breakthrough game. In your opinion, is there any tooling or developer infrastructure that needs to happen before we can have that breakthrough game?Jonas (07:57):It would be good if most stuff is open source. Solana is very good at that already. 
Many things are open source, but what's missing is targeting the traditional game developers. For example, go to the A MAZE conference in Berlin, which has crazy indie games, or go to the Unite conference from Unity and try to get the people there. Because many developers I talk to, they still think it's a scam. It doesn't really make sense. It's super expensive, it's slow, and there are no standards yet, where I'm not sure if it's good or bad. It kind of is this open stage still where you can do whatever you want. And also it's nice that there's not this big monopoly of a company who organizes everything. Maybe you have to use Google, and now for Solana you can use Phantom software, all wallets, you can use Fractal, which has a nice API, and very many opportunities. And a Saga phone of course, if this happens, that will be also amazing, if you have a crypto phone that you can use.Brian (08:57):Yeah, I don't think it's a matter of if, I think it's a matter of when. And we'll be releasing this episode shortly before Breakpoint. I know that Phantom and then also Solana have some great news to share around the progress around the Saga phone. I couldn't agree more that that will just continue to break down the barriers and make this a lot easier.Jonas (09:14):One more thing about that topic is that there needs to be some clarity about regulation somehow. For example, if I mint an NFT, I still have no idea how I pay taxes on that.Brian (09:24):Right?Jonas (09:25):Or if someone sends me a USDC token, what is it? Is it an income? What is staking? What do I do with the tokens I get? And if that is all cleared out at some point, which will eventually happen in the next one or two years, then more big companies will also start.Brian (09:40):Yeah, the fact that this is a global phenomenon doesn't make this any easier, too. I'm in the United States as of this recording, you're in Germany. 
Interactions between the two of us definitely don't make it any easier, and I would not want to be a crypto accountant having to think through all of these things right now. So all this is really awesome. If you haven't checked out SolPlay, I would definitely recommend checking it out. It's a great game. If I'm not mistaken, you've also used WebGL in some aspects of this as well, is that right?

Jonas (10:05): Yeah, it's cross platform. So I'm building an example game. Open source example game, which also has these deeplinks in it, and now it also has a token swap using the Orca tools, and Metaplex for minting NFTs. And yeah, it's mostly an example game, but it also has this slipping mechanic where you can jump around and collect points, and at some point I want to put in that you can give out token rewards, so make it play to earn, and then have some staking in it as well. So I basically want to make a suite where everything is in that you need to build a game.

Brian (10:37): And then as part of this too, you mentioned your work. You've done great open source work around the Unity SDK, integrating Phantom deeplinks in particular. You also have a great YouTube channel where you go in depth, some really long tutorials, on actually bringing up your IDE and showing, this is how I built the game, walking through all the steps of that. Taking all that into consideration: you've built these games, you have this SDK tooling, and then you've also gone on the developer education front. What are you most excited about, looking forward, in spending your next couple months in the Solana gaming space?

Jonas (11:09): I want to build the perfect example game, basically. Where everything is in and which is actually also fun to play. Put it on all platforms. It's so difficult to get into the iOS store. You wouldn't believe it, I get like six, seven points every time I submit. And they definitely view minting NFTs as in-app purchases, I think.
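The Phantom deeplink integration Jonas mentions comes down to opening a universal link that carries the dApp's connection parameters. A minimal sketch in Python; the `connect` endpoint and parameter names (`app_url`, `dapp_encryption_public_key`, `redirect_link`, `cluster`) follow Phantom's public deeplink documentation as I understand it, but treat the exact fields as assumptions to verify against the current docs:

```python
from urllib.parse import urlencode

PHANTOM_BASE = "https://phantom.app/ul/v1"

def build_connect_link(app_url: str, dapp_pubkey_b58: str,
                       redirect_link: str, cluster: str = "mainnet-beta") -> str:
    """Build a Phantom 'connect' deeplink that a mobile game can open."""
    params = {
        "app_url": app_url,                             # shown to the user in Phantom
        "dapp_encryption_public_key": dapp_pubkey_b58,  # key for the encrypted session
        "redirect_link": redirect_link,                 # where Phantom sends the user back
        "cluster": cluster,
    }
    return f"{PHANTOM_BASE}/connect?{urlencode(params)}"

# Hypothetical values for illustration; a Unity game would register its own URL scheme.
link = build_connect_link("https://solplay.de", "ExampleBase58Key",
                          "unitydl://onPhantomConnected")
print(link)
```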
So that's one of the things I want to build next. And then just to get this game on all platforms, that's the biggest challenge and goal for the next months. It's already in the Play Store, but on iOS it's definitely harder.

Brian (11:43): Even us at Phantom, building a crypto wallet, we've run into kind of unexplainable headaches with the App Store review process sometimes. It's definitely a very opaque process. Definitely resonate with that. You've spoken a lot about cross platform and your game being a platform as well. Is there any one platform in particular you're most excited about for Web3 gaming looking ahead? You mentioned everyone has a mobile phone, that in particular seems pretty compelling, but do you think mobile will be the first explosive growth phase of crypto gaming, or what are your opinions on that?

Jonas (12:13): Yeah, I think it will probably be mobile, and I think you can't ignore iOS, just too many people have iOS. That's why I'm so amazed that everything works for Phantom on all platforms the same. But there could be... Unity can theoretically export to all platforms. So theoretically we could make a Switch game which facilitates Solana somehow. I don't think there's a Phantom wallet for the Switch yet, but theoretically it could be on all platforms. That would be really nice.

Brian (12:38): I would love that. Yeah, Phantom on the Switch I think is going to be the next catchphrase that catches fire on Twitter here. I love that.

Jonas (12:45): Exactly.

Brian (12:46): This is awesome. Jonas, if you were to talk to a Web3 developer today who's listening to this, they're interested in this, they have some game experience, but they're maybe not sure how best to get involved with the Solana gaming space. What would you recommend as the first step for a new dev coming into this?
To learn the ropes of Solana game development and to get involved with potentially a project or some direction and connecting with other devs in the space?

Jonas (13:11): There are quite a few good YouTube channels to learn the basics of Solana, which you probably should learn first. The channel of Solandy and Josh's DevBox, and my YouTube channel of course as well. Although I recently heard that my tutorials are still a bit too detailed and complicated, but I'm trying to get better on that, definitely. And then, yeah, I would check out some of the games that are already there. There are a bunch of games on Fractal, for example, that you can try out, and then you can already experiment a little bit with wallets. And then I would check out the Solana Unity SDK or my example game and just try it out, see how it feels, deploy it on Android, try to mint an NFT, do some transactions from it in Unity. And I think then people will already get hooked and want to do more of it, I hope.

Brian (13:57): Yeah, I love it. We'll drop some links to all those that you just referenced in the show notes for folks as well. Jonas, this has been awesome. We loved hearing your story of how you got involved with Solana, and we couldn't agree more that the future of gaming's really exciting on Solana. Phantom wants to be a big part of that, enabling developers to build cross-platform and making it just easy for folks. One closing question we always ask all of our guests, and I'd love to hear your take on this as well: who is a builder that you admire in the Solana ecosystem?

Jonas (14:27): Oh, there are actually quite a few. I ran into a few of them at the Stockholm Hacker House, especially Jonathan Singh, Steven Laver, Steven Luscher, and they're just all amazing. Everyone actually who really builds on Solana, the team, they're super amazing. I mean, not everyone in the NFT communities is always super amazing, but the people who really work on it are super nice.
And then of course Anatoly, who I met in Miami, and he's just such a nice, humble, genius person. It was really nice meeting him.

Brian (14:59): Yeah, I agree. I think he really set the tone for the culture of the developer community here on Solana. You mentioned some other Solana Labs employees as well, Steven Luscher, Steven Laver, and those guys, if you're not familiar with them, they're basically savants across all the open source initiatives. They're leading it all. They're doing the commits, they're doing the code reviews, the PRs, they're setting the strategic direction, they're speaking on stage. I don't know where Solana finds all these guys, but it's a really cool community to be part of.

Jonas (15:26): It's crazy. It's so good what they're building with the mobile wallet now, it's so nice.

Brian (15:32): I agree. Awesome. Well, Jonas, thank you so much. Congratulations again on winning the Phantom Deeplinks track for the Solana Summer Camp Hackathon. We're really excited to see where you go from here and for the future of gaming on Solana.

Jonas (15:44): Thank you so much.

Mark Pesce - The Next Billion Seconds
A Brief History of the Metaverse: VIRTUALLY EVERYONE

Mark Pesce - The Next Billion Seconds

Play Episode Listen Later Oct 19, 2022 47:05


In the 2010s, VR roared back into life with Oculus, Vive, WebGL and Hololens. Minecraft, Roblox and Fortnite gave millions a deep drink of the potential of the Metaverse. Facebook - renamed Meta - bet the house on the Metaverse. Where is the Metaverse headed? We hear insights from forty years of experts. For more information about this and all our other 'The Next Billion Seconds' content, please check out https://nextbillionseconds.com This podcast is sponsored by the Digital Skills Organisation. The DSO is championing an employer-led, skills-based approach to digital literacy. Our offering is designed to support future-proofing the country, growing jobs, supporting our economic growth and ensuring that Australia remains a global leader in digital. If we are to equip our workforce with the skills to meet a rapidly changing, technological future, we need a new approach. We're working in collaboration with employers, trainers and employees. Their involvement is vital. We believe it's a better way to create consistent journey pathways and build relevant digital skills. We define the problem this way - digital skills training must: create value both internally and externally, improve customer experience, and build operational capabilities. To deliver on these objectives we need to strengthen Australia's digital workforce. It's that simple. DSO - Digitally Upskilling Australia. To find out more, visit the DSO website: https://digitalskillsorg.com.au "Mark Pesce - The Next Billion Seconds" is produced by Ampel - visit https://ampel.com.au to find out what Ampel could do for you! If you are interested in sponsoring The Next Billion Seconds podcast, reach out to our Director of Media and Partnerships Lauren Deighton at lauren@ampel.com.au If you enjoyed this show, please leave a rating and/or review on Apple, Spotify or any other podcast platform. See omnystudio.com/listener for privacy information.

The Zeitgeist
Michael Wagner - Star Atlas founder and CEO, Ep 9

The Zeitgeist

Play Episode Listen Later Sep 8, 2022 30:21


Brian Friel sits down with Star Atlas founder and CEO Michael Wagner to learn about what it's like building web3's first grand strategy MMO on Solana.

Show Notes:
0:58 - What is Star Atlas?
03:44 - Background / Founders of SA
07:01 - Going for a AAA game
09:32 - Phantom integration in Star Atlas
10:20 - Recent events, achievements and releases
15:08 - The DAO
19:27 - Interacting with the SA ecosystem
23:44 - Star Atlas, AAA gaming and Web 3.0
28:09 - Builders he admires

Full Transcript:

Brian (00:05): Hey everyone. Welcome back to The Zeitgeist, the show where we highlight the founders, developers, and designers who are pushing the web 3.0 space forward. I'm Brian Friel, Developer Relations at Phantom, and I'm super excited to introduce my guest, Michael Wagner. Michael is the founder and CEO of Star Atlas, one of the most ambitious games being built on Solana and in web 3.0 in general. Michael, welcome to the show.

Michael (00:29): Hey, Brian. Fantastic to be here and really looking forward to our conversation.

Brian (00:33): I'm really excited as well. You guys are building something that a lot of people on Solana are really excited about. Anatoly, our first guest, actually quoted you guys as saying that he thinks you guys are "the most ambitious project being built on Solana maybe just outside of Serum." But for maybe listeners who are unfamiliar with Star Atlas, could you give us a quick breakdown: what, in your own words, is Star Atlas, and how does it maybe differ from other crypto-based games that are out there?

Michael (00:57): Yeah. Just on that interpretation of being the most ambitious project, I think that, again, is very subject to interpretation. It is a large vision. It's a very grand vision with a long roadmap ahead of us. Some people, I think, might be concerned about the grandiosity of the underlying vision itself.
But we have a lot of confidence in what we're doing and the progress we've made and features and products we've launched and the success we've had to date. So, with that being said, let me just start with the flagship component of what we're building here at Automata, the studio behind Star Atlas, which is a grand strategy, space exploration, massively multiplayer online game.

Michael (01:32): And we're building that in Unreal Engine 5. We've been building in UE 5 since early access and of course, building on the Solana blockchain. And we're one of the really pioneering gaming groups and projects to adopt Solana as our layer one protocol, going all the way back to 2020.

Michael (01:47): Now, within this grand strategy MMO, you'll be navigating the stars, you'll be flying your ships. Which, by the way, all of the assets in the game are non-fungible tokens, or NFTs. So, flying your ships and operating your land and outfitting your ships with crew members and components and modules, leveling up, progressing, getting new attributes and enhancements, and moving your way through skill trees. So, anybody who's a fan out there of MMOs will be familiar with these gameplay mechanics. But I do want to clarify that, although the Unreal Engine product is our flagship, it is our North Star, it's where we're focusing the majority of our energy, we actually develop across really five different segments within our business. And so, Unreal Engine, or the fully immersive AAA game, is one of those. We're also building out gameplay modes in browser as well.

Michael (02:32):
But we also build infrastructure tools, architecture, primitives, that are essential to developing and refining the ecosystem and really encouraging innovation and development around all that we're building at Star Atlas.Michael (02:59):We have a marketplace that we've custom-created from scratch, built from the ground up. And then we also have a mobile gaming division as well. So, obviously a lot there; encompasses a lot. As I said, very big vision. But we're also very surgical in how we approach this and tactical in how we tackle these challenges that are in front of us.Brian (03:15):Yeah. No, I'm super excited about that. And you mentioned a call to arms for MMO-based players out there. I used to be a big wild player back in the day. I totally think this is a really awesome vision. Letting these end users level up in a way that they owning the NFTs that they have and having this be very user-centric, I think,Brian (03:31):Is a really revolutionary model. Before we dive into all of that, could you give us a little background on maybe who you are and who the rest of the founders were, and how did you all decide, "Now is the time that we want to build this game on Solana?"Michael (03:44):Sure. I'll be brief here. I will say my background is in traditional finance. Previously worked in portfolio management. I hold this chartered financial analyst designation, so really expertise in all things investment management and investment analysis decision-making. But the more exciting part of my life started back in 2013, when I got introduced to Bitcoin and cryptocurrencies and really SCRYPT coin mining with GPUs.Michael (04:07):Now, I grew up as a big computer nerd and part of a computer gaming group. Built all my PCs as I was growing up. And we would spend our weekends playing video games and drinking Mountain Dew. And those were our parties. So, one of my buddies from back in that era introduced this idea of building out SCRYPT coin rigs in milk crates. 
And so, it didn't take me long to essentially sink every penny I had into buying GPUs and just putting these rigs together and getting involved in the blockchain ecosystem. Terrible timing for me. It was immediately preceding the collapse of Mt. Gox in 2013, the largest fiat on-ramp and exchange at the time.

Michael (04:43): And we went through, one of many that I've experienced now, crypto winters and bear cycles. But my interest was certainly captivated. And I just spent the next couple of years learning as much as I could, and by 2015 had made the decision to go full-time crypto. 2016 I launched my first company in the space, which was at the center of the legal cannabis industry in crypto, providing financial solutions there. And it was really through that company, and through a series of acquisitions, that I onboarded two individuals who went on to co-found Star Atlas with me. That's Danny Floyd, who's our chief product officer, and Jacob Floyd, who's our chief technology officer. And also, along that path, I got connected with Pablo Queiroga, coincidentally, serendipitously, you might say, in Las Vegas, actually in a coffee shop. Just overheard him having a conversation about one of his ventures that he was working on.
And by the end of that year, Pablo was fully recruited onto the co-founding team as well. And we had our formal debut in January of 2021. And the team is now about 240 people strong.Brian (06:18):Wow. That is seriously impressive. Like you said, January, 2021, I think you guys were one of the earliest, serious projects that planted your flag on Solana. I'm curious, shifting over to the product now, in addition to being earliest Solana, I think you guys, like you said, you're also early to bringing a AAA game into the blockchain context. In particular, I think you're one of the first major games that is building in Unity Engine 5 and trying to combine that with a crypto world that is normally very web-based. Like you said, you guys are doing some stuff in Canvas and WebGL as well. Can you talk a little bit about that decision to go AAA, to go to the top, Unity Engine 5, and how that implications are for your project?Michael (07:00):Well, I think one of the valid criticisms that exist in, really from the traditional gaming community, but even within this crypto ecosphere, if you will, is that the gameplay mechanics that were present back at that time and even persist through to this day is that they're relatively simple mechanics: a lot of turn-based games, a lot of card-style games. But more importantly, they emphasized the financial components of the gameplay. The idea of play-to-earn, the emergence of play-to-earn. Emphasizing the ability to earn, instead of emphasizing what we feel is most important, which is creating and entertaining an engaging gameplay environment. And so, making a fun game.Michael (07:39):And for us, we're extremely passionate about what we're doing. But a large part of that is we're building the game that we want to play as well. This is essentially living the dream, being able to create the product that we will one day be participating in as well. 
And we're just extremely excited about it.

Michael (07:56): So, the choice at the time was really between Unity and Unreal Engine. Videos were coming out around UE 5, which was not yet released when we started conceptualizing all of this. But there's a couple of pieces of fundamental technology that are coming with UE 5, like Nanite, which is the way that assets are rendered, polygons are rendered, in the game environment, and what is going to be possible in terms of visual fidelity. And so, that was really important, as well as Lumen, which is another technology that's new to UE 5, which is the way that light essentially reflects and reacts within the game environment as well.

Michael (08:27): And there's a couple of additional pieces there. But what we saw was the potential to create an immersive digital reality, a cinematic-quality reality. UE 5 is the next generation of gaming, with or without blockchain. Now, what we are doing, adding the fundamentals of blockchain for true asset ownership, for a digital asset native economy, and for on-chain mechanics that can initiate ecosystem-wide contributions and development to the gaming product itself; we think that is absolutely incredible and will lead to an explosion of growth over time, beyond just the game product itself. But we just saw UE as the best game engine to build within, especially with the opportunities that were present at the time and into the immediate future.

Brian (09:08): Yeah. Well, I love that philosophy of building a game that you would want to play and then choosing the right tool for the job there. From the Phantom side, we've definitely seen that interest as well. Just from the metrics that we see of who's using Phantom, what they're connecting to, throughout the last year, Star Atlas is always one of the top projects that people are connecting to.
And they're wanting to view their ships, they're super excited about this, and I think you guys have a ton of great momentum into building with Unreal here.

Michael (09:31): Big shout out to you guys, if I may just take a moment. Because Phantom came onto the scene. It wasn't the first wallet, but it was, and I think continues to be, the best UI web wallet for the Solana ecosystem. It was already in demand from our users. It was a pretty simple decision for us to integrate. And you guys see the data on your side. We see the data on our analytics. And I think something like 95% of the user base of a couple hundred thousand people across the Star Atlas ecosystem are using Phantom as a wallet. So, big credit to you guys for what you've been able to accomplish there.

Brian (10:01): Oh, thanks. Let's keep it going. Let's get gaming on Solana for... I think there's a lot of folks who are really excited about this. I guess, following on that thread, it's been a really busy summer for you guys, across a lot of different aspects. Can you tell us a little bit about some of the events that you guys have had back in June? Specifically, I've seen a lot of noise around 426; maybe what that is, for some folks who are unfamiliar with that?

Michael (10:20): Sure. Yeah. I guess just to preface it all, I will mention that we've built a lot, as I've said. We have a lot to continue to build. And we have a community that has been cultivated that is very passionate and enthusiastic, but that also we need to cater to. And so, it's very important for us to interact regularly with our community, engage with our community, keep them informed of the process. And this is truly one of the first times in history, I believe, where a large user base and community is getting backstage access into the actual process of building a game as complex as Star Atlas.

Michael (10:53): And so, a part of that as well is that we're really fostering this ecosystem for collective gameplay.
So, guilds: encouraging the gamers to join forces to play across this universe that we're building right now.

Michael (11:07): And so, the first event that we hosted this year was called COPA, the Council of Peace Assembly. And this really was a showcase, a place for guilds to highlight the innovation that they're bringing, their recruiting strategies, their intended methods of engaging with the game, of which there are many. We have 28 different career paths in the game that you can take, and many different options, from player vs. environment, single-player missions that are all scoped out, to deep space PVP with permanent death and destruction of your assets. And so, how you choose to engage is really entirely up to the user; there's plentiful options. But doing so with a community and group of other players is probably going to be to the benefit of everyone involved.

Michael (11:48): And so, we're really looking to cultivate these relationships between members and help guilds recruit and bring more people in and create strong bonds with their members. And the Council of Peace Assembly was an event that was dedicated exclusively to the guilds.

Michael (12:01): We sent out a request to all the guilds across our ecosystem. We had 39 submissions from groups that wanted to present their vision for participation in Star Atlas. We had to narrow that down, so we selected 15, based on a set of criteria of presentation materials. And then we hosted an all-day event. We brought in some friends and family and existing partners and some new partners. A relationship with iBUYPOWER, which is one of the largest PC manufacturers in the world, for some custom-branded Star Atlas gear. We actually gave away a top of the line, 3090 Ti-equipped PC to one community member. It was an absolute beast of a machine. I'm kind of jealous, envious that I couldn't win it myself. But-

Brian (12:40): Yeah. Yeah, right.

Michael (12:42): ... great PC.
But in any event, the whole point is it was an eight-hour event. 15 different guilds presented, and we had some activities in between and side tracks that people could participate in. But it was these guilds just presenting, again, their intended approach to Star Atlas, whether that's playing the game, creating a business, creating technology, and how they want to recruit and actually play.

Michael (13:02): Moving forward, just towards the end of July, we hosted our first product launch event. And that was called 426 Live. And this is something that is a bit memetic. I won't go too deep into the background of it, but in working with the community so closely and trying to provide as much information as possible to them about anticipated timelines for product rollouts, we often referred to the next release coming in four to six weeks. Didn't always or frequently hit those timelines. And so therefore, four to six just became the community meme whenever a projected target date was released.

Michael (13:39): And so, we just embraced that. And we launched 426 Live. And we think it's really appropriate, just given we had real products coming out. So, on this last one, we rolled out a brand new marketplace program, as I described earlier. And one of the biggest releases of the year so far is the official rollout of the Star Atlas DAO. Baseline functionality of the platform with locking in emissions, using the Tribeca framework as part of that overall suite. We launched a Web 3.0 affiliate program; a referral program for people that are bringing new members in. We call that Star Path. And then we also, again, solidified the relationship with IBP, iBUYPOWER. And we do have options available now for people to go out and buy Star Atlas-branded PCs, mouse pads, headphones, peripherals. And it's just really cool to see this Star Atlas-branded gear. And that whole relationship, by the way, just came about because King, the representative there, happened to be a member of our community.
And he's just really excited about what's going on across Star Atlas and wanted to get us involved.

Brian (14:34): I can't think of another project, I don't think, that has this level of a rabid fan base that wants to be engaged in this key decision-making at this kind of a level. You mentioned a lot here, between all the guilds and the conferences you had around that and the 39 guilds wanting to all present their vision here and the referral programs you guys have; now also, the Star Atlas DAO. Let's focus a little bit on the DAO in particular. What is this DAO? What impacts in decision-making do they have on the game as a whole? And who can participate in this? Can I just come in one day and show up? What are the mechanics of this DAO?

Michael (15:08): Yeah, I think the concept of DAOs, decentralized autonomous organizations, in general, is pretty radical in ideology and what they propose. I would say, even more so in the case of Star Atlas. Here we are, spending enormous time and resources in creating out and building out intellectual property in the form of a gaming product, with the sole intention of eventually handing this ecosystem over. And it really does emerge into, as I said earlier, immersive digital environments; the idea of the metaverse. The idea of a place where people can go outside of reality. They can get themselves involved in social aspects and work, in education, in commerce; do your shopping there. All things that you can do online today, I think, in some way, are going to live in the metaverse.

Michael (15:50): I think it will displace a lot of industry. And I think gaming is one of the core applications there. But we're taking this idea of building the centerpiece of all of that, the gaming product and the ecosystem, and ultimately transitioning control over to the DAO.
Which again, in my opinion, is a very radical proposal: to essentially just transfer all of the value that's been created.

Michael (16:08): In terms of who can participate, this is, I think, a pretty commonplace model. We have a governance token. It's called POLIS. It's actually a dual token ecosystem. Atlas is our in-game transactional currency, and POLIS is the governance token. But anybody, of course, can acquire POLIS and participate in the DAO. And what I would extend that beyond is, what we've released now is really the universal DAO, the pinnacle DAO. This is the top level of a hierarchical structure.

Michael (16:35): There are elements within the game, within the lore, where political strategy becomes important. And that's at the faction level, it's at the regional level, and it's at the local level. And you can think of this as a county, city, state, federal government, if we were to think of it in the modern context of the way that politics is structured. And so, there are gameplay elements that will be influenced. And then there's really the universal DAO, which was just released. And that is to govern the future of Star Atlas itself.

Michael (17:03): Now, while I think that this is a paradigm shift and almost antithetical to the way that most companies are built, I also fundamentally embrace fully the idea that significantly more value can be created through these decentralized ecosystems, by opening the pathways to be bidirectional, as opposed to trying to capture all value, retain all value, and create protective walled gardens around it. This is why layer one protocols are succeeding. This is why Solana is succeeding. The idea that you can create primitives in a base layer protocol, that other people can build on top of, is mutually beneficial. It's symbiotic. And we want to do the same thing with the metaverse.

Michael (17:43): And so, that's really the idea. That's the vision.
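The POLIS lock-to-govern model Michael describes (built with the Tribeca framework mentioned earlier) can be illustrated with a toy vote-escrow calculation. This is a generic sketch of the ve-token idea, not Star Atlas's actual parameters; the linear weighting and 4-year maximum lock are assumptions for illustration:

```python
def voting_power(locked_amount: float, lock_secs: int,
                 max_lock_secs: int = 4 * 365 * 24 * 3600) -> float:
    """Toy vote-escrow weight: power scales linearly with lock duration,
    capping out at max_lock_secs (assumed 4 years here)."""
    frac = min(lock_secs, max_lock_secs) / max_lock_secs
    return locked_amount * frac

# 1,000 POLIS locked for 1 year vs. 4 years
YEAR = 365 * 24 * 3600
print(voting_power(1_000, YEAR))      # 250.0  (quarter weight)
print(voting_power(1_000, 4 * YEAR))  # 1000.0 (full weight)
```

The design intuition: longer lockups signal longer-term alignment, so they earn proportionally more governance weight.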
We take the idea of the gaming product that attracts people in, it builds this digital society, and then many different functions, features, applications, programs, and economies can be built on top of it, around it, and tangential to it.

Michael (17:58): And so, that's where we're going with this. Now, this is not a short path. America wasn't founded in a day either. So, we have an ethos and a philosophy behind the path to decentralization and how we get from a point where a lot of the content is directed and governed by us, as a more centralized entity at Automata, to one in which Automata is really just a contractor to the DAO. We actually work for the community. We work for the DAO itself. And the DAO is making decisions. The final point I'll make here is that it's specifically because of this philosophy that I think a product like Star Atlas has an indefinite and perpetual life cycle. Because now you're not reliant on the studio to build the 2.0 version and the 3.0 version. It can actually be something that is constantly evolving over time by every contributor around the world that wants to participate in this.

Brian (18:49): Yeah. I love that analogy to a layer one blockchain. That opened up my eyes originally, with seeing that anyone from the community could just open a GitHub issue and, if they were technical enough, share their vision on how this could improve. And Solana's very open to that. And seeing you guys also position yourself in that way, I think, is really cool, especially as it relates to a AAA game. It's turning the whole industry on its head there.

Brian (19:09): So we deep-dived a little bit into the DAO. You also mentioned a couple other components that you guys have live today, being Star Path, and also your new marketplace. How can end users today interact with the Star Atlas ecosystem, as far as those two respects?
And is there anything else that they should be aware of as far as interacting with the Star Atlas ecosystem today?

Michael (19:26): Well, first and foremost, I encourage everybody to be informed and educated on what we're doing, what the vision is, what the road ahead looks like. Because I emphasize this all the time: it is a very long journey ahead. We have many years of development. However, our philosophy on development releases is also not conforming to that of a traditional studio. So, we're not building the whole game for 10 years, in isolation, and then rolling that out. We're rolling out components. And I'll speak to one of those in just a moment. But first things first, I would encourage people to read through our white paper, read through our economics paper, read through our quarterly State of the Economy report that was published last month. And also, go check out some of our cinematics, some of our studio trailers, some of our in-engine trailers. It'll really give you a sense of the quality that we're developing to, and what the lore and story behind this is.

Michael (20:15): So, that's the first thing. Once people are comfortable with that, options do exist to purchase ships on our marketplace. That's the predominant asset that's out right now, even though land and crew members and components and buildings and structures and all of these things will be available in the future. But for now, the ships are the core asset that people can purchase. They can do that through our marketplace. And it was important for us to get the first, even if simplistic, admittedly simplistic, gaming feature out, which we did on December 17th of last year.

Michael (20:42): And we call that SCORE, or Faction Fleet. SCORE is Ship Commissions on Remote Expeditions. But Faction Fleet. And this is a utility driver for those assets that people have already purchased.
It's a way to support your faction through sending or loaning your ships out to the faction, is the story behind it.

Michael (20:59): And the player is responsible for managing a set of four different resources. And to the extent that they are keeping those restocked, then the play-to-earn economy is actually activated. So, those players are earning Atlas today. That's one thing you can do.

Michael (21:12): I would also encourage everybody to get in Discord because a lot of conversation and strategy and guild formation and guild recruiting, these things are all taking place right now. A lot of speculation around what gameplay mechanics are going to look like, but the Automata team are there as well. And we're interacting frequently and giving guidance and feedback and giving just our current thinking on the state of affairs. But going forward, the timing of this podcast is fortuitous. Going forward, very soon, we'll make some announcements around the next 426 Live event for this year.

Michael (21:43): And I don't want to reveal exactly what's going to be announced there, but people do know that we have a couple of things in the pipeline, which includes the first Unreal Engine client. And that's called the Showroom. And this is the first time you'll have access to a downloadable client. You can run it on your PC, enter into the Unreal Engine-immersive world, summon your ships, inspect your ships, walk around them, see them at scale. And for anyone that's seen our videos and trailers related to Showroom, I'm sure they can attest to this. But seeing something like a Pearce X4, which is a two-meter-long air bike, essentially. It's like a hovering motorcycle. Next to some of our largest ships, the Commander class ships that are 300, 400, 700 meters long, you get a real sense of the value of the asset that you've purchased and what it's going to be capable of in-game.

Michael (22:30): So, that's up and coming.
A new locking mechanism for Atlas, which provides benefits to the marketplace. And the last thing I'll add here is, we've also been spending quite a bit of time developing the underlying story behind Star Atlas. And we do have a couple of releases this year, including a graphic novel, which is a comic book format, comic book style, written media that tells the story of Star Atlas before the current era, at least the in-game current era, as well as an upcoming podcast sometime this year, which is a storytelling podcast. Not interview style, but it's a storytelling-style podcast that speaks to another aspect, another region in the Star Atlas universe. So, a lot of exciting things up and coming. But I think join the community and read through some of the materials and just have a good time with us in Discord.

Brian (23:19): That sounds great. I won't press you for any more spoilers there. We'll wait for the next 426 event. The Showroom in particular sounds really exciting. I'm stoked about that. Zooming out now, a little bit longer-term view, you've spoken a little bit about this in your guys' role in merging the AAA gaming and the Web 3.0 worlds, but how do you see these two worlds, that are pretty separate today, converging? And how do you see Star Atlas' place in both of those?

Michael (23:44): Well, as a long-time gamer, I see immense value in Web 3.0 gaming, not only for the true asset ownership, not only for the play-to-earn, but again, this idea that you can mutually benefit through co-creation of some idea. I think it's enormous. That, in and of itself, is a paradigm shift that traditional studios would need to get comfortable with. The idea that, "Hey, maybe it makes sense actually for you not to create and retain all of the value, but share in that value and enable a community to co-create that value with you." I think that that is going to be a challenge for almost every traditional business, not just studios, for that matter.
Moving into the era of decentralized governance and DAOs, and again, Web 3.0, just broadly speaking.

Michael (24:28): But again, referring to being a longtime gamer and spending probably tens of thousands of hours in games over my life, it's great to be able to recapture some of that value, even just through the asset ownership itself or through the account progression, as opposed to being inherently restricted and prohibited from being able to turn around and sell gold that you've farmed or sell a character that you've leveled up. We actually encourage that. We want to facilitate that. You created the value; you get to keep it.

Michael (24:57): So, I think Web 3.0 gaming is absolutely the future. I think that incorporating elements like NFTs and cryptocurrency-based assets into the gameplay adds an enormous amount of value. I think for blockchain in general, gaming is going to be a massive contributor to mass adoption of blockchain and cryptocurrency. I think one of the big issues with crypto today is that it's great at attracting the speculator. Anytime you have a high-volatility environment, people that have the risk tolerance will enter into the ecosystem to try to capture some portion of that.

Michael (25:30): DeFi was a good step, by the way, in more utility-driven functionality, although still with a large speculative component. Gaming is different. Gaming is the entertainment value. You cannot overlook the utility. In economic terms, you cannot overlook the utility that is derived by a sense of escapism when joining a game environment. Now, whether or not blockchain was a part of that and whether or not play-to-earn was a part of that, if you create a great product, if you create a great game, people will come to play it.
And so now, what we're doing is focusing on that great gameplay and that escapism that people desire out of engaging in video games, but then also enhancing that and saying, "Look, if you want to take it a step further, you can actually derive true financial benefit from this as well. And that's a component that you can layer into your gameplay strategy or not. Play as simple as you want or as complex as you want."

Michael (26:20): But I do want to emphasize this idea of, really, the development methodology that we use across Star Atlas, which is actually abstracting logic from a game engine and building that logic on the blockchain, using smart contracts. And this exposes the world to the whole idea of permissionless systems and censorship resistance and self-sovereign ownership of assets. The true inception of this philosophy was with the creation of Bitcoin, 12 years ago now, I guess, or 13 years ago. And so, because we're taking game logic, like in the case of Faction Fleet, all of your resource management is done on chain. There are already guilds out there that are creating their own programs, with their own UI, that interface with ours and interact with that and add more utility to our players. Now, our players can go use a third-party service. And there was no contractual relationship between us and that provider.

Michael (27:14): So, the idea that you can initiate this type of innovation, because we're using blockchain as, really, the backend server technology, to me just creates enormous potential for growth. So, I think it's definitely the future. I think the way that we're doing things at Automata, and for Star Atlas, really attempts to capture all of the potential value. Humbly, I feel that we've been leaders in the space, that we have presented a big vision. But I also feel that we've executed well to date. And we will continue to do so.
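As an aside for readers: the Faction Fleet upkeep described above (the four SCORE resources are fuel, food, ammo, and toolkits) can be sketched off-chain, purely as an illustration. The function names and the linear burn model below are invented for this example and are not Star Atlas's actual on-chain program.

```javascript
// Hypothetical sketch of Faction Fleet-style upkeep logic.
// In the real system this state lives in an on-chain account and the
// checks run inside a Solana program; this is only an illustration.

const RESOURCES = ['fuel', 'food', 'ammo', 'toolkits'];

// Burn every resource down linearly over `hours`, flooring at zero.
function deplete(state, hours, burnPerHour = 1) {
  const next = {};
  for (const r of RESOURCES) {
    next[r] = Math.max(0, state[r] - hours * burnPerHour);
  }
  return next;
}

// Play-to-earn only accrues while every resource is stocked.
function earningActive(state) {
  return RESOURCES.every(r => state[r] > 0);
}
```

Under this sketch, a loaned fleet keeps earning only while `earningActive` stays true, which is the restocking loop the player manages.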
And we will continue to be leaders, not only on Solana, but across blockchain and Web 3.0 as a whole.

Brian (27:50): That's awesome. I think that's a great way to wrap things up here. I agree. I think the Solana space is lucky to have you guys as pioneers in pushing forward AAA gaming on Solana. One question we always love to ask anyone, at the end of our discussion here, is: who is a builder that you admire in the Solana ecosystem?

Michael (28:09): I almost don't want to answer the question just because picking out one is so difficult. I think, realistically, what we've seen on Solana is, in and of itself, explosive growth, a Cambrian explosion of innovation. Especially thinking about the early days in late 2020 and early '21, where we had to build a lot of the tools that we needed ourselves. And now so many tools are coming online. And you and I were chatting, just before this, about the Solana summer camps and the hackathons and 18,000 submissions to the latest summer camp for project proposals.

Brian (28:42): Pretty unreal.

Michael (28:43): It is absolutely unreal. So, it's very difficult to isolate one. I think what Metaplex is doing is phenomenal because they're developing some of those same primitives that I'm referring to, that we're actually developing as well, in a different capacity. Magic Eden has done a fantastic job with the marketplace and with the NFT and the PFP drops and cultivating an environment for NFT purchases for industries like gaming as well. So, I think they're doing great. I'd be remiss if I didn't mention Phantom. You guys are awesome. So, maybe I'll just close with that and say I absolutely am thrilled about the Solana ecosystem. Any challenges that we are facing, I am sure, are short-term. And I think Solana's done an amazing job of cultivating a great, welcoming environment for developers and great innovation.

Brian (29:26): I couldn't agree more. Michael Wagner, thanks so much for coming on the show.
It was a really great discussion. You mentioned a lot throughout here about how folks can get engaged with Star Atlas. Make sure you join the Discord and check out the website. Any closing thoughts here?

Michael (29:38): No, that's it. But please do stay tuned. Our 426 Live event is coming up soon. I didn't even mention it, but we have this web client that's coming out this year as well. And we'll probably speak to that at the 426 Live event. We're pretty much @staratlas on any of the socials, or staratlas.com. And you can find all the links there. But yeah, I look forward to meeting some new people in our community soon.

Brian (29:58): Yeah, we'll have to have you back on to talk about that web-based environment and more exciting announcements for 426. Thanks so much, Michael.

Michael (30:05): Thank you, Brian.

GlobalCryptoPress.com - Cryptocurrency News Live
Lithosphere (LITHO) GameFi Project Jot Art Raises $55 Million Round Led by KaJ Labs ACP & Psalms Capital to Launch Cross-Chain Metaverse Finesse RPG Preview Season Goes Live!


Sep 1, 2022 · 3:21


Cryptocurrency News Live and Breaking in Real-Time! Current crypto coin prices, analysis, and predictions from the Global Crypto Press Association.
https://www.globalcryptopress.com/2022/09/lithosphere-litho-gamefi-project-jot.html
KaJ Labs · Jot Art · WebGL (browser)
https://kajlabs.org
https://lithosphere.network

Sixteen:Nine
Alex Epshteyn, Zignage


Jul 6, 2022 · 34:54


The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT

If a company wants to hang its business hat on the proposition that it is very good at visualizing real-time data to screens, it helps to have a big, very familiar client that heavily uses that sort of thing. A small New York City start-up called Zignage has that in the New York Stock Exchange - providing and maintaining a platform that shows the numbers and trends charting on screens around the hyper-kinetic trading floor on Wall Street. The company grew out of an NYU media lab and spent its first few years working mostly behind the curtains, developing signage and data-handling capabilities for software firms and end-user clients. But a few years ago, the company made the decision to develop a brand and start selling its data-centric capabilities directly to end-users. I had a great chat with Alex Epshteyn, the CEO and Founder of the company, about how it got started, where it's headed, who it all serves, and how there can be a huge gulf between software shops that can take a number from a shared data table somewhere, and running mission-critical, hyper-secure visualizations on a stock exchange floor.

Subscribe to this podcast: iTunes * Google Play * RSS

TRANSCRIPT

Alex, thank you for joining me. Can you give me a rundown on what Zignage is all about, how it started and how long you've been around?

Alex Epshteyn: Absolutely. Thank you for having me, Dave. Zignage started in 2009 formally, and we started at the NYU Incubator while I was doing my graduate work at the Media Lab at NYU, and suffice to say the company was more interesting than the graduate work. So I started doing that, even though I'm from the east coast and this doesn't typically happen, it kinda happened here. So initially, conceptually, we were gonna get into the digital out-of-home space and we were gonna build an auction backend where people can bid for spots on digital signs.
So kinda a slightly novel idea, especially in digital signage, and we couldn't do a big enough raise, and then we found a number of these sort of remnant advertising platforms coming into the market. And we decided, since I have a pretty good little black book of enterprise clients, and we had built the platform to about 50% at that point, in mid to end of 2009, let's try our hand at some enterprise folks. And what ended up happening is a trajectory that basically pushed us for about eight years, which is: we built a middleware and a toolkit, essentially our own toolkit, that enabled us to very quickly build CMSs and anything related to that, data bindings for third-party systems like CRM systems and ERP systems, a variety of backends essentially, and we essentially entered the OEM space. So we built products for other companies. Some of them were large, some of them were small. We had a tremendous amount of NDAs and non-competes, as you can imagine. These companies would not like you to advertise your own stuff while you were building it for them, and typically we would have maybe one or two of these customers at the same time. So from 2009 to about 2017, maybe a little bit later even, we basically did work for third parties and we built a lot of different solutions, and around 2018, we decided that we were gonna attempt to productize. That means essentially build our own front-facing product, become a brand, and move away from a pure sort of project solution, even though we had a product in there. But it was a product for us, not so much for the end customer. And so we got into the market, and in the meanwhile we had two direct customers during almost all that time. NYU was one. We had a number of schools at NYU that we were able to pitch, and successfully had running, so NYU Law School, NYU Engineering School, where I was a student, and then NYSE, where we initially partnered with Thomson Reuters.
So Thomson Reuters did the data and most of the application stack, actually, and what we provided is a device management framework and advanced players to run the WebGL and all the other things that they needed to run for the New York Stock Exchange. This was under the NYSE-Euronext regime, which has since been bought by the Intercontinental Exchange. This was in 2017, which was a formative year for us. As I mentioned, NYSE under the new ownership came to us and said, “Look, Thomson Reuters is relatively expensive and essentially they're reselling us their data, how about you guys take on their responsibility?” You get nine months to replicate, and you get this support contract that basically takes over for them, at a discount for them, but it was a nice option for us. We took on the challenge. Because in these intervening years we had built up so much experience and know-how dealing with real-time sources, real-time data sources, and WebGL specifically, we could make things pretty bulletproof, whereas perhaps some other HTML5 technology that is fairly popular in digital signage is maybe not as robust or maybe not as performant. So we took that toolkit, applied it over essentially everything at the New York Stock Exchange, and took the contract over, and we did that successfully. So at the New York Stock Exchange today, they're actually running two separate solutions from us. They have our more standard on-prem solution for their marketing group, and then they have a much more customized, almost like an OEM version for their trading real-time data, which now comprises a number of financial data widgets.

So if I'm at the NYSE and I'm looking down on the floor, or I'm walking around the floor with all the guys with the funny jackets and everything, those various dashboard screens that I see with all the pricing indicators and everything else, that's all being driven by you?

Alex Epshteyn: That is correct.
So everything essentially above the workstation level, everywhere above the trader level, if you just look up above the 5'8” level from the ground, you'll basically be looking at our solutions. It actually is the full gamut of our capabilities. We have synchronized video, real-time widgets for financial data consumption, charting types of things and a lot of different ticker technologies that we've custom built, and some of our generic ones, and streaming as well. The only other company that works with us at the site is Haivision, so they provide the backend system and supplementary streaming solutions. So we consume their feeds and also feed them.

They're a video distribution company?

Alex Epshteyn: That's right. So we're actually partnered with them. So they're one of our partners in the space. We like working with them, they are a nice Canadian company to say the least, and I know some of the original folks that sort of constituted the company, and they have grown as a company tremendously through the years. So we really like working with them.

Yeah, this must have been a really big holy shit moment for you guys when you got that deal, because it's not like winning a hundred-location QSR chain or something, this is the New York Stock Exchange. It's on the TV every day with endless photos and everything else, and it's mission critical. Like you can't say, oh, we're just doing a software update and we'll be back in 10 minutes?

Alex Epshteyn: Indeed, and the escalations we get are pretty hardcore. We have just a few minutes to get things going, and philosophically, we try to blend some aspects of redundancy with a lot of resiliency, because redundancy itself, as some folks who deal with these sorts of mission-critical situations know, could itself present its own set of problems, right? So you want the system, the platform itself, to be as resilient or high-availability as possible, to use a term out of the server space.
So yeah, it was a huge thing for us, and ultimately, we specialized in a lot of financial services, and non-retail banking is a more generic category or an area we do very well in, and we work with some integrators in the space that are known for it as well in terms of channel. Currently our CTO is actually the chief architect of the Thomson Reuters solution. He came on board with us a year ago, a year and a half ago, as a full-time hire. He was a consultant for many years after Thomson Reuters left the customization space, and he worked with us for a long time, and then finally our CTO left to do other things, and Steve came on board. So we're very well positioned for this work.

So for your company, if you had to do an elevator pitch saying what all you do, what do you rattle off for them?

Alex Epshteyn: I think what we would do is, as you mentioned, mission-critical types of usages, whichever vertical, right? We've done things with SCADA. We've done things in transportation that I wish I was at liberty to say, maybe soon, and it doesn't have to just be financial data. It could be sports feeds. It could be building services, things of that sort that are critical for the use. That's one of our specialty points.

The other is, I would say, while we're very happy to have relationships with a number of hardware companies, we still have really some high-end hardware that we field. So what we do is, for very demanding applications, not necessarily mission-critical ones, but those overlap obviously, we provide a full-stack solution, and these players, we're getting into the realm of show-control types of players, really beefy and professional-level graphics capabilities. So we do sell those. Those are fully our stack, and this way we can guarantee basically the solution, as opposed to having us do a certain portion, another system integrator do another, and so forth.

The last thing I would say is, while we still support some level of OEM work, we currently have two customers that we work with.
Our business model changed a bit in the last three years of supporting them. We have our standard SaaS business, and in some cases we modified it for on-prem. So it's already flexible, but we also have a platform-as-a-service offering to really support those OEM customers. So it's a lot less expensive in volume, very scalable, and I would say those are the things that really make us stand out. It's real-time data, data visualization, a full-stack solution with hardware to do very difficult things often, and finally, configuration where people assume really ad-hoc customization. There's an assumption there, right? If you're doing something very bespoke, the assumption there is that it's gonna be insanely expensive and take a long time to build, and that's true if you haven't built two dozen variants of it and you don't have a toolkit to basically assemble it from parts like a LEGO set, which we do.

I would assume that your calling card when you go in to talk to opportunities, when you can say, yeah, we do the New York Stock Exchange, we do all the data handling on that, and you could imagine it's more than a little bit secure and mission-critically oriented. I suspect that makes the target customers feel pretty comfy?

Alex Epshteyn: It does, and even before them, it makes consultants who put us on the bid lists, and generally are interested in finding parties that can actually fulfill the scope, call us. So we don't really advertise much, and that's gonna change, I think, maybe next year. We're gonna do maybe a marketing splash at some point next year.

Right now, it's all word of mouth, and we do get a lot of calls. There's a lot of projects we actually pass on because they're not in our sweet spot and they're distractions, but the projects that we do take on are often difficult.
We even do work in retail, as I mentioned to you, and the types of deals we take on are always really heavy data integration and visualization, where they are very automated workflows, there's almost no humans involved, where the humans are basically special events, and then the system essentially corrects for automation again.

Yeah, I've been writing about data visualization for 6-7 years now, and when I started writing about it, it was pretty rare beyond FIDS displays and things like that, but it's now pretty standard. I'm curious, because you guys are obviously super deep and experienced in that area, when you see all the other software companies saying, yeah, we do real-time data, we can do real-time data handling, we can integrate, we have APIs and this and that. When you get into a conversation with a prospect, how do you distinguish what you do versus other companies who say, yeah, we do all that too? Cuz I suspect it's different.

Alex Epshteyn: It is. One of the first things we put on the table is that we can mostly guarantee our resolution-time SLA; nobody else can, pretty much. Most people will be aggressive, pick up the phone and work the problem, but the way that our stuff is built, we can fix the problem. We can guarantee fixing the problem within a certain period of time. Now, it's not inexpensive, but sometimes it's actually affordable for a lot of types of businesses where a fully custom solution would not be.

The other one is that most data visualization takes a lot of shortcuts; it really leverages, not to get too deep in technicalities unless you want me to, basically JavaScript and CSS, the mainstay of HTML5. But all of our data visualizations are built in WebGL. It's like the difference between driving a car on the road and driving a bullet train on tracks, right? There's no interruptions to the bullet train. It'll just go and it'll be on schedule. There's no interruptions. There's no jitter. There's no movement. That sort of paradigm.
So we like to guarantee the behavior of our data visualization, especially dynamic charting or graphing libraries that we use and implement. It's actually extremely difficult to build something that you would think is easy, like a ticker or crawler. Whatever data is feeding it, I'm sure we both have probably seen a lot of instances where it stutters, it has problems, it doesn't refresh on time, and it doesn't deal well with different fonts and whatnot. That's just not true of our solution. Our solution is, I would say, cutting edge on dynamic data visualization.

So for an end user or for an integrator, they have to educate themselves that just because a company says they can do real-time data doesn't mean they can really do it. That means they might be able to reflect a number that's in a data table and show it on a screen, and that's quite a bit different from what you're talking about.

Alex Epshteyn: It is, and maybe the third aspect is most of the companies we work with already have accounts with the big data warehouse places like Refinitiv, IBS, and a number of others, so we are already super familiar with these backends. In fact, we have things that monitor the APIs. We routinely do a lot of monitoring of real-time or just dynamic sources. So this is a huge value add in the industry, and I wish more providers would do that, because ultimately, if you are a data-fed platform, it's up to you to tell the customer something's failing on the backend, because they won't know, they'll assume all sorts of things. But you need to critically have the tools inside to tell what's going on, and if you build it out in a smart way, you can also alert the right people at the right time that something's happening and to look into it. So you can be proactive about it. That's the third item, I'd say.

They also change the schemas and everything without telling people, right?

Alex Epshteyn: That's true. But it's a super exciting space.
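One way to see the difference being pointed at here: a stutter-free crawler derives its scroll position from the clock rather than incrementing it each frame, so a late or dropped frame never accumulates drift. A minimal sketch, with function names and the WebGL hookup invented for illustration rather than taken from Zignage's actual code:

```javascript
// Hypothetical sketch of a frame-rate-independent ticker/crawler.
// Instead of nudging the offset a fixed amount every frame (which
// stutters whenever a frame is late), derive the pixel offset
// directly from elapsed time, so timing hiccups never compound.

function tickerOffset(nowMs, startMs, pxPerSecond, contentWidthPx) {
  const elapsed = (nowMs - startMs) / 1000; // seconds since the ticker started
  const travelled = elapsed * pxPerSecond;  // total distance scrolled so far
  return travelled % contentWidthPx;        // wrap around for a seamless loop
}

// In a browser this could drive a WebGL draw call each animation frame:
// requestAnimationFrame(t => draw(tickerOffset(t, startMs, 120, width)));
```

Because the offset is a pure function of the timestamp, two renderers running at different frame rates show the ticker in exactly the same place at the same moment.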
Once you have the core technology built out, you could really do a lot in terms of consuming this kind of data, and I think generally in signage, we're in a slightly privileged position regarding this, but I think there's a move in the industry towards generative and procedural content, away from more codec-heavy content. Although there's obviously gonna be overlap for many years for both. We certainly support codec playback in a variety of ways, synchronized, on different players and so forth, and there are nice innovations like AV1 coming onto the market nowadays. But you could do so much more with generative dynamic content; it's a big difference. For instance, we had a client that wanted us to expose much more of the controllability of a layout, a standard design tool, inside of our platform. Now, typically we would not wanna do that, because there are some nice tools on the market like Premiere, like After Effects, real tools that they generally use. But the problem that certain customers, power users I would say, are having is they don't wanna have to export an After Effects file and have it encoded in something; that's time, and that's sometimes money, because they do it externally, because they don't have a kit on-prem or in the cloud.

So what we've done is basically have a simpler version of something like Adobe Premiere or After Effects that lets them make quick changes in some keyframing or some transitory effects, and they don't have to put the whole thing through a codec. So that seemed to really resonate with certain power users that we have, and directionally, it's the area that we'd like to innovate in.

Is it important to make a distinction between generative data for business applications and generative data for artwork? Because I see a lot of video walls out there that are set and forget.
They're driven by generative data, and it's just these abstract visuals that swirl and kind of bloom and everything else, but that's very different from, I think, what you're talking about, where what's on the screen in terms of charting, what appears, is based on what the data is influencing; it's shaping what appears?

Alex Epshteyn: That's correct. A lot of generative content is canned, right? It's almost like a video, basically, and some experts, some design shops, typically would change it for you, and it becomes evergreen content, day two, day three, and day four. What we try to do is something a little bit different, and we work with some really nice design companies as well. So just to be completely clear, we don't do the design ourselves. We typically either partner with a company that's really good at it, and sometimes the company brings us into the opportunity, right? The consultant can also spec us to partner with somebody, or the end client may have relationships with companies that do this very well. But I would say the formulation, the recipe, for this kind of thing, to make it dynamic, is a few things, and that's where this sort of generative content becomes more like a Zignage type of problem, as opposed to something that you could hire a design house to basically build for you, right? One is that you could update content even while the filters or the generative piece is running. Separately, you might be able in the CMS to have the tools to change the filters of the generative option, just as I explained prior, and finally, have trigger conditions. We mostly do narrowcasting, right? There are some great companies in the space. I think they're very good at that kinda stuff. They do a lot of smart interactive signage. We do a little bit of that, but we mostly do narrowcasting. So in our world, trigger conditions come from some sort of backend system. It could be a calendaring system, it could be something smarter, right? Where it's not just a boolean condition.
It could be a multivariable condition that basically has to tick off a list of things that can happen. And that's really where we can add a lot of value, and it overlaps with the kind of work we do with the New York Stock Exchange. We generally term it as business logic. So we really do some smart business logic, and I think there's actually a lot of growth in that area once we apply modern machine learning to it, to make it extensible and go further.

But with that kind of approach, you have an ability to modify a piece of content continuously, right? It's a living piece of generative content, even if it's not dynamically fed with financial data sources or sports data sources.

I haven't seen your user experience, but I'm guessing people listening to this are thinking, this is really interesting, but I'd be terrified to try to use this software. What's it actually like?

Alex Epshteyn: You're not gonna be terrified, because we are one of the proponents of nearly or fully automated systems. So often what we do for non-power users is give a build-out of the software that our customers use, and then everything is essentially this business logic that I'm describing to you.

It's kinda like a headless CMS?

Alex Epshteyn: It's like a headless CMS for the non-power users. For the power users that really like their tools, like Adobe, you could just use a Dropbox or some sort of hot-folder mechanism. We're also partnered with a number of DAM solutions. There's a lot of workflow that happens in digital asset management solutions, including tag-based workflows. We do a lot of tag-based workflows nowadays, where we consume the tags that are done in a DAM, and essentially they find their way onto the right players at the right time. And on the flip side, we do have a standard suite. It's actually going through a major overhaul at the end of the year, what we call Z Cast 6. It does have a number of these power tools. But our CMS generally follows a certain idea.
It was popular for a while and it's hard to execute unless you have our kinds of customers: what we call an additive UX. So it's the opposite of something like Microsoft Office, right, where you have a billion features and there's a long learning curve if you wanna learn everything. What we do is really try to identify the user story behind what needs to be done. We create the access controls that really expose certain parts of the CMS, and even within the same context, add or remove tools as needed. That creates a situation where there's almost really minimal training. I think one of the biggest problems we're trying to solve for our direct customers, or channel customers, is the attrition that happens in major enterprises for users of digital signage, right? Like, one of the biggest problems we face even in huge banks is the fact that digital signage is consigned to a webmaster subcategory. Like, they manage the CMS that's published on their portal, and then somebody in that team or a few people in that team handle digital signage as well. So that's historically been a problem for our whole industry, and what we're trying to tackle is to remove both the friction of adoption and also try to give them the tools that they need, and if they use tools, bridge those tools. That's our philosophy on that end. So what's the structure of your company? Are you a private company?  Alex Epshteyn: We are a private company. We're an LLC in New York, and we're about 20 people. Most of our development used to take place, until very recently, in Ukraine, because one of my partners and I are from there originally. So this topic is in the news, unfortunately. Forget about our team; the fact is, cities in the eastern part of Ukraine are partially destroyed, but luckily a lot of the folks that we would use are in the western part of Ukraine now, and we continue to use them, but not all of them, unfortunately. 
So you're having to manage your way through that along with other things, right?  Alex Epshteyn: We did, and they're very talented folks. We have worked on so many projects.  Yeah, it's interesting. I was trading LinkedIn messages with another company, and he was talking about operating out of Odessa, and they're still, like, opening QSRs and things like that and putting in menu boards.  Alex Epshteyn: Good for them. That's exactly what they should do.  Yeah, and I was thinking, boy, all the other challenges you have out there, like supply chain and everything else, layer in a hot war on top of that. Good lord. Alex Epshteyn: Our problems are very small compared to the real problems in Ukraine and the world. But it's a small world. You sort of face these things as they come.  Well, hopefully some way or other it gets resolved. I'm not quite sure how, but this was great. Can you let people know where they can find your company online?  Alex Epshteyn: Sure. It's Zignage.com  So signage with a Z on the front?  Alex Epshteyn: Correct. The last word is Zignage. You can find me on LinkedIn, Alex Epshteyn. That's where we mostly do the minimum branding that we do.  All right, but we'll be looking for more later in the year, right?  Alex Epshteyn: Absolutely. We're excited to make some announcements in the transportation space, some more in the financial industry and some more in retail. All right. Great to hear it's going well for you. Thanks so much for spending the time with me. Alex Epshteyn: Thank you, Dave. My pleasure.

Digital Spacecast
Chia Cat Project Green Wings Compassion - Web3, GameFI and Chia

Digital Spacecast

Play Episode Listen Later Mar 14, 2022 25:01


Today we got to interview GRIG from the https://wings.green #Chia CAT project and get some insight into the metaverse they are creating for charitable causes and outreach. It was a great chat and we are really looking forward to seeing what this team produces next. The WebGL we are seeing so far is really cool, can't wait to see more!
Green Wings + Compassion CATS - https://wings.green
Shop our Store (receive 3% or 5% off unlimited items w/Youtube channel membership): https://shop.digitalspaceport.com/
Join our Youtube channel to get access to perks like Store discounts, Discord access + more: https://www.youtube.com/channel/UCiaQzXI5528Il6r2NNkrkJA/join
COINEX (Buy XCH): https://www.coinex.com/register?refer_code=xv67310K
800TB blueprint:
Article: https://digitalspaceport.com/chia-farm-template-750-tib-for-10000/
Video: https://youtu.be/yqG9vlH6Lwc
Chia Earnings Sheet: https://docs.google.com/spreadsheets/d/1lXviouJ1dZhT3s_4wQgdySnTMJJX3E5LXcJJv5xftdg/edit#gid=905970511
Enterprise Chia Farm DIY

CPQ Podcast
Interview with Tom Connard, Co-Founder & COO at 3D Source

CPQ Podcast

Play Episode Listen Later Mar 13, 2022 31:11


In this episode you hear from Tom Connard, Co-Founder & COO at 3D Source, a US company that provides visual capabilities. Tom was previously on the CPQ Podcast, and in this episode he talks about Unreal Engine 5, photorealism, Pixel Streaming, WebGL and when to use it, immersive experiences that are not possible in the real world, digital photography and its benefits, digital twins, product animations and more.
Web: www.3dsource.com
LinkedIn: https://www.linkedin.com/in/tomconnard/
Email: tom@3dsource.com

Ubuntu Touch Audiocast
Ubuntu Touch Q&A 116

Ubuntu Touch Audiocast

Play Episode Listen Later Feb 7, 2022 46:27


A big improvement has just been announced, with the release of Clickable 7.0. This is the software which UT uses to build apps, and this version has new features across the board. There is a Clickable tool which will handle the format and library changes for those on the older version. It has way more options than any developer is likely to need and overall has more features than Xcode. It makes the whole process of creating apps for UT easy and seamless.

Work on Xenial has not stopped. The rotating lock screen now works. Thanks to Capsia for that change.

Some ports now have background blur. A wallpaper behind the UI was a very simple arrangement and posed no risk to stability. Making things more fancy has the potential to introduce bugs and degrade performance, but we have brought in a modification which works well. Porters have the final say, but if they want, they can implement a translucency layer, so that when the app drawer is expanded from the left side, the wallpaper is visible in blurred form behind it. This feature will be in the next OTA release. Something similar was tried a long time ago and was tested on the Nexus 5. The performance was dreadful, so it is good to see that the problems have been solved.

Dialer has had an improvement. There is now a predictive feature which will offer a suggestion based on your first few numbers punched in.

WebGL in Morph is going to be in the next OTA release. Some of the very old devices such as the BQ will unfortunately not be able to make use of it. An improvement which you might not have noticed is that when swiping away tabs in Morph, the action is now more fluid.

Alfred has been continuing his refinement of the Pixel 3a. Landscape keyboard now works and there is an improvement in battery life. A really big one is the fact that you can at last adjust the in-call voice volume. That was a fundamental usability issue. "It will no longer blow Mark Shuttleworth's ears – only his mind" :)

OTA-22 was delayed by a bit. 
There was a synchronisation fault which, unusually, affected the rc channel too. Apps started to break so we delayed and reverted. It will be sorted out soon and procedures changed around a bit to avoid future problems.

DOU Podcast
Yuriy Artyukh, founder of CodeRiver, on math problems in job interviews, YouTube streams, and his first failed business | What's Next

DOU Podcast

Play Episode Listen Later Jan 11, 2022 70:05


In this new episode of "What's Next?" we talked with Yuriy Artyukh about the role of mathematics in his life, why he doesn't like Robert Pattinson, why he started streaming on YouTube, how he lost money on his first business, and how the university health retreat helped him start his career.

The Toby Gribben Show
Dr. Ani Atanasova

The Toby Gribben Show

Play Episode Listen Later Nov 5, 2021 10:03


About our platform:
What we did: https://multiplayerteamtraining.com
Image ads: https://vimeo.com/456050280
Image ads: https://vimeo.com/425878356
Multiplayer Team Training is a game-based/gamification platform that allows us to turn games into exciting learning or events experiences. We can plug any type of quiz-related format + videos, texts, images into the games, in a multiplayer way people enjoy, but we are measuring it all at the back-end. It allows assessment and practising on the go. We have individual reports, team reports, global team leaderboards, etc. Also, we track users' progress. We can make competitions within a certain team or between teams of different offices within one company. It's like Lego to use. All games are multiplayer, although many can be played in single mode. WE HAVE A GAME FOR EVERY PLAYER. Every game has a different learning goal. Games are available on the App Store, Google Play and, for desktop, there is a WebGL build that can be easily integrated into third-party sites. Every gameplay is about 5-10 min, but due to the competitive spirit the replay rate is very high (over 66%, and 35% play over 10 times). On average every normal user can easily retain and memorize 70 questions in just 1 hour of play. Very suitable if you have dry and hard-to-remember information. Our platform works equally well for 7-77 year olds.
Overview of our games and type of content + each game's gameplay and specs videos: https://multiplayerteamtraining.com/overview-and-gameplay-all-games/
4 different types of clients & partners and target markets:
- Business companies who need game-based/gamified training as final clients (literally everyone). They can give us their content, can select from our ready programs, or can get our partners to consult them on how to organize the training.
- Events companies for real and virtual events. We have 2 separate solutions for real events and virtual events, and the games in each are different.
- Business: training centres, training professionals, consultants (they can be our clients and/or our content contributors + resellers)
- Schools and universities
See acast.com/privacy for privacy and opt-out information.

Linux Action News
Linux Action News 178

Linux Action News

Play Episode Listen Later Feb 28, 2021 25:52


Red Hat is still in damage control mode, a new hacker laptop called Framework makes bold promises, and what Google is spending money on in the Linux kernel. Plus why we've recently switched back to Firefox, and more.

Code Chefs - Hungry Web Developer Podcast
Creative Development with WebGL, SVG, and Canvas

Code Chefs - Hungry Web Developer Podcast

Play Episode Listen Later Jan 30, 2021 58:09


What is creative development? Creative developers are a specialized set of frontend developers who focus more on the "wow" factor. They work…

The Disruptors
181. The Divergence of AR and VR and Connectivity, Collaboration and Entertainment in a Post COVID World | Tony Parisi

The Disruptors

Play Episode Listen Later Apr 20, 2020 39:31


Tony Parisi (@auradeluxe) is a virtual reality pioneer, serial entrepreneur, and angel investor. He co-created several 3D graphics standards, is Head of AR/VR Ad Innovation at Unity Technologies, and authored several O'Reilly Media books on virtual reality and WebGL.

XR for Business
The YouTube of 3D Models, with Sketchfab CEO Alban Denoyel

XR for Business

Play Episode Listen Later Oct 23, 2019 47:48


If YouTube is the world's compendium of videos of cute cats and unboxings, then the Sketchfab platform is well on its way to becoming the equivalent cultural database of user-generated 3D objects. CEO Alban Denoyel discusses the origins and the future of the service with Alan in this episode. Alan: Welcome to the XR for Business Podcast with your host, Alan Smithson. Today's guest is Alvin Denyuel– I screwed it up already. How do I say it? Alban: "Deh-Noh-Yell." Alan: "Denoyel," OK. Today's guest is Alban Denoyel from Sketchfab, the world's largest platform to publish and find 3D content online. Imagine it's like the YouTube for 3D. Prior to Sketchfab, he worked for four years in the 2D world of photography. He loves making 3D content with photogrammetry or VR sculpting. He's a graduate of the ESSEC Business School in Paris, France. If you want to learn more about the wonderful work they're doing, you can visit sketchfab.com. Alban, welcome to the show. Alban: Hey, Alan, thanks. Alan: It's my absolute pleasure. I've been looking forward to this episode for so long. I've had a Sketchfab account for about four years now and I've only managed to publish a couple of things on there. But it's so cool. I mean, you're literally making the YouTube of 3D models. How did you guys come up with that concept? Where did that come from? Alban: Actually, it initially came from a technical challenge, I guess. My– So, Sketchfab is built on top of WebGL, which is the first web-based framework to display 3D graphics in the browser, and WebGL was initiated by Mozilla back in 2011. And my co-founder and CTO, Cedric, had been a 3D programmer in the gaming industry for 15 years, and was hired by Mozilla to make one of the very first demos of WebGL for the launch of Firefox 4. And then he just started poking around the tech and started building an MVP to essentially help the people he was working with in the 3D industry to share and display 3D assets with just a URL and a browser. 
Alan: Incredible. I mean, you guys have come a long, long way. How long have you been doing that? When did it start? Alban: Cedric started in 2011, I met him early 2012, and we officially launched in March 2012. So it's been more than seven years. Alan: Wow. Seven years. And how many 3D models are hosted on Sketchfab today? Alban: I stopped counting at three million. [chuckles] Alan: So there's over three million 3D assets hosted on Sketchfab today. And I would assume over the next 10 years, as everything moves to 3D, that number is going to probably end up at 3 billion, at some point. So why do people need Sketchfab? Alban: So people use it mostly in two ways: either to publish content or to find content. Publishing content means sharing, embedding, displaying, hosting 3D files that they have. These are 3D creators, or brands, or architects, or any number of industries. And so they have 3D files and they need a way to embed them on a web page or share them with someone who doesn't have 3D software to open them, or use them in VR and AR and so on. And then other people come to Sketchfab just because they need content. It could be for presentations, or it could be to build video games. It can be to build AR/VR experiences, it can be to make a video or two-dimensional learning. Again, the use cases are pretty diverse as well. Alan: Let's start with the way– where you guys came from, because up until recently it was a free pla

The Frontside Podcast
Svelte and Reactivity with Rich Harris

The Frontside Podcast

Play Episode Listen Later Sep 4, 2019 52:11


Rich Harris talks about Svelte and Reactivity. Rich Harris: Graphics Editor on The New York Times investigations team. Resources: Svelte Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrade, and the data layer. This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. TARAS: It's actually really nice, Rich, and I'm really, really happy to have a chance to actually chat with you about this, because Svelte is a really fun piece of technology. In many ways, it's interesting to see our technology evolve and our industry evolve through innovation, real innovation. I think Svelte 3 has really been kind of that next thought-provoking technology that kind of makes you think about different ways that we can approach problems in our space. So, really excited to chat with you about this stuff. RICH: Well, thank you. Excited to be here. TARAS: I think quite a lot of people know, Rich, about your history, like how you got into what you're doing now. But I'm not sure if Charles is aware, so if you could kind of give us a little bit of a lowdown on where you kind of come from in terms of your technical background and such. RICH: Sure. I'll give you the 30-second life history. I started out as a reporter at a financial news organization. I had a Philosophy Degree and didn't know what else to do with it. So, I went into journalism. This was around the time of the great recession. And within a few weeks of me joining this company, I watched half of my colleagues get laid off and it's like, "Shit, I need to make myself more employable." 
And so gradually, sort of took on more and more technical responsibilities until I was writing JavaScript as part of my day job. Then from there, all these opportunities kind of opened up. And the big thing that I had in mind was building interactive pieces of journalism, data-driven, personalized, all of that sort of thing, which were being built at places like the New York Times, and The Guardian, and the BBC. That was the reason that I really wanted to get into JavaScript. And that's guided my career path ever since. CHARLES: It's interesting that this D3 and all that did come out of journalism. RICH: It's not a coincidence because when you're working under extreme time pressure and you're not building things with a view to maintain them over a long period of time, you just need to build something and get it shipped immediately. But it needs to be built in a way that is going to work across a whole range of devices. We've got native apps, we've got [inaudible], we've got our own website. And in order to do all that, you need to have tools that really guide you into the pit of success. And D3 is a perfect example of that. And a lot of people have come into JavaScript through D3. CHARLES: And so, are you still working for the same company? RICH: No. That's ancient history at this point. CHARLES: Because I'm wondering, are you actually getting to use these tools that you've been building to actually do the types of visualizations and stuff that we've been talking about? RICH: Very much so. I moved to The Guardian some years ago. And then from there, moved to Guardian US, which has an office in New York. And it was there that I started working on Svelte. I then moved to the New York Times and I'm still working on Svelte. I've used it a number of times to build things at the New York Times and the people have built things with it too. And so, yeah, it's very much informed by the demands of building high performance interactive applications on a very tight deadline. 
CHARLES: Okay, cool. So I've probably used, I mean, I'm an avid reader of both The Guardian and the New York Times, so I've probably used a bunch of these visualizations. I had no idea what was driving them. I just assumed it was all D3. RICH: There is a lot of D3. Mike Bostock, the creator of D3, was a linchpin at the graphics department for many years. Unfortunately we didn't overlap. He left the Times before I joined the Times, but his presence is still very much felt in the department. And a lot of people who are entering the industry are still becoming data vis practitioners by learning from D3 examples. It's been a hugely influential thing in our industry. TARAS: How long is a typical project? How long would it take to put together a visualization for an article that we typically see? RICH: It varies wildly. The graphics desk is about 50 strong and they will turn around things within a day. Like when Notre Dame burnt down a couple of months ago, my colleagues turned around this interactive, scroll-driven WebGL 3D reconstruction of how the fire spread through the cathedral in less than 24 hours, which was absolutely mind-blowing. But at the same time, there are projects that will take months. I work on the investigations team at the Times. And so, I'm working with people who are investigating stories for the best part of a year or sometimes more. And I'm building graphics for those. And so, it's two very different timescales, but you need to be able to accommodate all of those different possibilities. CHARLES: So, what does the software development practice look like? I mean, because it sounds like some of this stuff, are you just throwing it together? I guess what I mean by that is, I guess the projects that we typically work on, three months is kind of a minimum that you would expect. So, you go into it, we need to make sure we've got good collaboration practices around source control and continuous integration and testing and all this stuff. 
But I mean, you're talking about compressing that entire process into a matter of hours. So what, do you just throw right out the window? What do you say? "We're just doing a live version of this." RICH: Our collaboration processes consist of sitting near each other. And when the time calls for it, getting in the same room as each other and just hammering stuff out on the laptop together. There's no time for messing around with continuous integration and writing tests. No one writes tests in the news graphics, it's just not a thing. CHARLES: Right. But then for those projects that stretch into like three months, I imagine there are some. Do you run into like quality concerns or things like that where you do have to take into account some of those practices? I'm just so curious because it sounds like there's actually, the difference between two hours and two months is, that's several orders of magnitude and complexity of what you're developing. RICH: It is. Although I haven't worked on a news project yet that has involved tests. And I know that's a shocking admission to a lot of people who have a development background, but it's just not part of the culture. And I guess the main difference between the codebase for a two-hour project and a two-month project is that the two-month project will strive to have some reasonable components. And that's, I think, the main thing that I've been able to get out of working on the kinds of projects that I do is instead of just throwing code at the page until it works, we actually have a bit of time to extract out common functionality and make components that can be used in subsequent interactives. So, things like scroll driven storytelling, that's much easier for me now than it was when I first built a scroll driven storytelling component like a couple of years ago. CHARLES: Yeah. 
That was actually literally my next question: how do you bridge that, given that you've got kind of this frothy experimentation, but you are being, it sounds like, very deliberate about extracting those tools and extracting those common components? And how do you find the time to even do that? RICH: Well, this is where the component-driven mindset comes in really handy, I think. I think that five or 10 years ago, when people thought in terms of libraries and scripts, there wasn't that good unit of reusability that wasn't sort of all-encompassing. Like, a component is just the right level of atomicity, or whatever the word is. It makes sense to have things that are reusable but also very easy to tweak and manipulate and adapt to your current situation. And so, I think that the advent of component-oriented development is actually quite big for those of us working in this space. And it hasn't really caught on yet to a huge degree because, like I say, a lot of people are still coming with this kind of D3 script-based mindset, because the news industry, for some interesting and historical reasons, is slightly out of step with mainstream development in some ways. We don't use things like Babel a lot, for example. CHARLES: That makes sense, right? I mean, the online publication is not like it's a React application, or it's not like the application is all-encompassing, so you really need to have a light footprint, I would imagine, because it really is a script. What you're doing is scripting in the truest sense of the word, where you essentially have a whole bunch of content and then you just need to kind of -- RICH: Yeah. And the light footprint that you mentioned is key because, like most news sites, we have analytics on the page and we have ads and we have comments and all of these things that involve JavaScript. And by the time our code loads, all of this other stuff is already fighting for the main thread. 
And so, we need to get in there as fast as we can and do our work with a minimum fuss. We don't have the capacity to be loading big frameworks and messing about on the page. So that again is one of these sort of downward pressures that kind of enforces a certain type of tool to come out of the news business. TARAS: A lot of the tooling that's available, especially on like the really fatter, bigger frameworks, the tools that you get with those frameworks, they benefit over long term. So if you have like a long running project, the weight of the abstractions, you've experienced that benefit over time and it adds up significantly. But if you're working to ship something in a day, you want something that is just like a chisel. It does exactly what you want it to do. You want to apply it in exactly the right place and you want to get it done exactly, like you want the outcome to be precise. RICH: That's true. And I think a lot of people who have built large React apps, for example, or large Ember apps, they sort of look at Svelte and think, "Well, maybe this isn't going to be applicable to my situation," because it has this bias towards being able to very quickly produce something. And I'm not convinced that that's true. I think that if you make something easier to get started with, then you're just making it easier. If you build something that is simple for beginners to use, then you're also building something simple for experts to use. And so, I don't necessarily see it as a tradeoff, I don't think we're trading long-term maintainability for short term production. But it is certainly a suspicion that I've encountered from people. TARAS: This is something that we've also encountered recently. It's been kind of a brewing discussion inside a front side about the fact that it seems to be that certain problems are actually better to rewrite than they are to maintain or refactor towards an end goal. 
And we found this, especially as the tools that we create have gotten more precise and more refined and simplified and lighter, it is actually easier to rewrite those things five times than it is to refactor it one time to a particular place that we want it to be. And it's interesting, like I find this to be very recent, this idea is blossoming in my mind very recently. I didn't observe this in the past. CHARLES: Do you mean in the sense that like if a tool is focused enough and a tool is simple enough, then refactoring is tantamount to a rewrite if you're talking about 200 or 300 lines of code? Is that what you mean? TARAS: Yeah. If you're sitting down to make a change or you have something in mind, it is actually easy to say, "Let's just start from scratch and then we're going to get exactly the same place in the same amount of time." But this kind of mantra of not rewriting makes me think about that, makes me question whether that's actually something that is always the right answer. RICH: I definitely question that conventional wisdom at all levels, as well. I started a bundler called Rollup as well as Svelte more recently. And Rollup was the second JavaScript bundler that I wrote, because the first one that I wrote wasn't quite capable of doing the things that I wanted. And it was easier to just start from scratch than to try and shift the existing user base of its predecessor over to this new way of doing things. Svelte 3 is a more or less complete rewrite. Svelte has had multiple, more or less, complete rewrite. Some of them weren't breaking changes. But Svelte itself was a rewrite of an earlier project that I'd started in 2013. And so in my career, I've benefited massively from learning from having built something. But then when the time comes and you realize that you can't change it in the ways that you need to change it, just rewrite it. 
And I think that at the other end of the spectrum, the recent debate about micro frontend has largely missed this point. People think that the benefit of the micro frontend is that people don't need to talk to each other, which is absolute nonsense. I think the benefit of this way of thinking about building applications is that it optimizes for this fact of life that we all agree is inevitable, which is that at some point, you're going to have to rewrite your code. And we spend so much energy trying to optimize for the stability of a code base over the long term. And in the process, lock ourselves into architectural and technical decisions that don't necessarily make sense three or four years down the line. And I think as an industry, would be a lot better placed if we all started thinking about how to optimize for rewrites. CHARLES: So for those of us who aren't familiar, what is the debate surrounding micro frontends? This is actually something I've heard a lot about, but I've actually never heard what micro frontends actually are. RICH: Yeah. I mean, to be clear, I don't really have a dog in this fight because I'm not building products, but the nub of it is that typically if you're building a website that maybe has like an admin page, maybe it has a a settings page, maybe it has product pages, whatever. Traditionally, these would all be parts of a single monolithic application. The micro frontend approach is to say, "Well, this team is going to own the settings page. This team is going to own the product page." And they can use whatever technologies they want to bring that about. And the detractors sort of attack a straw man version of this, "You're going to have different styles in every page. You're going to have to load Vue on one page. You're going to have to load React on the other page. It's going to be a terrible user experience," when actually its proponents aren't suggesting that at all. 
They're suggesting that people from these different teams coordinate a lot more that are free to deviate from some kind of grand master architectural plan when it's not suitable for a given task. And darn right. I think it means that you have a lot more agility as an engineering organization than you would if you're building this monolithic app where someone can't say, "Oh, we should use this new tool for this thing. We should use microstates when the rest of the organization is using Google docs." It's not possible. And so, you get locked into the decisions of a previous generation. CHARLES: Right. No, it makes sense. It's funny because my first reaction is like, "Oh my goodness, that's a potential for disaster." The klaxon's going to go off in your head, but then you think, really then the work is how do you actually manage it so it doesn't become a disaster. And if you can figure that out, then yeah, there is a lot of potential. RICH: Yeah. People always try and solve social problems with technology. You solve social problems with social solutions. CHARLES: Right. And you have to imagine it too, it depends on the application, right? I think Amazon, the Amazon website is developed that way where they have different teams that are responsible even down to little content boxes that are up on the toolbar. And the site doesn't really, it shows, right? Like it shows like this is kind of like slapped together, but that's not what they need. They don't need it to not look like there's slight variation with the different ways that things behave. They need to be showing for their business to work. They need to be showing the right thing at the right time. And that's the overriding concern. So having it look very beautiful and very coherent isn't necessarily a thing. Same thing in Spotify, used as another example of this. 
I didn't know if it was called micro frontends, but I know that they've got a similar type of thing. But for them, clearly, the experience and having it look coherent is more important. And so, they make it work somehow. And then like you're saying, it probably involves groups of people talking to other groups of people about the priorities. So yeah, it doesn't sound to me like adopting micro frontends guarantees one particular set of outcomes. It really is context dependent on what you make of it. RICH: Totally. TARAS: I'm curious though, so with Svelte, essentially for your reactivity engine, you have to compile to get that reactive behavior. RICH: Yeah. TARAS: How does that play with other tools when you actually integrate it together? I've never worked with Svelte on a large project, so I can't imagine what it looks like at scale. I was wondering if you've seen those kinds of use cases and whether there are any side effects from that. RICH: As you say, the reactivity within a component only applies to the local state within that component, or to state that is passed in as a prop from a parent component. But we also have this concept called a store. And a store is just an object that represents a specific value, and you import it from svelte/store. And there are three types of store that you get out of the box: a writable, a readable and a derived. And a writable is just var count = writable(0), and then you can update that and you can set it using methods on that store. Inside your markup, or in fact inside the script block in the component, you can reference the value of that store just by prefacing it with a dollar sign. And the compiler sees that and says, "Okay, we need to subscribe to this store's value and then assign it and apply the reactivity." And that is the primary way of having state that exists outside the component hierarchy.
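The writable store behavior just described is small enough to sketch in plain JavaScript. This is a hand-written illustration of the idea, not Svelte's actual implementation; `createWritable` is a hypothetical stand-in for `writable` from svelte/store, written out so the shape of the store is visible.

```javascript
// Hand-rolled sketch of a writable store: an object with a `subscribe`
// method that immediately calls the callback with the current value and
// returns an unsubscribe function, plus `set` and `update` methods.
// `createWritable` is a hypothetical stand-in for svelte/store's writable.
function createWritable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      fn(value); // deliver the current value immediately
      subscribers.add(fn);
      return () => subscribers.delete(fn); // unsubscribe
    },
    set(next) {
      value = next;
      subscribers.forEach(fn => fn(value));
    },
    update(fn) {
      this.set(fn(value));
    }
  };
}

// Because the shape is this small, any external state container
// (Redux, microstates, ...) can be adapted to it.
const count = createWritable(0);
const seen = [];
const unsubscribe = count.subscribe(v => seen.push(v));
count.set(1);
count.update(n => n + 1);
unsubscribe();
count.set(99); // not recorded: we unsubscribed
console.log(seen); // [0, 1, 2]
```

The dollar-sign syntax in a component is essentially the compiler writing the `subscribe` and unsubscribe calls for you.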
Now, I mentioned that writable, readable, and derived are the built-in stores that you get, but you can actually implement your own stores. You just need to implement this very simple contract. And so, it's entirely possible to use that API to wrap any state management solution you have. So you can wrap Redux, you can wrap microstates, you can wrap whatever your preferred state management solution is; you can adapt it to use with Svelte. And it's very idiomatic and streamlined. It takes care of unsubscriptions when the component is unmounted. All of that stuff is just done for you. CHARLES: Digging a little bit deeper into the question of integration, how difficult would it be to take wholesale components that were implemented in Svelte and integrate them with some other component framework like React? RICH: If the component is a leaf node, then it's fairly straightforward. There is a project called react-svelte which is, I say project, it's like 20 lines of code, and I don't think it's been updated for Svelte 3, which I should probably do. But that allows you to use a Svelte component in the context of a React application, just using the component API the same way that you would [inaudible] or whatever. You can do that inside a React component. Or you could compile the Svelte component to a web component. And this is one of the great benefits of being a compiler: you can target different things. You can generate a regular JavaScript class and you've got an interactive application. Or you can target a server side rendered component, which will just generate some HTML for some given state, which can then later be hydrated on the client. Or you can target a web component, which you can use like any other element in the context of any framework at all.
And because it's a compiler, because it's discarding all of the bits of the framework that you're not using, it's not like you're bundling an entire framework to go along with your component. And I should mention, while I'm talking about being able to target different outputs: via a NativeScript project, you can target iOS and Android the same way. Where it gets a little bit more complicated is if it's not a leaf node. If you want to have a React app that contains a Svelte component that has React [inaudible], then things start to get a little bit more unwieldy, I think. It's probably technically possible, but I don't know that I would recommend it. But the point is that it is definitely possible to incrementally adopt Svelte inside an existing application, should that be what you need to do. CHARLES: You said there's a NativeScript project, but it sounds to me like you shouldn't necessarily need NativeScript, right? If you're a compiler, you could target Android and iOS directly instead of having NativeScript as an intermediary, right? RICH: Yes, if we had the time to do the work. I think the big thing there would be getting styles to work, because Svelte components have styles, and a regular style tag just contains CSS, and you can't just throw CSS into a native app. CHARLES: Right. Sometimes, I feel like it'd be a lot cooler if you could. [Laughter] RICH: NativeScript really is doing a lot of heavy lifting. Basically, it provides a fake DOM. Svelte targets that DOM instead of the real DOM, and then NativeScript turns that into the native instructions. CHARLES: Okay. And you can do that because you're a compiler. TARAS: Compilers have been on our radar for some time, but I'm curious, what is your process for figuring out what it should compile to? How do you arrive at the final compiled output?
Did you manually write that code and then say, "I'm going to now change this to be dynamically generated"? How do you figure out what the output should be? RICH: That's pretty much it. Certainly, when the project started, it was a case of: I'm going to think like a compiler, I'm going to hand convert this declarative component code into some framework-less JavaScript. And then once that's done, work backwards and figure out how a compiler would generate that code. And in the process, you learn certain things about what the points of reusability are, which things should be abstracted out into a shared internal helper library, and what things should be generated inline. The whole process is designed to produce output that is easy for a human to understand and reason about. It's not what you would imagine compiler output to be like; it's not completely inscrutable. It's designed, even down to the level of being well formatted, to be something that someone can look at and understand what the compiler was thinking at that moment. And there are definitely ways that we could change and improve it. There are some places where there's more duplication than we need to have. There are some places where we should be using classes instead of closures for performance and memory benefits. But these are all things that, once you've got that base, having gone through that process, you can begin to iterate on. CHARLES: I'm always curious about when the proper time is to move to a compiler, because when you're doing everything at runtime, there's more flexibility there. But at what point do you decide, "You know what? I know that these pathways are so well worn that I'm going to lay down pavement. I'm going to write a compiler." What was the decision process in your mind about, "Okay, now it's time"? Because I think that's maybe not a thought that occurs to most of us. It's like, "I had to write a compiler for this."
Is this something that people should do more often? RICH: The notion of 'this should be a compiler' is one that is worth having at the back of your head. I think there are a lot of opportunities, not just in the UI framework space but in general: is there some way that we can take this work that is currently happening at runtime and shift it into a step that only happens once? That obviously benefits users. And very often we find that it benefits developers as well. I don't think there was a point at which I said, "Oh, this stuff that's happening at runtime should be happening at compile time." It was more, I mean, the actual origin of it was a brain worm that someone else infected me with. Judgment is a very well known figure in the JavaScript world. He had been working on this exact idea but hadn't taken it to the point where he was ready to open source it. But he had shared his findings and the general idea, and I was immediately smitten with this concept of getting rid of the framework runtime. At the time, the big conversation happening in the JavaScript community was about the fact that we're shipping too much JavaScript and it's affecting startup performance. And so the initial thought was, "Well, maybe we can solve that problem by just not having the runtime." And so, that was the starting point with Svelte. Over time, I've come to realize that that is maybe not the main benefit. That is just one of the benefits that you get from this approach. You also get much faster update performance, because you don't have to do this fairly expensive virtual DOM diffing process. Lately, I've come to think that the biggest win is that you can write a lot less code. If you're a compiler, then you're not hemmed in by the constraints of the language, so you can almost invent your own language. And if you can do that, then you can do the same things that you have been doing with an API in the language itself.
And that's the basis of our system of reactivity, for example. We can build these apps that are smaller and, by extension, less bug prone and more maintainable. I just wanted to quickly address the point you made about flexibility. This is a theoretical downside of being a compiler. We're throwing away the constraint that the code needs to be something that runs in the browser, but we're adding a constraint, which is that the code needs to be statically analyzable. And in theory, that results in a loss of flexibility. In practice, we haven't found that to affect the things that we can build. And I think that a lot of the time when people have this conversation, they're focusing on academic concepts of flexibility. But what matters is: what can you build? How easy is it to build a certain thing? And so if, empirically, you find that you're not restricted in the things that you can build, and you can build the same things much faster, then that academic notion of flexibility doesn't, to my mind, have any real value. CHARLES: Hearing you talk reminded me of a quote that always stuck with me from early in my career. I came into programming through Perl. Perl was my first language, and Perl is a very weird language. But among other things, you can actually change the way that Perl parses code. You can write Perl that changes what counts as valid Perl, if that makes any sense. And when asked about this feature, Larry Wall, the guy who came up with Perl, said something like, "You program in Perl, but really what you're doing is programming in Perl with a set of semantics that you've negotiated with the compiler." And that was a funny way of saying, "You get to extend the compiler yourself." Here's the default set of things that you can do with our compiler, but if you want to tweak it or add to it or modify it, you can do that. And so, you can utilize the same functionality that makes it powerful in the first place.
You can kind of inject that whole mode of operation into the entire workflow. Does that make sense? That's a long way of asking: have you thought about, and is it possible to, extend the Svelte compiler as part of a customization or as part of the Svelte programming experience? RICH: We have a very rudimentary version of that, which is preprocessing. There's an API that comes with Svelte called preprocess. And the idea there is that you pass in some code and it will do some very basic things: it will extract your styles, it will extract your script and it will extract your markup. And then it will give you the opportunity to replace those things with something else. So for example, you could write some futuristic JavaScript and then compile it with Babel before it gets passed to the Svelte compiler, which uses acorn and therefore needs to be given valid JavaScript so that it can construct an abstract syntax tree. As a more extreme version of that, people can use [inaudible] to write their markup instead of HTML. You can use Sass and Less and things like that. Generally, I don't recommend that people do, because it adds these moving parts, and it leads to a lot of bug reports of people just trying to figure out how to get these different moving parts to operate together. And it means that your editor plugins can't understand what's inside your style tag all of a sudden, and stuff like that. So, it definitely adds some complexity, but it is possible. At the other end, at a slightly more extreme level, we have talked about making the code generation part pluggable, so that, for example, the default renderer and the SSR renderer are just two examples of something that plugs into the compiler that says, "Here is the component, here's the abstract syntax tree, here's some metadata about which values are in scope," all of this stuff, "now go away and generate some code from this."
We haven't done that so far, partly because there hasn't been a great demand for it, but also because it's really complicated. As soon as you turn something into a plugin platform, you magnify the number of connection points and the number of ways that things could go wrong by an order of magnitude. And so, we've been a little bit wary of doing that, but it is something that we've talked about, primarily in the context of being able to do new and interesting things like targeting WebGL directly or targeting the command line. There are renderers for React that let you build command line apps using React components, and we've talked about maybe being able to do that. Native is another example. The NativeScript integration, as you say, could be replaced with the compiler doing that work directly, but for that to work presently, all of that logic would need to sit in core. And it would be nice if that could be just another extension to the compiler. But we're talking about a lot of engineering effort, and there are higher priority items on our to do list at the moment. So, it's filed under 'one day'. CHARLES: Right. What are those high priority items? RICH: The biggest thing at the moment is TypeScript integration. Surprisingly, this is probably the number one feature request: people want to be able to write TypeScript inside their Svelte components, and they want to get TypeScript support when they import a Svelte component into something else. They want to get completions and type checking and all the rest of it. A couple of years ago, that would've been more or less unthinkable, but now it's table stakes: you have to have first-class TypeScript support. CHARLES: Yeah, TypeScript is as popular as Babel these days, right? RICH: Yeah, I think so. I don't need to be sold on the benefits. I've been using TypeScript a lot myself.
Svelte is written in TypeScript, but actually being able to write it inside your components is something that would involve hacking around in the TypeScript compiler API in a way that I don't know if anyone on the team actually knows how to do. So, we just need to spend some time and do that. But obviously, when you've got an open source project, you need to deal with the bugs that arise first, so it's difficult to find time to do a big project like that. CHARLES: So, devil's advocate here: if the compiler was open for extension, couldn't TypeScript support be just another plugin? RICH: It could, but then you could end up with a situation where there are multiple competing TypeScript plugins, and no one's sure which one is used, and they all have slightly different characteristics. I always think it's better if these things that are common feature requests, that a lot of people would benefit from, are built into the project itself. I really believe in the batteries-included way of developing, and I think this is something that we've drifted away from in the frontend world over the last few years; we've drifted away from batteries included towards do-it-yourself. CHARLES: Assemble the entire thing. Step one, open the box and pour the thousand Lego pieces onto the floor. RICH: Yeah, but it's worse than that, because at least with a Lego set, you get the Lego pieces. It's like if you had the Lego manual showing you how to build something, but you were then responsible for going out and getting the Lego pieces. That's frontend development, and I don't like it. CHARLES: Right. Yeah. I don't like that either. But still, there are a lot of people advocating directly that you really ought to be doing everything completely and totally yourself. RICH: Yes. CHARLES: And a lot of software development shops still operate that way. RICH: Yeah.
I find that the people advocating for that position most loudly tend to be the maintainers of the projects in question. The whole small modules philosophy exists primarily for the benefit of library authors and framework authors, not for the benefit of developers, much less users. And the fact that the people who are building libraries and frameworks tend to have the loudest megaphones means that that mindset, that philosophy, is taken as a best practice for the industry as a whole. And I think it's a mistake to think that way. TARAS: There is also, I think, a sliding scale: as you gain experience, you slide towards wanting granular control, and then, once you've got a lot of experience, you slide back and say, "Okay, I don't want this control anymore." And then you get into a position where you're responsible for tools that your team uses, and now you're back to wanting that control, because you want things to be able to click together. So the way your interest in that control changes over time depends on your experience level and your position in the organization. So yeah, there are definitely different motivating factors. One of the things that we've been thinking a lot about is designing tools that are composable and granular at the individual module level, but that combine together into a system for consumption by regular people. So, finding those primitives that will just click together when you know how to click them together, but that, when you're consuming them, feel like a holistic whole, while at the same time not being monolithic. That's a lot of things to figure out and a lot of things to manage over time, but that's exactly the kind of thing we've been thinking about a lot.
RICH: I think that's what distinguishes the good projects that are going to have a long lifespan from the projects that are maybe interesting but don't have a long shelf life: whether they're designed in such a way that permits that kind of cohesion and innovation tradeoff, if you think of it as a tradeoff. Anyone can build the fastest thing or the smallest thing or the whatever-it-is thing. But building these things in a way that feels like it was designed holistically, but is also flexible enough to be used with everything else that you use, that's the real design challenge. CHARLES: It's hard to know where to draw that line. Maybe one good example of this, and these are actually two projects that I'm not particularly a fan of, but I think they do a good job of operating this way, so I guess in that sense I can be even more honest about it: I don't particularly care for Redux or observables, but in one of our last React projects, we had to choose between using Redux-Saga and Redux-Observable. Redux-Observable worked very well for us. And I think one of the reasons is that they both had to co-exist as their own projects. Redux exists as its own entity, and Observables exist as their own whole ecosystem. And so, they put a lot of thought into what the natural way is in which these two primitives compose together. As opposed to Saga, which, and I don't want to disparage the project because I think it actually is a really good project with a lot of really good ideas, is more just bolted on to Redux. It doesn't exist outside of the ecosystem of Redux, so its ideas can't flourish outside and figure out how it interfaces with other things. The true primitive is still unrevealed there. Whereas with Redux and Observables, you actually have two really, really true primitives.
Now, they're not necessarily my favorite primitives, but they are very refined; they do exactly what they are meant to do. And so when you find how they connect together, that experience is also really good. And the primitive that arises there, I think, ends up being better. Is that an example of what you guys are talking about? RICH: Maybe. [Laughs] TARAS: No, I think so. I mean, it's distilling the core of what you're trying to do to its essence, and then being able to combine things together. That's been the kind of thing that we've been working on at the Frontside. But within this context, it makes me think: how does a compiler fit into that? How does that work with a compiler? When you add the compiler element, my mind just goes poof! [Laughter] CHARLES: Yeah, exactly. That's why I keep coming back to, how do you, and maybe you just have to go through the experience, but it feels like maybe there's this cycle where you build up the framework, and then once it's well understood, you throw the framework away in favor of just wiring it straight in there with the compiler, and then you iterate on that process. Is that fair to say? RICH: Kind of, yeah. At the moment, I'm working on a project like this; I referred a moment ago to being able to target WebGL directly. At the moment, the approach that I'm taking to building WebGL apps is to have WebGL components inside Svelte, in this project called SvelteGL. And we've used it a couple of times at the Times. It's not really production ready yet, but I think it has some promise. But it's also slightly inefficient. It needs to have all of the shader code available for whichever path you're going to take; whatever characteristics your materials have, you need to have all of the shader code. And if we were smart about it, then the compiler could know ahead of time which bits of shader code it needed to include.
At the moment, it just doesn't have a way of figuring that out. And so that would be an example of paving those cow paths. If you do try and do everything within the compiler universe, it does restrict your freedom of movement. It's true. And to qualify my earlier statements about how the small modules philosophy is to the benefit of authors over developers: it has actually enabled this huge flourishing of innovation, particularly in the React world. We've got this plethora of different state management solutions and CSS-in-JS solutions. And while I, as a developer, probably don't want to deal with that, I just want there to be a single correct answer, it's definitely been to the advantage of the ecosystem as a whole to have all of this experimentation. Then there are projects like Svelte that can take advantage of it. We can say, "Well, having observed all of this, this is the right way to solve this problem." And so, we can bake that in and take advantage of the research that other people have done. And I think we have made contributions of our own, but there is a lot of stuff in Svelte, like the fact that data generally flows one way instead of having [inaudible] everywhere, that is the result of having seen everyone make mistakes in the past and learning from them. So, there are tradeoffs all around. TARAS: One thing on the topic of data flow: one thing that I've been struggling to compute is the impact of two-way bindings as opposed to something where you have one-directional data flow. Conceptually, a two-way binding system is really simple; you set a property and it just propagates through. But we don't really have a way of assessing what the true impact of that computation is.
What is the cost of that propagation? I think it's almost easier to see the cost of computation if you have one-directional data flow, because you know that everything between the moment you invoke a transition and computing the next state is the cost of your computation, whereas you don't have a way of measuring that in a two-way binding system. Systems like Ember's run loop, MobX, zones, or Vue's reactivity system all make it really difficult to understand what the real cost of setting state is. And that's something that I personally find difficult, because you have this clarity with one-directional data flow about what it takes to compute the next state, whereas when you're mutating objects and tracking their changes, that cost is almost immeasurable. It just seems like a blob of changes that has to propagate. That's something I've been thinking about a lot, especially with the work that we're doing with microstates, because there, as you're figuring out what the next state is, you know exactly what operations are performed, which might not be the case with a system that tracks changes, like zones or the Ember run loop or Vue. RICH: I would agree with that. The times that I've found it beneficial to deviate from the top-down ideology are when you have things like form elements and you want to bind to the values of those form elements. You want to use them in some other computation. And when you do all that by having props going in and then events going out, and then you intercept the event and then you set the prop, you're basically articulating by hand what the compiler can articulate for you more effectively anyway. And so conceptually, we have two-way bindings within Svelte, but mechanically everything is top down, if that makes sense.
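The arrangement Rich describes, conceptually two-way but mechanically top-down, can be sketched in plain JavaScript. This is an illustration of the idea only, not Svelte's generated code; all of the names here are hypothetical, and the guard flag is the kind of thing a compiler can emit to prevent the binding from re-triggering itself.

```javascript
// Sketch of a "conceptually two-way, mechanically top-down" binding:
// the element pushes values up via an event handler, state flows back
// down via an update function, and a guard flag prevents the change
// from echoing back into the element and looping forever.
function createBinding(getState, setState, updateElement) {
  let updatingFromBinding = false;
  return {
    // simulates the element's input event: push the value up into state
    onInput(value) {
      updatingFromBinding = true;
      setState(value);
      updatingFromBinding = false;
    },
    // simulates the top-down update: push state back into the element,
    // unless this very change originated from the binding itself
    onStateChange() {
      if (updatingFromBinding) return; // guard against the loop
      updateElement(getState());
    }
  };
}

// Hypothetical usage: `text` stands in for component state, and
// `domWrites` records what would be written back to the input element.
let text = '';
const domWrites = [];
const binding = createBinding(
  () => text,
  value => { text = value; binding.onStateChange(); },
  value => domWrites.push(value)
);

binding.onInput('hello');
console.log(text);      // 'hello'
console.log(domWrites); // [] — the guard stopped the echo
```

Writing this by hand for every bound input is exactly the ceremony a compiler can take over.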
CHARLES: Is it because you can analyze the tree top down, and basically understand when you can cheat? This might be really over-simplistic, but it's as if, with the event, you're collecting the water and then you have to carry it way up to the top of the apparatus and let it flow down. But if you can see the entire apparatus, you can say, "Actually, I've got this water and it's going to end up here, so I'm just going to cheat and put it right over there." Is that the type of thing you're talking about, where you're effectively getting a two-way binding, but you're skipping the ceremony? RICH: It's writing the exact same code that you would write if you were doing it using events. But if you were writing it yourself, then maybe you would do something in a slightly inefficient way. For example, with some kinds of bindings, you have to be careful to avoid an infinite loop. If you have an event that triggers a state change, the state change could trigger the event again, and you get this infinite loop. A compiler can guard against that. It can say: this is a binding that could have that problem, so we're going to keep track of whether the state changes as a result of the binding. And so, the compiler can solve all of these really hairy problems that you faced as a developer, while also giving you the benefit of being able to write much less code, and code that expresses the relationship between these two things in a more semantic and declarative way, without the danger. TARAS: This is one of the reasons why I was so excited to talk to you about this stuff, Rich, because this stuff is really interesting. We have a little bit more time, so I just want to mention, because I think that you might find this interesting, the [inaudible], the stuff that we were talking about that I mentioned to you before.
So, I want to let Charles talk about it briefly, because it's interesting; it essentially comes down to managing asynchrony as it ties to the lifecycle of objects. The lifecycle of objects and components is something we deal with on a regular basis, so it's been an interesting exercise experimenting with that. Charles, do you want to give the low down? CHARLES: Sure. It's definitely something that I'm very excited about, so Taras gets an earful pretty much every day. The idea behind structured concurrency, I don't know if you're familiar with it, but people have been using it for a while in the Ember community. Alex Matchneer, who's a friend and oftentime guest on the podcast, created a library called ember-concurrency, where he brought these ideas of structured concurrency to the Ember world. But it's actually very prevalent. There are C libraries and Python libraries. There's not a generic one for JavaScript yet, but the idea is really taking the same concepts of scope that you have with variables and with components, whether they be Ember components, Svelte components, React components or whatever, where you have a tree of parents and children, and modeling every single asynchronous process as a tree, rather than what we have now, which is parallel linear stacks. Some tick happens in the event loop and you drill down, and you either hit an exception or you go straight back up. The next tick of the event loop comes, you drill down into some stack and then you go back up. A promise resolves, you do that stack. With structured concurrency, essentially every stack can have multiple children, and so you can fork off multiple children. But if you have an error in any of these children, it's going to propagate up the entire tree. And so, it's essentially the same idea as components, except applied to concurrent processes.
And you can do some really, really amazing things, because you don't ever have to worry about some process going rogue, and you don't have to worry about coordinating all these different event loops. And one of the things that I'm discovering is that I don't need event loops. I don't really use promises anymore. Actually, I think it was while I was watching your talk about Svelte 3, or maybe it was the blog post you wrote about how we've got to stop saying that virtual DOMs are fast? RICH: Yes, I did. CHARLES: I was reading that one, and it jibed with me, because it's just like, why can't we just go and do the work? We've got the event, we can just do the work. And one of the things that I'm discovering is that with using structured concurrency with generators, I'm experiencing a very similar phenomenon. If there's an error, the stack traces are like three lines long, because you're basically doing the work, and you're executing all these stacks and pausing them with a generator. And then when an event happens, you just resume right where you left off. There's no "we've got this event, let's push it into this event queue that's waiting behind these three event loops, and then we're draining these queues one at a time." It's like, nope, the event happens, you can just resume right where you were. You're in the middle of a function call, in the middle of a [inaudible] block. You just go, without any ceremony, without any fuss. You go straight to where you were, and the stack and the context and all the variables and everything is preserved exactly where you left it. So, it's like taking the book right off the shelf, going right to your bookmark, and continuing along.
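The resume-where-you-left-off behavior Charles describes can be sketched with a plain generator. This illustrates the idea only; it is not a structured-concurrency library, and the names are made up.

```javascript
// Minimal sketch of generator-based resumption: the coroutine pauses
// mid-function at `yield`, and an event resumes it exactly where it
// left off, with local variables intact — no event queue in sight.
function run(generatorFn) {
  const gen = generatorFn();
  gen.next(); // advance to the first yield and pause there
  return event => gen.next(event); // resuming hands the event's value in
}

const log = [];
const resume = run(function* () {
  const first = yield;  // pause until an event arrives
  log.push(`got ${first}`);
  const second = yield; // `first` is still in scope when we resume
  log.push(`then ${second}, after ${first}`);
});

resume('click');
resume('keypress');
console.log(log); // ['got click', 'then keypress, after click']
```

The second log line shows the point: when the second event arrives, `first` is still sitting right there on the paused stack, like the bookmark in the book.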
Rather than when you've got things like the run loop in Ember or the zones in Angular, where you have all these mechanics to reconstruct the context of where you were -- making sure that an event listener created inside of a context either has that context reconstructed or doesn't fire. All these problems just cease to exist when you take this approach. And so, if it's pertinent to this conversation, that was a surprising result for me: if you're using coroutines to manage your concurrency, you don't need event loops, you don't need buffers, you don't need any of this other stuff. You just use the JavaScript call stack. And that's enough. RICH: I'm not going to pretend to have fully understood everything you just said, but it does sound interesting. It's not that dissimilar to Ember's run loop, because if you have two state changes right next to each other -- X += 1, Y += 1 -- you want a single update to result from them. So instead of instrumenting the code such that your components are updated immediately after X += 1, it waits until the end of the event loop and then flushes all of the pending changes simultaneously. So, what you're describing sounds quite wonderful and I hope to understand it better. You have also reminded me that Alex Matchneer implemented this idea in Svelte -- it's called svelte-concurrency -- and when he sent it to me, I was out in the woods somewhere and couldn't take a look at it, so it went on my mental to-do list. You've just brought it to the top of that to-do list. So yeah, we have some common ground here, I think. CHARLES: All right. TARAS: This has been a really, really fascinating conversation. Thank you, Rich, so much for joining us. CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us.
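Rich's X += 1, Y += 1 example -- two state changes, one flush -- can be sketched as below. This is a toy sketch of the batching idea, not Svelte's actual internals: `invalidate`, `flush`, and the explicit end-of-tick call are assumptions made for illustration (a real framework would schedule the flush on a microtask rather than have you call it by hand).

```javascript
// Toy update batching: consecutive state changes mark state dirty, and a
// single flush at the "end of the tick" applies them all at once, so the
// component updates exactly once instead of once per assignment.
const dirty = new Set();
let flushScheduled = false;
let updateCount = 0;

function invalidate(name) {
  dirty.add(name);
  flushScheduled = true;   // the second call finds a flush already queued
}

function flush() {
  if (!flushScheduled) return;
  flushScheduled = false;
  updateCount += 1;        // one component update covers all dirty state
  dirty.clear();
}

let x = 0, y = 0;
x += 1; invalidate("x");
y += 1; invalidate("y");   // same tick: no additional flush is queued
flush();                   // stands in for "end of the event loop"
```

The point of the design is that `invalidate` is cheap and idempotent per tick: however many assignments happen synchronously, the cost of re-rendering is paid once.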
We can be found on Twitter at @thefrontside or over just plain old email at contact@frontside.io. Thanks and see you next time.

UX Coffee 设计咖
#61: The Power of Data Visualization (Uber engineer 消极姐, part 2)

UX Coffee 设计咖

Play Episode Listen Later Oct 12, 2018 34:59


In 2015, Uber hired an entire research team away from Carnegie Mellon University and founded its self-driving car division. The division has automotive engineers, artificial intelligence scientists, and image-recognition experts -- and one team whose job is rather special: the self-driving cars' data visualization platform. That is the team 消极姐 works on. Their work lets Uber's engineers understand just how smart the self-driving cars they are building have become. Timeline 03:35 Why do self-driving cars need data visualization? 08:20 Ultimately, the question is which information is important enough to visualize 12:37 Why can a browser run such complex visualization programs? 16:25 Visually recording air-quality changes in Chinese cities 28:51 Every data visualization is an expression of a personal standpoint Links Official blog of Uber's self-driving data visualization platform: https://eng.uber.com/atg-dataviz/ Twitter of D3.js creator Mike Bostock: https://twitter.com/mbostock visualcomplexity.com WebGL: https://en.wikipedia.org/wiki/WebGL DeckGL: https://deck.gl/ Contact us WeChat official account: uxcoffee Weibo: https://www.weibo.com/uxcoffee Zhihu column: https://zhuanlan.zhihu.com/uxcoffee Twitter: https://twitter.com/ux_coffee Instagram: https://www.instagram.com/ux_coffee/


Frontend Weekend
#19 – Yuri Artyukh teaches how to become the most handsome markup developer of the Runet

Frontend Weekend

Play Episode Listen Later Sep 18, 2017 28:18


Yuri Artyukh, CEO at Coderiver and a YouTube markup streamer, visits Andrei Smirnov on Frontend Weekend. Apologies in advance for the sound quality and the length of the interview -- it was recorded on short notice at the afterparty of the CSS-Minsk-JS conference. - How can a love of math and optimization lead you into the world of frontend? 00:39 - Where did the nickname "the most handsome markup developer of the Runet" come from? 02:34 - What was interesting about building kremlin.ru? 03:31 - Why did he stop writing his popular blog? 04:32 - How did the YouTube streams start, and why not Twitch? 05:56 - Are there plans to grow the channel and subscriber-count goals? 07:53 - What is Coderiver and how was the company founded? 09:58 - Is having only frontend developers a limitation, and how many orders come in on average? 11:09 - What is the point of traveling to conferences with talks? 13:22 - How did Yuri become a specialist in animation, particularly in WebGL? 13:50 - How has he managed not to lose his love of math? 15:49 - Where did the love of salsa come from, and how often must you dance to keep the skill? 16:40 - How does he cope with stuttering during talks? 18:36 - Impressions of CSS-Minsk-JS, and should we expect Yuri at upcoming conferences in Russia? 19:58 - Where do the nicknames akella and flintyara come from? 21:09 - What is the most time one animation from a stream can take? 22:01 - How relevant is the advice to beginning bloggers about posting regularly? 23:42 - React, Angular or Vue? 24:40 - What is a fair salary for a frontend developer?
25:12 - Cooking with a frontend developer 25:53 - Yuri's recommendations: https://thebookofshaders.com and https://youtube.com/user/shiffman 26:50 Related links: 1) ALL YOUR HTML streams – https://youtu.be/M7NGYXMp4JM?list=PLswdBLT9llbjS2o4xreJqgBRjTmkEz6sv 2) Blog – http://cssing.org.ua and https://twitter.com/akella 3) Discussion of Yuri's CSS-Minsk-JS talk – https://soundcloud.com/frontend-weekend/fw-18 4) Support Frontend Weekend – http://frontendweekend.ml 5) Frontend Weekend Patreon – https://patreon.com/frontendweekend

Frontend Weekend
#10 – Vasilika Klimova on moving to Luxembourg, WebGL, and feminism

Frontend Weekend

Play Episode Listen Later Jul 9, 2017 72:22


Vasilika Klimova, frontend developer at Artec 3D, visits Andrei Smirnov and Artem Tsatsin on Frontend Weekend. - Impressions of presenting in English at the AmsterdamJS conference? 00:20 - How much does the famous WebGL talk change from event to event? 03:44 - Isn't she tired of giving the same talk? 05:25 - How much can a trip to a foreign conference improve your language skills? 06:14 - Where should a frontend developer go to start working with 3D and WebGL? 07:07 - How big is the market for WebGL programmers, and how is mobile support? 08:22 - Why didn't Vasilika go into game development, and does she regret it? 11:27 - Are browser games really catching up with conventional ones? 15:11 - Why and when did she move to Luxembourg? 19:16 - How did salary and housing change with the move? 20:07 - How small is Luxembourg compared to Moscow? 22:41 - What does it take to move to Luxembourg and feel at home there? 25:14 - How high-tech are bank websites in Luxembourg? 29:53 - A YouTube question from Dana Batkova https://www.youtube.com/watch?v=f_7TSstGrpg&lc=z12aw1f4bpbgcd0a004cglphbri3vbbxfoo0k 31:35 - How was the Coding Girls course at Moscow Coding School created? 34:20 - Would an equivalent course for men be popular? 39:45 - Why does a percentage of each course go to a charity fund? 42:30 - Why did the course price rise from 30 to 75 thousand rubles? 43:29 - How does socialist feminism differ from regular feminism? 45:26 - Who is the "On Sore Subjects" photo album on her vk page for? 47:05 - Why was Vasilika against nudism and gays in 2015? 49:32 - What was it like working as a deputy's assistant in the Legislative Assembly of Tver Oblast? 54:37 - Thoughts on the renovation project? 56:44 - "If you found yourself in front of Putin, what would you say?" and about Navalny 58:11 - React or Angular? 1:01:30 - What is a fair salary for a frontend developer? 1:03:29 - What to do if your marriage is falling apart at 26? 1:05:21 - Why is Vasilika shortened to Lika?
1:06:49 - Cooking with a frontend developer (pizza pie) 1:07:23 Picks 1:09:07 - Don't learn frameworks, learn the architecture https://habrahabr.ru/post/253297/ - D3: Data-Driven Documents https://d3js.org - JavaScript 3D Library https://threejs.org
