Podcasts about Alan Kay

American computer scientist (born 1940)

  • 116 PODCASTS
  • 187 EPISODES
  • 48m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jun 2, 2025 LATEST
Alan Kay

Popularity chart: 2017–2024



Latest podcast episodes about Alan Kay

Go To Market Grit
Bret Taylor's Journey Leading Salesforce, Sierra & OpenAI


Jun 2, 2025 · 89:48


Over the past two decades, Bret Taylor has quietly helped shape the arc of Silicon Valley. From co-creating Google Maps to steering Facebook, Salesforce, and OpenAI, he's been behind some of the most consequential products in tech. Now, with his new company Sierra, he's starting from zero—again. In this conversation, Bret opens up about how founders navigate identity, why the best ideas often come from everyday friction, and how staying relentlessly focused can unlock real momentum in AI.

Guest: Bret Taylor, Co-Founder of Sierra

Chapters:
00:00 Trailer
00:49 Introduction
01:57 Saving OpenAI
09:15 Overwhelming yet capable of a lot
13:36 Father and founder
16:49 History is written by the victors
22:13 How you price matters
35:58 Stickiest piece of software
49:48 The first realtime social network
55:34 Facebook CTO who rewrote Google Maps
1:02:10 Least known, most impressive
1:11:39 The best way to predict the future
1:16:22 Most personally passionate
1:21:22 Currency of reputation
1:27:17 Away from work
1:28:35 Who Sierra is hiring
1:28:58 What “grit” means to Bret
1:29:18 Outro

Mentioned in this episode: Google Maps, Salesforce, OpenAI ChatGPT, Meta Facebook, X (formerly Twitter), Sam Altman, Elon Musk, Mark Zuckerberg, Google, Marissa Mayer, Excite, MSN, AltaVista, Amazon, Harvey, Airbnb, Coinbase, Apple, John Doerr, Cursor, Codeium Windsurf, Perplexity, xAI, Kleenex, Amazon Web Services (AWS), FriendFeed, Tumblr, Kevin Gibbs, Yelp, Trulia, iOS App Store, Blackberry, Facebook Messenger, Marvel Avengers, Slack, Quip, Leonardo da Vinci, Clay Bavor, Microsoft, Eric Schmidt, Alan Kay, Brian Armstrong, Brian Chesky, Shopify, SiriusXM, Patrick Collison

Links:
Connect with Bret Taylor: X | LinkedIn
Connect with Joubin: X | LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

Lenny's Podcast: Product | Growth | Career
Microsoft CPO: If you aren't prototyping with AI you're doing it wrong | Aparna Chennapragada


May 18, 2025 · 61:12


Aparna Chennapragada is the chief product officer of experiences and devices at Microsoft, where she oversees AI product strategy for their productivity tools and work on agents. Previously, she was the CPO at Robinhood, spent 12 years at Google, and is also on the board of eBay and Capital One.

What you'll learn:
1. How “prompt sets are the new PRDs” and why prototyping with AI is now essential for effective product development
2. The three key characteristics of AI agents: autonomy (delegation of tasks), complexity (handling multi-step challenges), and natural interaction (conversing beyond simple chat)
3. Why NLX (natural language experience) is the new UX, requiring deliberate design principles for conversational interfaces
4. Why the PM role isn't dying in the AI era—it's evolving to emphasize tastemaking and editing
5. How living “one year in the future” can be operationalized with programs like Microsoft's Frontier
6. How even traditional enterprises can balance cutting-edge AI adoption with appropriate governance through dual-track approaches
7. Insights on leadership differences between Microsoft's Satya Nadella (known for multi-level thinking and early trendspotting) and Google's Sundar Pichai (mastery of complex ecosystems)
8. The vision for human and AI collaboration in the workplace, where people and agents achieve outcomes greater than either could alone
9. A practical framework for evaluating zero-to-one product opportunities

Brought to you by:
• Eppo—Run reliable, impactful experiments
• Pragmatic Institute—Industry‑recognized product, marketing, and AI training and certifications
• Coda—The all-in-one collaborative workspace

Where to find Aparna Chennapragada:
• X: https://x.com/aparnacd
• LinkedIn: https://www.linkedin.com/in/aparnacd/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Aparna Chennapragada
(04:28) Aparna's stand-up comedy journey
(07:29) Transition to Microsoft and enterprise insights
(10:00) The Frontier program and AI integration
(13:28) Understanding AI agents
(17:59) NLX is the new UX
(22:28) The future of product development
(31:16) Building a custom Chrome extension
(35:45) Leadership styles of Satya and Sundar
(37:47) Counterintuitive lessons in product building
(41:20) Inflection points for successful products
(45:16) GitHub Copilot and code generation
(48:34) Excel's enduring success
(50:27) Pivotal career moments
(54:55) The future of human-agent collaboration
(56:25) Lightning round and final thoughts

Referenced:
• Google Lens: https://lens.google/
• Saturday Night Live: https://www.nbc.com/saturday-night-live
• Reid Hoffman on LinkedIn: https://www.linkedin.com/in/reidhoffman/
• Robinhood: https://robinhood.com/
• eBay: https://www.ebay.com/
• Capital One: https://www.capitalone.com/
• Microsoft: https://www.microsoft.com/
• Aparna's LinkedIn post about enterprise vs. consumer: https://www.linkedin.com/posts/aparnacd_every-enterprise-user-feature-has-a-shadow-activity-7321176091610542080-8X-E/
• The Epic Split: https://en.wikipedia.org/wiki/The_Epic_Split
• AI Frontiers: https://www.microsoft.com/en-us/research/lab/ai-frontiers/
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Deepseek: https://www.deepseek.com/
• Satya Nadella on LinkedIn: https://www.linkedin.com/in/satyanadella/
• Tobi Lütke's leadership playbook: Playing infinite games, operating from first principles, and maximizing human potential (founder and CEO of Shopify): https://www.lennysnewsletter.com/p/tobi-lutkes-leadership-playbook
• Tobi Lütke's post on X about reflexive AI: https://x.com/tobi/status/1909251946235437514
• GitHub Copilot: https://github.com/features/copilot
• Sundar Pichai on LinkedIn: https://www.linkedin.com/in/sundarpichai/
• South Park “Underwear Gnomes” episode: https://southpark.cc.com/episodes/13y790/south-park-gnomes-season-2-ep-17
• Google Home: https://home.google.com/welcome/
• Cursor: https://www.cursor.com/
• v0: https://v0.dev/
• Bolt: https://bolt.net/
• Lovable: https://lovable.dev/
• Replit: https://replit.com/
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Microsoft Excel World Championship: https://fmworldcup.com/microsoft-excel-world-championship/
• Google Now: https://en.wikipedia.org/wiki/Google_Now
• Hacks on Max: https://www.max.com/shows/hacks/67e940b7-aab2-46ce-a62b-c7308cde9de7
• Granola: https://www.granola.ai/
• Alan Kay quote: https://www.brainyquote.com/quotes/alan_kay_100831
• Sindhu Vee's website: https://sindhuvee.com/
• Nate Bargatze's website: https://natebargatze.com/

Recommended book:
• A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains: https://www.amazon.com/Brief-History-Intelligence-Evolution-Breakthroughs/dp/0063286351

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.lennysnewsletter.com/subscribe

Lenny's Podcast: Product | Growth | Career
Notion's lost years, its near collapse during Covid, staying small to move fast, the joy and suffering of building horizontal, more | Ivan Zhao (CEO and co-founder)


Mar 6, 2025 · 72:18


Ivan Zhao is the co-founder and CEO of Notion. Ivan shares the untold story of Notion, from nearly running out of database space during Covid to finding product-market fit after several “lost years,” and the hard-won lessons along the way.

What you'll learn:
1. Why you sometimes need to “hide your vision” behind something people actually want—what Ivan calls “sugar-coating the broccoli”
2. How Ivan and his co-founder persevered through multiple product resets and complete code rewrites
3. Why Notion prioritized systems over headcount, keeping the team small and focused even at scale
4. Why Ivan believes in craft and values as the foundation for product development, balancing technical excellence with aesthetic sensibility
5. The surprising story of how Notion nearly collapsed during Covid when their single database almost ran out of space with only weeks to spare
6. Community-led growth tactics
7. Ivan's unique journey from a small town in China
8. Much more

Brought to you by:
• Eppo—Run reliable, impactful experiments
• Airtable ProductCentral—Launch to new heights with a unified system for product development
• Sinch—Build messaging, email, and calling into your product

Find the transcript at: https://www.lennysnewsletter.com/p/inside-notion-ivan-zhao

Where to find Ivan Zhao:
• X: https://x.com/ivanhzhao
• LinkedIn: https://www.linkedin.com/in/ivanhzhao/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Ivan Zhao
(04:41) Ivan's early life and education
(07:46) Discovering the vision for Notion
(10:49) The lost years of Notion
(13:56) Rebuilding and perseverance
(17:14) Layoffs and company morale
(18:53) Advice for startup founders
(25:08) Product-market fit
(29:56) Staying lean and efficient
(34:27) Creating a unique office culture
(37:20) Craft and values: the foundation of Notion's philosophy
(38:44) Navigating tradeoffs in product and business building
(41:24) Leadership and personal growth
(49:11) Challenges and crises: lessons from Notion's journey
(51:08) Building horizontal software: joys and pains
(01:02:40) Philosophy of tools and human potential
(01:06:17) Lightning round and final thoughts

Referenced:
• Ürümqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi
• Notion: https://www.notion.com/
• SpongeBob SquarePants: https://en.wikipedia.org/wiki/SpongeBob_SquarePants
• Augmenting Human Intellect: https://web.stanford.edu/class/history34q/readings/Engelbart/Engelbart_AugmentIntellect.html
• Alan Kay: https://en.wikipedia.org/wiki/Alan_Kay
• Ted Nelson: https://en.wikipedia.org/wiki/Ted_Nelson
• Steve Jobs on Why Computers Are Like a Bicycle for the Mind (1990): https://www.themarginalian.org/2011/12/21/steve-jobs-bicycle-for-the-mind-1990/
• Xerox Alto: https://en.wikipedia.org/wiki/Xerox_Alto
• React: https://react.dev/
• Simon Last on LinkedIn: https://www.linkedin.com/in/simon-last-41404140/
• Magna-Tiles: https://www.magnatiles.com/
• Design on a deadline: How Notion pulled itself back from the brink of failure: https://www.figma.com/blog/design-on-a-deadline-how-notion-pulled-itself-back-from-the-brink-of-failure/
• Bryan Johnson on X: https://x.com/bryan_johnson
• Tobi Lütke's leadership playbook: Playing infinite games, operating from first principles, and maximizing human potential (founder and CEO of Shopify): https://www.lennysnewsletter.com/p/tobi-lutkes-leadership-playbook
• Smalltalk: https://en.wikipedia.org/wiki/Smalltalk
• Lisp: https://en.wikipedia.org/wiki/Lisp_(programming_language)
• DeepSeek: https://www.deepseek.com/
• Shana Fisher: https://www.crunchbase.com/person/shana-fisher
• LAMY 2000 fountain pens: https://www.jetpens.com/LAMY-2000-Fountain-Pens/
• Macintosh 128K: https://en.wikipedia.org/wiki/Macintosh_128K
• Toshiba rice cooker: https://www.toshiba-lifestyle.com/us/cooking-appliances/rice-cooker
• Transistor radio: https://en.wikipedia.org/wiki/Transistor_radio
• Jira: https://www.atlassian.com/software/jira
• Salesforce: https://www.salesforce.com/
• HubSpot: https://www.hubspot.com/
• Zendesk: https://www.zendesk.com/
• Misattributed McLuhan quote: https://mcluhangalaxy.wordpress.com/2013/04/01/we-shape-our-tools-and-thereafter-our-tools-shape-us/
• Phin Barnes on LinkedIn: https://www.linkedin.com/in/phineasbarnes/
• Hacker News: https://news.ycombinator.com/
• Pablo Picasso quote: https://www.goodreads.com/quotes/629531-good-artists-copy-great-artists-steal
• Connections with James Burke on Prime Video: https://www.amazon.com/gp/video/detail/amzn1.dv.gti.484e32c5-60bd-4493-a800-e44fd0940312
• The Enneagram Institute: https://www.enneagraminstitute.com/

Recommended book:
• The Romance of the Three Kingdoms: https://www.amazon.com/Romance-Three-Kingdoms-Luo-Guanzhong/dp/024133277X

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Hiçbir Şey Tesadüf Değil
Silikon Tanrıları | Bilgisayarların Hikayesi - Bölüm 2


Feb 13, 2025 · 23:42


When the topic is the history of computers, we always find ourselves in the middle of a contest over who was truly first. But outside those fights there are also giants competing to outdo one another, and they are the ones who make the real difference. Thanks to them, the whole world is now at our fingertips “with a single click.” In this second part of the history of computers, we talk about how we arrived at that one “click.” We witness the evolution of the technology that now fits in our pockets. And by the end of the episode, one thing becomes very clear: making everything this easy was not easy at all.

With the digital health platform Eczacıbaşı Evital, you can meet online with expert psychologists and dietitians from anywhere with an internet connection. In addition to a free introductory consultation, use the code PODBEE25 to book your sessions at a 25% discount. Click to try Evital now. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

iSenaCode Live
#365 ¡Apple M5 en el horno!, OpenAI responde con o3mini y más noticias de Apple


Feb 6, 2025 · 82:46


In this episode of iSenaCode Live, we break down the first rumors about the Apple M5 and what might ship alongside this powerhouse in the making. Meanwhile, OpenAI surprises everyone with the launch of o3mini, a new model that aims to overshadow DeepSeek. We also run through the latest Apple news and other tech stories you won't want to miss.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper


Dec 2, 2024 · 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat. There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation. But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow). All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc.

Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!

Flow Engineering + Qodo/AlphaCodium Update

In year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago. Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency.

With a simple problem solving code agent:
* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.
* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output.
* The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness.
* Then, it generates more diverse tests for the problem, covering cases not part of the original public tests.
* Iteratively, pick a solution, generate the code, and run it on a few test cases.
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to Navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my bolt launch story and now if I say all that stuff.swyx: And I just wanted to come back to, like, the Webcontainers things, right? Like, I think you put a lot of weight on the technical modes. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on products. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has Bax E2B, which we'll have on at some point, talking about like the sort of the server full side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully gentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in bold, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in web container, you know, there's a lot of kind of stuff you go Google like, you know, turn docker container into wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker 2 awesome things will give you an image that's like out 60 to 100 megabits, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically the task involved is I understand that it's. Mapping every single, single Linux call to some kind of web, web assembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know audio drivers or you like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes . Yeah. You can just kind, you can, you can kind of tos them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
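To make the service-worker point above concrete, here is a minimal sketch of the primitive Eric is describing: a service worker that answers requests with code running inside the browser tab. This is an illustration only, not StackBlitz's WebContainer implementation (which layers a WASM-based OS and a Node.js runtime on top); the sw.ts file name and the /app/ path prefix are assumptions made for the example.

```ts
// sw.ts — compile to sw.js and register from the page with:
//   navigator.serviceWorker.register('/sw.js');
// A minimal "web server inside a browser tab" sketch (illustrative only).

/// <reference lib="webworker" />
export {}; // treat this file as a module so the declaration below stays scoped
declare const self: ServiceWorkerGlobalScope;

self.addEventListener('install', () => {
  self.skipWaiting(); // take over as soon as the worker is installed
});

self.addEventListener('activate', (event) => {
  event.waitUntil(self.clients.claim()); // control already-open pages
});

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Requests under the illustrative /app/ prefix never hit the network:
  // they are answered by code running entirely inside the browser.
  if (url.pathname.startsWith('/app/')) {
    event.respondWith(
      new Response(`<h1>Served from inside this tab: ${url.pathname}</h1>`, {
        headers: { 'Content-Type': 'text/html' },
      })
    );
  }
  // Everything else falls through to the normal network path.
});
```

Registering a worker like this gives a page a URL space it controls programmatically, which is the "the web can build the web" capability the conversation keeps returning to.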
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
Swyx [00:33:02]: I'm interested in these learnings, like things that you changed your mind on.

Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about whether it's a problem, because when I think about Bolt, I do think about a couple of companies that are already called this way.

Swyx [00:33:19]: Curse companies. You could call it Codium just to...

Itamar [00:33:24]: Okay, thank you. Touché. Touché.

Eric [00:33:27]: Yeah, you've got to imagine the board meeting before we launched Bolt. One of our investors, you can imagine they're like, are you sure? Because from the investment side, there's a kind of famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.

Itamar [00:33:43]: At this point, we actually have four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like, dedicated for code?

Swyx [00:34:04]: There's code indexing, and then you can do sort of like HyDE for code. And then you can embed the descriptions of the code.

Itamar [00:34:12]: Yeah, but you do see a lot of types of models that are dedicated for embedding in different spaces, different fields, etc. And I'm not aware of one. And I know that if you go to Bedrock and try to find one, there are a few code embedding models, but none of them are specialized for code.

Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?

Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 options of deployment, so I'm closing the brackets for us. One dimension is what Git deployment you have, what models you agree to use. Another could be whether it's completely air-gapped, or you want VPC, and then you have Azure, GCP, and AWS, which are different. Do you use Kubernetes or not? Because we want to exploit that, and there are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with Fortune 500 enterprises, we needed to deal with that. So you asked me how complicated it is to solve that complex code — I said, that's just the deployment part. And now to the software: we see a lot of different challenges. For example, some companies actually did a good job of building a lot of microservices. Let's not get into whether that's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has its own repo. And now you have tens of thousands of repos. And you as a developer want to develop something, and I remember coming to a corporation for the first time: I didn't know where to look, where to find things. So just doing good indexing for that is a challenge. And moreover, the regular indexing, the kind you can find off the shelf — we wrote a few blogs on that, and by the way, we also have some open source, different than yours, but actually three projects and growing — it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, mark different repos with different colors: this is a high quality repo, this is a lower quality repo, this is a repo that we want to deprecate, this is a repo we want to grow, etc. And let that be part of your indexing. And only then do things actually work for enterprise, and they don't get to a fatigue of, oh, this is awesome — oh, but it's starting to annoy me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot in enterprise. Ooh, spicy. Yeah. I saw snapshots of people, and we have customers that are Copilot users as well. And also I saw research — some of it is public, by the way — between 38 to 50% retention for users using Copilot in enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason: yeah, it helps you autocomplete, especially if you're working on your repo alone, but if it needs the context of remote repos in your codebase, that's hard. So to make things work, there's a lot of work on that, like giving controllability to the tech leads, to the developer platform or developer experience department in the organization, to influence how things are working. A short example: if you have really old legacy code, probably some of it is not so good anymore. If you just fine-tune on that codebase, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in our product, you can have a markdown of best practices from the tech leads, and it will include that, relate to that, and will not offer suggestions that go against those best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software.

Eric [00:38:32]: I just want to say, what you're doing is extremely impressive, because it's very difficult. I mean, in the business of StackBlitz, kind of before Bolt came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck, I mean, that is extremely hard. I guess the question I have for you is, we were just doing that with kind of Kubernetes-based stuff, but across the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, cloud stuff? Or are they just running stuff on GPUs? What is that? How are these folks approaching that? Because, man, what we saw on the enterprise side — I've got to imagine that that's a huge challenge.

Itamar [00:39:15]: Everything you said and more. For example — and I don't think any of these is bad, they made their decision — some people say, I want only AWS, and VPC on AWS, no matter what. And then some of them, there is a subset, I will say, are willing to take models only from Bedrock and not ours. And we have a problem, because there is no good code embedding model on Bedrock, and that's part of what we're doing now with AWS, to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your models on GPUs or Inferentia, like the new version that's coming out, then our models can run on that. But everything you said is right. We see on-prem deployment where they have their own GPUs. We see Azure, where you're using Azure OpenAI.
We see cases where you're running on GCP and they want OpenAI. Like these cross cases, although there is Gemini, and even Sonnet, I think, is available on GCP — just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually get that matrix I mentioned, to start clicking each one of the blocks there.

Eric [00:40:35]: A few months is impressive. I mean, honestly, that's okay — every one of these enterprises, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That is extremely impressive. Hats off.

Itamar [00:40:50]: It could be, by the way, for example: oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need a private link or whatever — every time something like that. And you do need to think about it if you want to work with an enterprise. And it's important. I understand, I respect their point of view.

Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like, you can't choose some vendors because...

Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for us as a startup, because it means that we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our AlphaCodium, which is open source.

Eric [00:41:33]: We've got to go over that. Yeah. So I'll do that quickly.

Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, okay.

Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...

Itamar [00:41:43]: Yeah. So, just shortly, and then we can double-click on AlphaCodium. AlphaCodium is an open source tool. You can go and try it, and it lets you compete on Codeforces — this is a website and a competition — and actually reach a master level, like 95%, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it into different, smaller blocks. And then the models do a much better job. We all know it by now, that taking small tasks and solving them works better — by the way, even O1, which is supposed to be able to do system-two thinking, as Greg from OpenAI hinted, does better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented just a month ago. OpenAI released that they are now doing 93rd percentile with O1 on the IOI, the International Olympiad in Informatics — sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with AlphaCodium and did better. It just shows — and there is a big difference between the preview and the IOI one — it shows that these models are still not system-two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can have that — I thought about it. You know, I care about this philosophy stuff, and I think we didn't see anything even close to system-two thinking. I can elaborate later. But closing the brackets: we take AlphaCodium as our principle of thinking — we take tasks and break them down into smaller tasks. And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and Sonnet and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have everything air-gapped or whatever. So that's a challenge: now you need to support so many models. And to some extent, I would say that flow engineering, the breaking down into different blocks, is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks — we can talk about how we do that — then the prompt matters less. What I want to say is, all of this — as a startup trying to do different deployments, getting all the juice you can get from models, etc. — is a big problem, and one needs to think about it. And one of our mitigations is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source, so you can see.

Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt — I do believe that makes sense. I feel like there's a lot that both of you can exchange notes on about breaking down problems. And I just want you guys to go for it. This is fun to watch.

Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in, because for us too with Bolt, we've started thinking about this, because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because there's not a lot of prior art, right? This is all new. This is all new. So I definitely am going to have a lot of questions for you.

Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper, a blog, or whatever.

Swyx [00:45:22]: The AlphaCodium GitHub — and we'll put all this in the show notes.

Itamar [00:45:25]: Yeah. And even the new results with O1, we published them.

Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is just, you have these companies that keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, there's true merit, and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.

Itamar [00:46:02]: I have something to share about the open source. Most of our tools — we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case there is, in my opinion, a relatively good strategy, where a lot of the parts are open source, but then you have the deployment and the environment, which is not, if I get it correctly. And then there's a clear, almost Hugging Face model: yeah, you can do that yourself, but why should you try to deploy it yourself — deploy it with us. But in our case — and I'm not sure you're not going to hit some competitors as well, and I guess you are.
I wanted to ask you, for example, about some of them. In our case, one day we looked at one of our competitors that is doing code review. We're a platform: we have the code review, the testing, et cetera, spread from the IDE to Git. And for each agent, there are a few startups or big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good, we don't use enough Grammarly or whatever. And we had a couple of these, and we saw it there. And then it's a challenge. And I want to ask you: Bolt is doing so well, and then you open source it. I think I know what my answer was — I gave it before — but it's still interesting to hear what you think.

Eric [00:47:29]: GeoHot — I don't know what he was up to at this exact moment, but I think it was on Comma AI, where all that stuff's open source — someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably butchering it, but I thought it was a really good point. And that's not to say that you should just open source everything, because for obvious reasons there are strategic things you have to take into account. But I actually think a pretty liberal approach, as liberal as you can be, can really make a lot of sense. Because it is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just tweak the styles. I mean, clearly, right? I think it's kind of healthy, because I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort — I think a lot of companies have this period of comfort where they're not feeling the competition — and one day they get disrupted. So putting stuff out there and letting people push it forces you to face reality sooner, right? And actually feel it incrementally, so you can adjust course. And for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff had landed in the open source versions, and they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great, because the folks in the community did a great job and kept us on our toes. And we've gotten to know most of the folks who have been building these things at this point. And so it actually was very instructive: okay, well, if we're going to go land this, there are some UX patterns we can look at, and the code for this stuff is open source — what's great about these, what's not. So anyways, net-net, I think it's awesome. I think from a competitive point of view, for us, what's interesting in particular is the core technology of WebContainer. And I think that right now, there's really nothing that's on par with that. And we also have a business of — because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. You have to make CORS-bypass requests, like connecting to databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced the core components of Bolt when we launched was that we think there are going to be a lot more of these in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm, and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, lots of things being announced, with folks adopting it. But yeah, I think effectively...

Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.

Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI felt that people were not using their model as they would want to, so they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still — is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...

Swyx [00:51:16]: What's your advice as a founder?

Eric [00:51:18]: You're right. And so going into it, we, candidly, were like, Bolt.new, this thing is super cool. We think people will be stoked. But we were like, maybe that's about it. Best case scenario, after month one, we'd be mind-blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business, because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But what's happened with Bolt — and you're right, it's actually the same strategy as OpenAI or Anthropic: ChatGPT is to OpenAI's APIs as Bolt is to WebContainer. And so we've taken that same approach. And we're seeing, I guess, some of the similar results, except right now the revenue side is extremely lopsided toward Bolt.

Itamar [00:52:16]: I think if you ask me what's my advice: I think you have three options. One is to focus on Bolt. The other is to focus on WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money — and I think it's big bucks, because you're going to be chased by competitors — I think it will be challenging to do both. And maybe you can, I don't know. We do see these numbers right now, companies raising above $100 million, even without having a product. You can see these.

Eric [00:52:49]: It's excellent advice. And I think what's been amazing, but also kind of challenging, is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think for us and all the investors in the company that we were sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now.
Most things, you have this kind of TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se
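A recurring theme in the excerpt above is what Itamar calls flow engineering: instead of handing a model one big problem behind one carefully tuned prompt, break the problem into small, checkable steps, route each step to a suitable model, and verify along the way, so the exact prompt matters less. The real thing is in the open-source AlphaCodium repository; the sketch below is only a schematic TypeScript outline of that idea, with invented names such as `callModel` and `runTests`.

```typescript
// Schematic flow-engineering pipeline: decompose, solve step by step, verify.
// `callModel`, `runTests`, and the step schema are invented for illustration.

interface Step {
  description: string;
  model: "small-fast" | "large-reasoning";
}

declare function callModel(model: Step["model"], prompt: string): Promise<string>;
declare function runTests(candidate: string): Promise<boolean>;

async function solve(problem: string): Promise<string | null> {
  // 1. Decompose the big problem into small, checkable steps.
  const plan: Step[] = JSON.parse(
    await callModel("large-reasoning", `Break this into small steps as JSON: ${problem}`)
  );

  // 2. Solve each step with whichever model suits it, carrying context forward.
  let context = problem;
  for (const step of plan) {
    context += "\n" + (await callModel(step.model, `${context}\n\nNow do: ${step.description}`));
  }

  // 3. Verify the final candidate; a real flow would retry or re-plan on failure.
  const candidate = await callModel("large-reasoning", `Produce the final code:\n${context}`);
  return (await runTests(candidate)) ? candidate : null;
}
```

Because each step is small, the same flow can be pointed at O1, Sonnet, Gemini, or a self-hosted model without re-tuning one monolithic prompt, which is the point Itamar makes about supporting many models.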
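Itamar also describes letting tech leads label repositories (high quality, lower quality, deprecated) and having those labels influence indexing and retrieval. One plausible way such a signal could bias ranking is sketched below; the tier names and weights are assumptions for illustration, not how his product actually implements it.

```typescript
// Hypothetical re-ranking of retrieved code chunks by a tech-lead-assigned
// repository tier. Tiers and weights are illustrative assumptions.

type RepoTier = "high" | "normal" | "low" | "deprecated";

interface Chunk {
  repo: string;
  text: string;
  similarity: number; // cosine similarity from the embedding index
}

const tierWeight: Record<RepoTier, number> = {
  high: 1.2,
  normal: 1.0,
  low: 0.7,
  deprecated: 0.0, // never surface suggestions from deprecated code
};

function rerank(chunks: Chunk[], repoTiers: Map<string, RepoTier>): Chunk[] {
  return chunks
    .map(c => ({ ...c, score: c.similarity * tierWeight[repoTiers.get(c.repo) ?? "normal"] }))
    .filter(c => c.score > 0)
    .sort((a, b) => b.score - a.score);
}
```

The same idea extends to the best-practices markdown Itamar mentions: treat it as an always-included, top-tier document so suggestions are checked against it before they are offered.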

javaswag
#71 - Aleksey Zhidkov - the ergonomic approach and architecture decomposition

javaswag

Play Episode Listen Later Nov 12, 2024 108:51


In episode 71 of the Javaswag podcast we talked with Aleksey Zhidkov about the ergonomic approach to designing a project's architecture.
00:00 Start
12:06 Working as a consultant
17:38 The ergonomic approach and its principles
26:44 Applying the development principles in practice
30:55 Difficulties of adopting DDD in practice
37:15 The popularity of DDD and its real effectiveness
39:33 TDD and its place in the ergonomic approach
41:00 Testing as the foundation of development
43:55 Problems with mocks in testing
48:50 Architectural approaches and JPA
51:01 Functional architecture and its impact on development
55:36 Problems with ORMs and Hibernate
01:00:03 Ergonomics and alternatives to ORMs
01:01:53 An immutable data model
01:05:58 The ergonomic approach in development
01:08:32 Discussing the tech stack and its evolution
01:11:21 The ergonomic approach in project development
01:17:14 Problems of object-oriented programming
01:20:56 System decomposition and API design
01:22:38 Testing and TDD-style development
01:27:24 The economics of ergonomic architecture
01:30:59 Elements of the ergonomic approach
01:40:15 Multithreading problems
01:42:58 An unpopular opinion
Guest - https://t.me/ergonomic_code
Links:
Telegram channel: https://t.me/ergonomic_code
Aleksey's website: https://azhidkov.pro/
The Many Faces of the Single Responsibility Principle - my breakdown of the formulations and interpretations of the Single Responsibility Principle given by Uncle Bob himself.
FizzBuzz Enterprise Edition - an example of taking the Open-Closed Principle to the point of absurdity.
SOLID Deconstruction - Kevlin Henney - from 28:23 the speaker argues that the Liskov Substitution Principle is nonsense: to comply with it you cannot override methods, only add new ones that the client program knows nothing about.
Domain-Driven Design: Tackling Complexity in the Heart of Software - the classic DDD book.
Unit Testing Principles, Practices, and Patterns - the best book on backend testing available today.
Vladimir Khorikov's website.
REHEARSAL: "Structural Design. The Ancient Secret of Simple and Fast Code" - a rehearsal of my talk for Joker '24.
REHEARSAL: "Functional Architecture and Spring Data JDBC. Four Years in Production, Going Great" - a rehearsal of my second talk for Joker '24, which in the end became a lightning talk.
Why is Java making so many things immutable? - a post on Oracle's blog where the author writes, "folks, don't worry, the GC is tuned for fast object allocation".
Trainer Advisor - a real project built with the ergonomic approach.
The effects diagram - the diagram I use to decompose the core/domain/model (entities and integrations) into modules.
Aleksey Zhidkov — "A Rational Approach to Decomposing Systems into Modules or Microservices" - my JPoint '23 talk with an algorithm for decomposing the effects diagram.
Lean Architecture: for Agile Software Development
The Transformation Priority Premise, Code That Fits in Your Head - how to get out of the dead end where an OK 200 is hardcoded in production code and the test is green.
"How I Turned a Legacy Project into a Gem in Six Months. Volume 1" - my post about how I reworked a project using the ergonomic approach and made the team three times faster.
The Cognitive Complexity metric, or a simple way to measure code complexity - the best alternative to cyclomatic complexity.
Code Complexity - a plugin for IDEA that shows cognitive complexity right in the editor.
Alan Kay at OOPSLA 1997 - The computer revolution hasn't happened yet - Alan Kay says he did not have C++ in mind when he coined the term OOP.
Dr. Alan Kay on the Meaning of "Object-Oriented Programming" - Alan Kay explains what he actually meant by OOP.
Keep safe!

ANTIC The Atari 8-bit Podcast
ANTIC Interview 442 - Bob Stein, Atari Research

ANTIC The Atari 8-bit Podcast

Play Episode Listen Later Nov 1, 2024 56:48


Bob Stein, Atari's Encyclopedia Project   Bob Stein worked at Atari Research for 18 months beginning in 1981. He was hired by Alan Kay. He worked almost exclusively on an encyclopedia project, a potential collaboration between Atari and Encyclopaedia Britannica that never went anywhere.   I learned about Bob after he uploaded an item called The Atari Drawings to Internet Archive. It's a collection of nine colorful pencil drawings, drawn in 1982 by Disney animator Glen Keane. The drawings depict futuristic scenarios where people use a computerized encyclopedia to get information: for instance, "An earthquake wakes a couple in the middle of the night. The Intelligent Encyclopedia, connected to an online service, informs them of the severity of the earthquake and makes safety tips readily available." and "A mother and her children looking into a tidepool in Laguna ask the Intelligent Encyclopedia about the plants and animals that they see."   Bob described the collection of art in his introduction to the document:   "In 1982 executives from Warner, Inc., Atari's parent company, were scheduled to visit the Research Lab where the Encyclopedia Project was located. Brenda Laurel and I came up with these scenarios to give the execs a sense of what we were working toward. The drawings were made by Disney animator, Glen Keane.   When you look at these, remember they were made 16 years before Google and 12 years before Yahoo, even 8 years before the earliest web-based search engines.   That said, one of the most interesting things about these scenarios as seen today, is that with the exception of the image of the architect and the teacher none of them indicated any inkling that the most important element of the web to come was that it would bring people into contact with each other. What we see here is almost entirely people accessing content from a central server, no sense that we would be communicating with each other or uploading our own contributions to the collective culture. My own explanation for this lapse focuses on the print-era mentality that saw readers purely as consumers of content."   Bob saved and scanned a large number of materials from his time at Atari, and uploaded them to Internet Archive. In addition to the scans of Keane's Atari Drawings, the documents include memos about the encyclopedia project and a transcript of a 1982 seminar for Atari Research featuring Charles Van Doren. Check the show notes for those links.   After Atari, Bob was co-founder of The Criterion Collection, which restores and distributes important classic films; and co-founder of The Voyager Company, the first commercial multimedia CD-ROM publisher. In 2004, he co-founded The Institute for the Future of the Book, a think tank "investigating the evolution of discourse as it shifts from printed pages to networked screens."   This interview took place December 16, 2023.   Video version of this interview at YouTube   The Atari Drawings   ANTIC Interview 420 - Brenda Laurel, Atari Research   Whither The Encyclopedia Project - Atari Encyclopedia Project memos   Back to the Future -- In honor of Encyclopedia Britannica giving up its print edition (Wayback machine)   Stein Kay Atari Memos Pt 1   Stein Kay Atari Memos Pt 2   Exchange With Steve Weyer And J. David Bolter 1983   Hadley Letter 1980-12-01   Atari...Ifugao Question Journal, Michael Naimark   CVD Atari Seminar 20 December 1982   Encyclopedia And The Intellectual Tools Of The Future . . . 
November 1981   Bob Stein Archives at Stanford   The Digital Antiquarian — Bob Stein and Voyager   Charles Van Doren in Wikipedia   Bob Stein wants to change how people think about the book (2010)

The .NET Core Podcast
CQRS, System Maintainability, and Pragmatic Tech Choices with Dejan Miličić

The .NET Core Podcast

Play Episode Listen Later Sep 27, 2024 64:59


Metalama This episode of The Modern .NET Show is supported, in part, by Metalama, reduce your boilerplate code by up to 15% with Metalama's C#-to-C# template engine and reduce your code complexity today! Show Notes "Essentially, when you look at it, what we are doing, we are not building software. We are changing it, we are maintaining it. Because when you look at the typical lifecycle of the application, enterprise one I would say, or any big system where you invest lots of time lots of money building it, you want to exploit this for 5-10-15 years hopefully. So when you look at the percentages, you're building something for one year and then you are using it for nine more years, let's say. It's a 10 year life cycle. So when you think about it, you spend 10 times... er, 10 percent of your time building something and 90 percent of the lifecycle of the application or the system responding to change requests, building new things, changing, adapting, and maintaining. So essentially, our job is not to build out. Our job is to sustain all these request changes. I think that's the first point we need to clear."— Dejan Miličić Welcome friends to The Modern .NET Show; the premier .NET podcast, focussing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor. In this episode, Dejan Miličić joined us to talk about the CQRS pattern, how it came from CQS, what CQS is and how it's related to Alan Kay's original ideas for object-oriented programming, being pragmatic as developers and engineers, the importance of system design and system's thinking, and how we all need to realise that our software lives on for years after we've pushed our changes to the repo. "So I, indeed, agree with you that people should pay more attention to system design. Start looking at the whole picture. And the extreme of this thinking would be: okay you will go into job interview, they will ask you about, I don't know, quick sort you, will implement it on the whiteboard, and then six months later on you will go and purchase or maybe download a NuGet package with with a quick sort implementation, because you know it makes sense; you don't want to implement something that have been implemented this many times before."— Dejan Miličić Whilst this episode doesn't focus on .NET per se, I think that the subjects that Dejan and I covered are incredibly important for anyone who wants to seriously level up their game. In fact, Dejan proved this point when he said: "We [are] all, like, containerised into our own small silos and working on our own without being aware of the whole system. When you look at what people go through when they prepare for job interviews at, I don't you know, Google, Facebook, nowadays Meta, Microsoft, you have all these books on the system design. And then they go, instead of, you know, going to the job interview after 20 years of experience and talking about what you do and what you know, people with 20 years of experience still sit down and learn and prepare for the system design interview. I'm not saying they shouldn't be preparing, but some of my colleagues told me that that was the first time in their careers that they started looking at the system design as a discipline."— Dejan Miličić Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET. 
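Since the episode centres on CQS and CQRS, here is the idea in miniature: CQS says a method is either a command (changes state, returns nothing) or a query (returns data, changes nothing), and CQRS lifts that split to the architecture level with separate write and read models. The sketch below is a generic illustration written in TypeScript rather than C#; the names are made up and nothing in it is taken from the episode.

```typescript
// Command–Query Separation in miniature: commands mutate, queries observe.
// Names are illustrative; this is not code from the episode.

interface Account {
  id: string;
  balance: number;
}

const accounts = new Map<string, Account>();

// Command: changes state, returns nothing.
function deposit(accountId: string, amount: number): void {
  const account = accounts.get(accountId);
  if (!account) throw new Error(`unknown account ${accountId}`);
  account.balance += amount;
}

// Query: returns data, never changes state.
function getBalance(accountId: string): number {
  return accounts.get(accountId)?.balance ?? 0;
}

// CQRS takes the same split further: commands go to a write model, while
// queries are served from a separately optimised read model.
```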
Supporting the Show If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show. Full Show Notes The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/cqrs-system-maintainability-and-pragmatic-tech-choices-with-dejan-milicic/ Useful Links Dejan on Twitter: x.com/dejanmilicic Dejan on LinkedIn: linkedin.com/in/dejanmilicic Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show Editing and post-production services for this episode were provided (in part) by MB Podcast Services Supporting the show: Leave a rating or review Buy the show a coffee Become a patron Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts, this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Learn English Through Listening
What3words App-How It Can Change Your Life-English Listening Practice Ep 767

Learn English Through Listening

Play Episode Listen Later Aug 5, 2024 8:56


What3words App: Imagine three simple words saving a life and sharpening your English listening practice skills at the same time. Intrigued? Discover how a clever app could transform emergencies and boost your English language fluency. Click play and start listening to the story behind 3 simple life-saving words and improve your English comprehension skills (https://adeptenglish.com/company/learning-system/) the fun way. An English lesson (https://adeptenglish.com/lessons/) you will want to listen to:

Empowerment Minutes
The Future of Agile Leadership-Trends and Predictions with Dr Maurine Kwende

Empowerment Minutes

Play Episode Listen Later Jul 30, 2024 7:51


Are you prepared for the future of leadership? Tune into our latest podcast episode, where we explore the evolving leadership landscape in the modern world. In this episode, we uncover how cutting-edge technologies are transforming agile practices, why cultural shifts matter, and more. "The best way to predict the future is to invent it." – Alan Kay. Agile leaders don't just adapt to changes; they drive them. Tune in now to gain insights that will prepare you to lead with agility and innovation. Available on Amazon Music, iTunes, Podbean, Spotify, YouTube Music, or your favorite streaming service!

Beginner's Mind
#135: Empowering Innovators: 10 Lessons from The Innovators by Walter Isaacson

Beginner's Mind

Play Episode Listen Later Jul 4, 2024 78:56


Ever wondered how the digital revolution came to be? Was it the work of lone geniuses, or was there something more at play? In this episode, we delve into Walter Isaacson's "The Innovators," uncovering the collaborative efforts and key principles that have shaped our technological landscape.

In the world of investing and entrepreneurship, building a multidisciplinary mental model is key to success. "The Innovators" reveals that diverse, collaborative teams have historically been the driving force behind groundbreaking solutions.

In my martial arts days, a coach taught me to study exceptional role models – a strategy akin to the famous Harvard Business Cases. Analyze success, discover core principles, then adapt them to your own unique path. After all, as Bruce Lee said, "Absorb what is useful, reject what is useless, and add what is essentially your own." That's the lens I bring to biographies – extracting valuable lessons amidst the complexities of each individual story.

This episode dissects 10 crucial tools for fostering innovation, drawing on stories from the book and my own experiences as an entrepreneur and investor. From visionary thinking and customer-centricity to the power of persistence and collaboration, we'll explore the strategies that can help you identify winning teams and create an environment where innovation thrives.

Book on Amazon: [Link to Amazon]

Problems This Solves:
Overwhelmed by history books? This concise summary delivers the most relevant insights for entrepreneurs and investors.
Unsure how to apply innovation principles? We'll provide actionable takeaways and reflection questions.
Curious about the minds behind the digital age? Gain insights into the collaborative spirit that drives technological progress.

Why Listen:
Discover the 10 tools for innovation: Uncover the strategies that have fueled successful collaborations and groundbreaking technologies.
Learn from real-world examples: Hear stories from the book and my own experiences that illustrate these principles in action.
Apply the lessons to your own ventures: Reflect on how you can foster innovation and build winning teams in your own organization.

Quotes:
"Creativity is a collaborative process. Innovation comes from teams more often than from the lightbulb moments of lone geniuses." - Walter Isaacson
"The best way to predict the future is to invent it." - Alan Kay (as quoted in "The Innovators")

Timestamps:
(00:00) Intro
(04:18) Walter Isaacson
(08:21) Overview of the Book
(12:28) Tool #1: Ada Lovelace and the Power of Visionary Thinking
(18:01) Tool #2: Collaborative Teamwork
(23:13) Tool #3: Craftsmanship
(29:00) Tool #4: The Entrepreneurial Spirit and Culture of Innovation
(35:32) Tool #5: Leadership that Breeds Innovation
(42:13) Tool #6: Persistent Innovation
(47:19) Tool #7: Public Awareness and Advocacy
(53:48) Tool #8: Customer Centricity
(58:51) Tool #9: Technicians Collaborating with Business People
(01:03:20) Tool #10: Building Collaborative Ecosystems
(01:07:55) Key Takeaways
(01:14:00) Tl;dr Episode Summary

Send us a Text Message. Support the Show. Join the Podcast Newsletter: Link

Video Game Newsroom Time Machine

Acorn and Commodore soar, as Coleco and Atari falter Software takes center stage in PC clone world UK game prices hit rock bottom   These stories and many more on this episode of the VGNRTM!   This episode we will look back at the biggest stories in and around the video game industry in April 1984. As always, we'll mostly be using magazine cover dates, and those are of course always a bit behind the actual events.   Alex Smith of They Create Worlds is our cohost.  Check out his podcast here: https://www.theycreateworlds.com/ and order his book here: https://www.theycreateworlds.com/book   Get us on your mobile device: Android:  https://www.google.com/podcasts?feed=aHR0cHM6Ly92aWRlb2dhbWVuZXdzcm9vbXRpbWVtYWNoaW5lLmxpYnN5bi5jb20vcnNz iOS:  https://podcasts.apple.com/de/podcast/video-game-newsroom-time-machine   And if you like what we are doing here at the podcast, don't forget to like us on your podcasting app of choice, YouTube, and/or support us on patreon! https://www.patreon.com/VGNRTM   Send comments on Mastodon @videogamenewsroomtimemachine@oldbytes.space Or twitter @videogamenewsr2 Or Instagram https://www.instagram.com/vgnrtm Or videogamenewsroomtimemachine@gmail.com   Links: If you don't see all the links, find them here: https://www.patreon.com/posts/106448718   7 Minutes in Heaven: Atic Atac Video Version:  https://www.patreon.com/posts/7-minutes-in-106323507     https://en.wikipedia.org/wiki/Atic_Atac     https://archive.org/details/popular-computing-weekly-1984-04-26/page/n1/mode/1up   Corrections: March 1984 Ep - https://www.patreon.com/posts/march-1984-104469980 Ethan's fine site The History of How We Play: https://thehistoryofhowweplay.wordpress.com/        1954     Kefauver hearing puts pressure on comics industry     https://www.nytimes.com/1954/04/20/archives/comicbook-hearing-to-start-tomorrow.html     https://www.nytimes.com/1954/04/23/archives/senator-charges-deceit-on-comics-kefauver-says-child-study-groups.html         https://en.wikipedia.org/wiki/Estes_Kefauver   1964     IBM announces the 360     https://x.com/kenshirriff/status/1777022892477239724     https://en.wikipedia.org/wiki/IBM_System/360   1974     Sega becomes American https://archive.org/details/cashbox35unse_41/page/50/mode/1up?view=theater     https://podcast.theycreateworlds.com/e/the-saga-of-sega/   Mirco Games brings freeplay to video     https://archive.org/details/cashbox35unse_42/page/50/mode/1up?view=theater      Sam Stern urges industry to go to 25 cent play     https://archive.org/details/cashbox35unse_43/page/35/mode/1up?view=theater   David Gottlieb, RIP     https://archive.org/details/cashbox35unse_43/page/35/mode/1up?view=theater   1984 Atari losses fall     Post Net Income of $30.9 Million, The Associated Press, April 19, 1984, Thursday, AM cycle, Section: Business News       Newsbytes, April 17, 1984, Atari in Trouble Again     https://www.nytimes.com/1984/04/19/business/warner-severs-tie-with-ladd-warner-communications.html   Banks cut Coleco credit line     https://www.nytimes.com/1984/04/03/business/coleco-says-banks-cut-its-credit.html?searchResultPosition=1     Coleco reports rebound from last quarter's loss, United Press International, April 17, 1984, Tuesday, BC cycle   Commodore sales skyrocket!     https://www.nytimes.com/1984/04/27/business/profit-off-at-xerox-sperry-up.html     Toy & Hobby World, April 1984 pg. 12   Acorn revenues soar!     
https://archive.org/details/AcornUser021-Apr84/page/n8/mode/1up   TI turn-around unprecedented     Newsbytes, April 17, 1984, Texas Instruments Update   Milton Bradley returns to profitability     Milton Bradley Shows Profit in First Quarter, The Associated Press, April 20, 1984, Friday, BC cycle, Section: Business News     Toy Maker Talking About 'Business Combination' With Another Company, The Associated Press, April 26, 1984, Thursday, AM cycle, Section: Business News, Dateline: SPRINGFIELD, Mass.   Pizza Time Losses continue to mount     Newsbytes, April 17, 1984, In Brief   Atari closes last California factory     Newsbytes, April 3, 1984, Atari Lay-Off        https://www.nytimes.com/1984/04/07/business/atari-to-cut-550-jobs-at-2-plants.html      Alan Kay leaves Atari     Newsbytes, April 10, 1984, A week of resignations      5200 Software drought     https://archive.org/details/computer-entertainer-3-1/page/12/mode/1up        https://archive.org/details/computer-entertainer-3-1/page/15/mode/1up   German court squashe Unimex duplicator     https://binarium.de/unimex_duplicator_sp280         https://archive.org/details/happycomputer-magazine-1984-04/page/n7/mode/1up   Sinclair misses another QL deadline     https://archive.org/details/popular-computing-weekly-1984-04-05/page/n4/mode/1up?view=theater       https://archive.org/details/PersonalComputerNews/PersonalComputerNews057-14Apr1984/page/n3/mode/1up?view=theater     https://archive.org/details/PersonalComputerNews/PersonalComputerNews056-07Apr1984/page/n5/mode/1up   Sinclair finds QL workaround!     https://archive.org/details/PersonalComputerNews/PersonalComputerNews059-28Apr1984/page/n3/mode/1up?view=theater       https://archive.org/details/popular-computing-weekly-1984-04-26/page/n4/mode/1up?view=theater   Hannover Fair sees flood of PCs     https://archive.org/details/PersonalComputerNews/PersonalComputerNews058-21Apr1984/page/n4/mode/1up?view=theater   Commodore unveils PC clone     Newsbytes, April 3, 1984, Speaking of Clones        https://www.nytimes.com/1984/04/05/business/2-machines-challenge-ibm.html   IBM looks to shut down clones     Newsbytes, April 3, 1984, More IBM, by Paul Richter     https://archive.org/details/PersonalComputerNews/PersonalComputerNews056-07Apr1984/page/n4/mode/1up?view=theater   IBM buys  additional Intel shares     Newsbytes, April 3, 1984, In Brief   PCJr sales still dismal     https://archive.org/details/computer-entertainer-3-1/page/10/mode/1up?view=theater     https://archive.org/details/PersonalComputerNews/PersonalComputerNews059-28Apr1984/page/n5/mode/1up?view=theater   Apple ][ forever!     Newsbytes, April 10, 1984, Apple II Forever        https://apple.fandom.com/wiki/Apple_II_Forever       Newsbytes, April 17, 1984, Mac-Update   Apple holds its own with IBM     Newsbytes, April 10, 1984, Apple vs. IBM   Tomy drops out of US market     Toy & Hobby World, April 1984 pg. 
12   HP introduces the ThinkJet     https://www.hp.com/hpinfo/abouthp/histnfacts/museum/imagingprinting/0011/        https://archive.org/details/byte-magazine-1984-04-rescan/page/n11/mode/1up?view=theater   Moves to Asia continue     Newsbytes, April 17, 1984, Stealing some thunder         https://www.nytimes.com/1984/04/26/nyregion/thursday-april-26-1984-international.html      Bill Gates on Time cover     https://content.time.com/time/covers/0,16641,19840416,00.html     https://www.nytimes.com/1984/04/22/business/the-heady-world-of-ibm-suppliers.html   IBM working on GUI     https://archive.org/details/byte-magazine-1984-04-rescan/page/n10/mode/1up?view=theater        https://en.wikipedia.org/wiki/IBM_TopView   3rd parties abandon Adam     https://archive.org/details/computer-entertainer-3-1/page/8/mode/1up?view=theater     https://archive.org/details/computer-entertainer-3-1/page/10/mode/1up?view=theater   Imagine changes course on  price cut     https://archive.org/details/PersonalComputerNews/PersonalComputerNews056-07Apr1984/page/n6/mode/1up?view=theater   UK game prices hit 1 pound 99!     https://archive.org/details/PersonalComputerNews/PersonalComputerNews057-14Apr1984/page/n3/mode/1up?view=theater     https://archive.org/details/PersonalComputerNews/PersonalComputerNews057-14Apr1984/page/n4/mode/1up?view=theater   PC Write shareware model a success     Newsbytes, April 10, 1984, Betting on Piracy   Wordstar hits hard times     Newsbytes, April 17, 1984, Software Snafus   Atari Program Exchange shuttered     https://archive.org/details/computer-entertainer-3-1/page/n4/mode/1up?view=theater   Atari introduces AtariLab     PRESS CONFERENCE, PR Newswire, April 4, 1984, Wednesday         https://en.wikipedia.org/wiki/AtariLab   Broderbund announces Print Shop     https://archive.org/details/computer-entertainer-3-1/page/6/mode/1up?view=theater      HESWare hires Nimoy     https://archive.org/details/Ahoy_Issue_04_1984-04_Ion_International_US/page/n7/mode/2up   Empires redefines pass-and-play     https://archive.org/details/PersonalComputerNews/PersonalComputerNews057-14Apr1984/page/n6/mode/1up?view=theater        https://spectrumcomputing.co.uk/entry/9706   Synapse withdraws from UK market     https://archive.org/details/computer-and-videogames-030/page/n19/mode/1up   Random House enters software biz     https://www.nytimes.com/1984/04/11/business/advertising-random-s-software-account.html      64er launches     https://archive.org/details/64er_1984_04/mode/2up      France goes online     https://www.nytimes.com/1984/04/30/business/computer-linkups-spurred-by-france.html   Games Network signs up additional franchises     https://archive.org/details/popular-computing-weekly-1984-04-12/page/n18/mode/1up?view=theater        https://archive.org/details/popular-computing-weekly-1984-04-26/page/n12/mode/1up?view=theater   Videotext gone took 'r jawbs!     https://www.nytimes.com/1984/04/29/arts/tv-view-a-corporate-look-into-the-near-future.html      MITI throws in the towel     MITI GIVES UP PLAN TO SUBMIT SOFTWARE BILL TO DIET, Japan Economic Newswire, APRIL 21, 1984, SATURDAY       REVIEW-SOFTWARE: JAPAN, U.S. 
REMAIN APART ON SOFTWARE PROTECTION, Japan Economic Newswire, APRIL 19, 1984, THURSDAY   CalTech students hack Rose Bowl scoreboard     https://archive.org/details/byte-magazine-1984-04-rescan/page/n10/mode/1up?view=theater        https://www.edn.com/forget-touchdowns-engineers-score-with-pranks/   RCA axes Video disk https://www.nytimes.com/1984/04/12/arts/tv-review-viewer-controls-screen.html?searchResultPosition=1     https://www.nytimes.com/1984/04/06/business/rca-defends-timing-of-videodisk-canceling.html   Licensing business explodes     Children's Characters Stir Big Sales, The Associated Press, April 25, 1984, Wednesday, PM cycle, Section: Domestic News, Byline: By ROBERT WADE, Associated Press Writer   Takara pens deal with Hasbro     US firm permitted to use characters, The Japan Economic Journal, April 17, 1984, Section: SERVICE/LEISURE/FOOD; Pg. 18     Toy & Hobby World, April 1984   D&D scare hits the UK     https://archive.org/details/popular-computing-weekly-1984-04-12/page/n1/mode/2up Recommended Links:   The History of How We Play: https://thehistoryofhowweplay.wordpress.com/ Gaming Alexandria: https://www.gamingalexandria.com/wp/ They Create Worlds: https://tcwpodcast.podbean.com/ Digital Antiquarian: https://www.filfre.net/ The Arcade Blogger: https://arcadeblogger.com/ Retro Asylum: http://retroasylum.com/category/all-posts/ Retro Game Squad: http://retrogamesquad.libsyn.com/ Playthrough Podcast: https://playthroughpod.com/ Retromags.com: https://www.retromags.com/ Games That Weren't - https://www.gamesthatwerent.com/ Sound Effects by Ethan Johnson of History of How We Play. Copyright Karl Kuras

Dark Finds Podcast
#10 - KevinJBeatz & Alan Kay - Addiction, Prison, Making Music, and Taking Risks

Dark Finds Podcast

Play Episode Listen Later May 9, 2024 59:39


On today's episode we have rappers KevinJBeatz and Alan Kay. Kevin's style of music blends elements of cloud rap, emo rap, and trap music, while Alan's style brings raw, relatable lyrics and melodic vocals. We get through a lot in this episode, from dropping out of school at seventeen, to Xanax addiction, spending time in prison, making music, taking risks, and finding creative inspiration through pain.
Alan Kay's IG - https://www.instagram.com/theonly.alankay
KevinJBeatz IG - https://www.instagram.com/kevinjbeatz
Alan's Spotify - https://open.spotify.com/artist/7KARNYQZORDXiXRfwZ2yaC
Kevin's Spotify - https://open.spotify.com/artist/40iB6Q7G2XujthlMUfFDnl
Dark Finds IG - https://www.instagram.com/dark.finds/
Dark Finds Podcast IG - https://www.instagram.com/darkfindspodcast/
Dark Finds Youtube - https://www.youtube.com/channel/UCap737Yzv3XONh8GaARy5bw
Dark Finds book - https://www.amazon.com/Dark-Finds-Gary-Colton/dp/1805173634/ref=sr_1_1?crid=1G9HA22S7IUXH&dib=eyJ2IjoiMSJ9.jWppV6gOilTHk6itSI62xEZM7JDxuHpsFX7nniLg2hNrNGA3GCBKyKB4IVXnxblY40oCqVusYd8ANP9my9zV6i-UqhKn5I-ATRqpyYYaAoxfDueFrXJVfbOuwjwNktHDdETVSRbrfNcAeDkr_ecl_J2ScOGreBMkOd3JJVlo64xwLM4ZRzLt-y_Yo74kBHm24spAFtRbWjCLwwhZ0L9w2D0NsorIkOc45Gpv8_7ioPE.MHBrgD9sxfSSwH4JCf5kBU4GDL32_1YWqKOrLXGOxWg&dib_tag=se&keywords=Dark+Finds+book&qid=1715244127&sprefix=dark+finds+book%2Caps%2C179&sr=8-1
Hosted on Acast. See acast.com/privacy for more information.

MacVoices Video
MacVoices #24109: Filmmaker Dave Greelish on 'Before Macintosh: The Apple Lisa' (1)

MacVoices Video

Play Episode Listen Later May 1, 2024 26:20


Our conversation with documentary filmmaker Dave Greelish about his new documentary on one of Apple's “failures”, Before Macintosh: The Apple Lisa. The discussion into the evolution of personal computing and Lisa's part in it, why the Apple Vision Pro and the Lisa may share a place in Apple's legacy going forward, and the impact of AI and automation on where we've been and where we are going. (Part 2)  This edition of MacVoices is supported by The MacVoices Slack. Available all Patrons of MacVoices. Sign up at Patreon.com/macvoices. Show Notes: Chapters: 05:23 Sharing the Story 09:23 Fear and Excitement of AI 13:35 Prioritizing Values 21:38 The Evolution of AI 25:04 Viewing Options 27:03 Diverse Discussions Links: ClassicComputing.com Before Macintosh: The Apple Lisa - Vimeo Link Guests: David Greelish is a Computer Historian, Writer & Media Producer. He has studied computer history, collected old computers and written about them for over 25 years now. In 1993, he founded the Historical Computer Society and published a fanzine called Historically Brewed. In 2010, he founded the Atlanta Historical Computing Society and was the Director of the first Vintage Computer Festival Southeast in 2013. His book, Classic Computing: The Complete Historically Brewed contains all of his computer history zines, plus his personal story. He has written extensively about computer history, created and participated in numerous podcasts, plus conducted both audio and written interviews with notable people in the industry, like: Ed Roberts, John Sculley, Alan Kay and more. He has also presented at numerous vintage computing conventions. Find out more at Dave's web site, Classic Computing, follow him on Twitter, and check out his Love Notes to Newton documentary. Support: Become a MacVoices Patron on Patreon      http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect: Web:      http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner      http://www.twitter.com/macvoices Mastodon:      https://mastodon.cloud/@chuckjoiner Facebook:      http://www.facebook.com/chuck.joiner MacVoices Page on Facebook:      http://www.facebook.com/macvoices/ MacVoices Group on Facebook:      http://www.facebook.com/groups/macvoice LinkedIn:      https://www.linkedin.com/in/chuckjoiner/ Instagram:      https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes      Video in iTunes      Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Video
MacVoices #24110: More About 'Before Macintosh: The Apple Lisa" with Filmmaker Dave Greelish (2)

MacVoices Video

Play Episode Listen Later May 1, 2024 28:14


Our conversation with documentary filmmaker Dave Greelish about his new documentary on one of Apple's “failures”, Before Macintosh: The Apple Lisa. The discussion into the evolution of personal computing and Lisa's part in it, why the Apple Vision Pro and the Lisa may share a place in Apple's legacy going forward, and the impact of AI and automation on where we've been and where we are going. (Part 2)  http://traffic.libsyn.com/maclevelten/MV24110.mp3 This edition of MacVoices is supported by The MacVoices Slack. Available all Patrons of MacVoices. Sign up at Patreon.com/macvoices. Show Notes: Chapters: 05:23 Sharing the Story 09:23 Fear and Excitement of AI 13:35 Prioritizing Values 21:38 The Evolution of AI 25:04 Viewing Options 27:03 Diverse Discussions Links: ClassicComputing.com Before Macintosh: The Apple Lisa - Vimeo Link Guests: David Greelish is a Computer Historian, Writer & Media Producer. He has studied computer history, collected old computers and written about them for over 25 years now. In 1993, he founded the Historical Computer Society and published a fanzine called Historically Brewed. In 2010, he founded the Atlanta Historical Computing Society and was the Director of the first Vintage Computer Festival Southeast in 2013. His book, Classic Computing: The Complete Historically Brewed contains all of his computer history zines, plus his personal story. He has written extensively about computer history, created and participated in numerous podcasts, plus conducted both audio and written interviews with notable people in the industry, like: Ed Roberts, John Sculley, Alan Kay and more. He has also presented at numerous vintage computing conventions. Find out more at Dave's web site, Classic Computing, follow him on Twitter, and check out his Love Notes to Newton documentary. Support: Become a MacVoices Patron on Patreon      http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect: Web:      http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner      http://www.twitter.com/macvoices Mastodon:      https://mastodon.cloud/@chuckjoiner Facebook:      http://www.facebook.com/chuck.joiner MacVoices Page on Facebook:      http://www.facebook.com/macvoices/ MacVoices Group on Facebook:      http://www.facebook.com/groups/macvoice LinkedIn:      https://www.linkedin.com/in/chuckjoiner/ Instagram:      https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes      Video in iTunes      Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #24110: More About 'Before Macintosh: The Apple Lisa' with Filmmaker David Greelish (2)

MacVoices Audio

Play Episode Listen Later Apr 24, 2024 28:15


Our conversation with documentary filmmaker David Greelish about his new documentary on one of Apple's “failures”, Before Macintosh: The Apple Lisa. The discussion delves into the evolution of personal computing and the Lisa's part in it, why the Apple Vision Pro and the Lisa may share a place in Apple's legacy going forward, and the impact of AI and automation on where we've been and where we are going. (Part 2)
http://traffic.libsyn.com/maclevelten/MV24110.mp3
This edition of MacVoices is supported by The MacVoices Slack. Available to all Patrons of MacVoices. Sign up at Patreon.com/macvoices.
Show Notes:
Chapters: 05:23 Sharing the Story 09:23 Fear and Excitement of AI 13:35 Prioritizing Values 21:38 The Evolution of AI 25:04 Viewing Options 27:03 Diverse Discussions
Links: ClassicComputing.com Before Macintosh: The Apple Lisa - Vimeo Link
Guests: David Greelish is a Computer Historian, Writer & Media Producer. He has studied computer history, collected old computers, and written about them for over 25 years now. In 1993, he founded the Historical Computer Society and published a fanzine called Historically Brewed. In 2010, he founded the Atlanta Historical Computing Society and was the Director of the first Vintage Computer Festival Southeast in 2013. His book, Classic Computing: The Complete Historically Brewed, contains all of his computer history zines, plus his personal story. He has written extensively about computer history, created and participated in numerous podcasts, and conducted both audio and written interviews with notable people in the industry, like Ed Roberts, John Sculley, Alan Kay, and more. He has also presented at numerous vintage computing conventions. Find out more at Dave's web site, Classic Computing, follow him on Twitter, and check out his Love Notes to Newton documentary.
Support: Become a MacVoices Patron on Patreon: http://patreon.com/macvoices. Enjoy this episode? Make a one-time donation with PayPal.
Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/
Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #24109: Filmmaker David Greelish on 'Before Macintosh: The Apple Lisa' (1)

MacVoices Audio

Play Episode Listen Later Apr 22, 2024 26:21


Documentary filmmaker Dave Greelish shares the story of how his latest effort, Before Macintosh: The Apple Lisa, was rooted in his passion for Apple tech that dates back to his college days. He discusses the evolution of consumer preferences from functionality to design in selecting computers. The conversation explores the significance of Apple's failures like the Lisa and the Newton, highlighting their role in shaping the company's successful products. (Part 1)
This edition of MacVoices is brought to you by the MacVoices Dispatch, our weekly newsletter that keeps you up-to-date on any and all MacVoices-related information. Subscribe today and don't miss a thing.
Show Notes:
Chapters: 03:31 The Evolution of Macintosh 05:23 From Lisa to Macintosh 08:54 Unveiling the Lisa Documentary 10:44 Starting the Documentary Process 14:06 The Journey of Making a Documentary 19:30 Speculations on Apple's Future 19:47 Surprises from the Interviews
Links: ClassicComputing.com Before Macintosh: The Apple Lisa - Vimeo Link
Guests: David Greelish is a Computer Historian, Writer & Media Producer. He has studied computer history, collected old computers, and written about them for over 25 years now. In 1993, he founded the Historical Computer Society and published a fanzine called Historically Brewed. In 2010, he founded the Atlanta Historical Computing Society and was the Director of the first Vintage Computer Festival Southeast in 2013. His book, Classic Computing: The Complete Historically Brewed, contains all of his computer history zines, plus his personal story. He has written extensively about computer history, created and participated in numerous podcasts, and conducted both audio and written interviews with notable people in the industry, like Ed Roberts, John Sculley, Alan Kay, and more. He has also presented at numerous vintage computing conventions. Find out more at Dave's web site, Classic Computing, follow him on Twitter, and check out his Love Notes to Newton documentary.
Support: Become a MacVoices Patron on Patreon: http://patreon.com/macvoices. Enjoy this episode? Make a one-time donation with PayPal.
Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/
Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

OBio
Biology for the rest of us

OBio

Play Episode Listen Later Apr 1, 2024 15:22


It's 5:00 AM in Toluca when Sofia Sanchez wakes up to the sound of her Westclox, ready for another day as a business major. She drives to her hometown's business school, listening to the radio, wondering what her professor's reaction will be to the 76-page final report she wrote with the help of her beloved IBM Selectric, her worn-out Chartpak stencil, her family's encyclopedia, and Mexico City's IBM center for some calculations. She makes a stop at a phone call station to remind her teammate to correct the finger mistake on the cardboard they made about some ultra-innovative “personal computing” devices her uncle in the US once told her about. Not much more is to be said about Sofia Sanchez. April 1st, 2024 was just another day for a normal 20-year-old in a small town in Mexico.
Thank Lick it's April Fools'!
“How do the current and next thing get determined?” was the question that gave birth to this piece. Whether they be ideas, beliefs, fonts, or technologies, I'd been deeply curious to understand the nature of trends in our world. I wanted to know if there were principles for steering the world towards a particular direction, and whether I could use those principles to change the way biotechnology changes the world.
For the past few weeks, I've explored J.C.R. Licklider's visionary mind, Everett Rogers's theory of the Diffusion of Innovations, René Girard's Theory of Mimetic Desire, and the desires of Gen Z influencers on Instagram. This piece is a third batch of thoughts at the intersection of culture and biology, this time focusing on how biology can influence modern culture as much as modern culture influences biology.
The Presynbiotech Era
The inhabitants of the terrifying alternate universe I described in the very first paragraph are firm believers that the future is merely what hasn't happened yet but eventually will. To them, technology equals destiny and evolution is synonymous with inexorable. What we, in this universe, made happen in 30 years, they might start dreaming about in 60.
To your fortune and mine, five years before the birth of Moore's Law, Man-Computer Symbiosis was published by J.C.R. Licklider (Lick to his friends). While Moore had made an accurate prediction of how far we could take computers, Lick set a clear vision and agenda for how personal computers and the intergalactic network would become the new medium of creative expression: how tech would change culture.
Lick dreamed about everything from Zoom to PayPal, Instagram, and the iPad, and he led the development of early versions of them. Slightly paraphrased, one of my absolute favorite passages in The Dream Machine answers how he did that:
Lick knew he couldn't get it all done in one year or two years or a lifetime. By creating a community of fellow believers, however, he guaranteed that his vision would live on after him. When he arrived at ARPA in 1962, there was nothing more than a handful of uncoordinated efforts scattered across the country. By the time he left in 1964, he had forged those into a nationwide movement with direction, coherence and purpose. By putting most of the money into universities, he supported the rising generation, whose hearts and minds he won and convinced that computer science was an exciting thing to do. In 1988, that community was the one thing that he was willing to take credit for: “I think I found a lot of bright people and got them working in this area”.
Of course, we would be doomed too if the world were only full of dreamers like Lick!
We need builders like Alan Kay who bring those dreams to life, and artists like Steve Jobs who steal from them and bring the tech to everyone once it's cheap enough. Call it lack of vision, bad management, or bad timing: neither Intel nor HP nor Xerox nor IBM capitalized on their decades of technological advantage over the garage hackers. Yet the garage hackers didn't invent the tech alone; they mostly put the puzzle together once the pieces were there. Dream, build, sell.
Today, we are living in the Presynbiotech Era. It is a time before ubiquitous Artificial Intelligence-engineered life, a time before the worlds of silicon and carbon intelligence truly fuse as one.
Oddly enough, I have not yet encountered a clear 50-year dream of the social paradigm shifts that we will create through biotechnology. If no one's even dreaming, how will we arrive at a different future?
Computers have changed the way we think. Synbio will change the way we feel. There will not only be dinosaurs, but Ubers, TikTok marketplaces, and Airbnbs. We will have new kinds of enhanced humans of mixed sexes, races, and talents; iPlants that act as programmable vending machines where I can buy anything on the street; an axolotl-turtle hybrid that wakes me up in the morning and cleans my bedroom; and espresso machines that use personalized capsules to track my biomarkers and keep each one in range. The end of famine, disease, poverty, and war too.
The fact that we even start thinking of a Presynbiotech Era means that we can start dreaming about a Synbiotech one. If you'd been waiting for a call, here you go: I'm calling YOU to share more audacious, truly crazy dreams of a Synbiotech future, for those who look back on these dreams 50 years hence might as well be living them.
And surely, remember: had the world waited for Lick to sell iPads, you would not be reading this either. Dreams matter as much as execution, and you truly can learn anything. So if you enjoy building, go for it. Real artists ship.
Today's weirdos are tomorrow's basics
In his theory of the Diffusion of Innovations, Everett Rogers defines innovation as anything that is perceived as new by someone. He categorizes the adopters of an innovation into five main groups, according to influential power, risk aversion, and the resulting order in which they adopt: the innovators, early adopters, early majority, late majority, and laggards (a business school all-time favorite).
The graph is so good that reading the whole book won't get you much farther than: 1) it's early adopters (aka influencers) and not innovators who bring innovations to billions; 2) though anyone can be an influencer, the closer the better; 3) the who matters more than the what.
The theory checks out as I look back on my latest purchasing and habit decisions. For example, I'd heard about the WHOOP band from Bryan Johnson, Steve from Diary of a CEO, other macro influencers, and numerous YouTube ads, but I was only convinced enough to buy it once I interacted closely with a user I could relate to: a young, female, Mexican bioengineer who'd graduated from the same university where I study.
Something Tesla has also made clear is that innovators and early adopters can absorb the cost of being first. They help spread the technology not only by providing feedback and generating revenue, but also by modeling that desire for the early majority that relies on and admires them.
Where Rogers falls short and Girard goes deeper is in the actual dynamics between influencers and followers.
What I learned from reading about the Theory of Mimetic Desire is that, in our quest for personal differentiation, we look up to people whom we can relate to, who are looked up to by people like us AND who have something we still lack.The most obvious examples of influencers are early adopters like Bryan Johnson and Steve. The more subtle yet more powerful influencers are people like the bioengineer with the WHOOP who had something I wanted (being perceived as a healthy person) while still being relatable.Lick's dreams have come to life to such an extent that being an internet influencer is no longer exclusive to elites. The rise of the microinfluencer (

Hacker News Recap
March 19th, 2024 | How to Start Google

Hacker News Recap

Play Episode Listen Later Mar 20, 2024 19:39


This is a recap of the top 10 posts on Hacker News on March 19th, 2024. This podcast was generated by wondercraft.ai.
(00:33): How to Start Google. Original post: https://news.ycombinator.com/item?id=39756865&utm_source=wondercraft_ai
(02:12): Java 22 Released. Original post: https://news.ycombinator.com/item?id=39755471&utm_source=wondercraft_ai
(04:05): Lego price per part over the years. Original post: https://news.ycombinator.com/item?id=39759693&utm_source=wondercraft_ai
(05:45): Inside the Massive Alleged AT&T Data Breach. Original post: https://news.ycombinator.com/item?id=39754330&utm_source=wondercraft_ai
(07:39): BootLogo: Logo language in 508 bytes of x86 machine code. Original post: https://news.ycombinator.com/item?id=39753650&utm_source=wondercraft_ai
(09:25): Root cause of Alzheimer's may be fat buildup in brain cells, research suggests. Original post: https://news.ycombinator.com/item?id=39760333&utm_source=wondercraft_ai
(11:16): Intel 8080 emulator. 19th IOCCC. Best of Show. Original post: https://news.ycombinator.com/item?id=39758667&utm_source=wondercraft_ai
(13:38): What does Alan Kay think about LLMs? Original post: https://news.ycombinator.com/item?id=39758391&utm_source=wondercraft_ai
(15:30): USB hubs, printers, Java, and more seemingly broken by macOS 14.4 update. Original post: https://news.ycombinator.com/item?id=39755358&utm_source=wondercraft_ai
(17:21): Astronaut Thomas Stafford has died. Original post: https://news.ycombinator.com/item?id=39755267&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Geldbewusst
Episode 352 - Predicting the Future

Geldbewusst

Play Episode Listen Later Oct 26, 2023 2:04


Predicting the future. Unfortunately, and I mean that sincerely, I keep noticing that people bury their heads in the sand when they find themselves in financial difficulties. That rarely helps. Difficulties are not resolved by doing nothing, but by taking action. Of course, I cannot foresee the future either, but I know there is something to the following quote: “The best way to predict the future is to shape it yourself.” It comes from Alan Kay. Why? Because I experience it myself. I shape my future by setting financial goals, working on them, and then seeing the results that bring me ever closer to the goal. Through my self-set goals, I shape what I predict for myself. To be clear: predicting the future, folding your hands in your lap and waiting, and then experiencing a different future is possible too, of course, but it is disappointing. Wanting to shape it, being able to, and being allowed to: that is the triad it takes. Working on your own financial future is a privilege. If you also want to work on your financial future and need a nudge, feel free to use my 90-day program with 90 questions that will move you forward financially. Your investment is 27 euros. Here is the booking link: http://zum-kurs.de/?geldbewusst_leben I wish you a successful week.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.
Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As "stealth" foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group, given they have no publicly released models to try out. However, ever since their $20m Series A last year, their goal has been to "develop generally capable AI agents with human-like intelligence in order to solve problems in the real world".
From RL to Reasoning LLMs
Along with their Series A, they announced Avalon, "A Benchmark for RL Generalization Using Procedurally Generated Worlds". Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft, enabling fast RL benchmarking with a clear reward and adjustable game difficulty.
After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors and climbing, but couldn't move on to higher-level tasks. A pure RL setup also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the "models for reasoning" path:
"The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing."
Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans have made through the scientific method:
"We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them."
The Full Stack Model Lab
One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a "full stack" approach:
* Models. Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster that lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools, from coding agents for fixing type checking and linting errors to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces: IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.
Kanjun believes we are still in the "bare metal phase" of agent development, and they want to take a holistic approach to building the "operating system for agents". We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!
Timestamps
* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round
Show Notes
* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned:
* Momentum Contrast
* SimCLR
* Chelsea Finn - SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI
Transcript
Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]
Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side. 
So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you're eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. 
[00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. [00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? 
Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So worked in language, SimClear came out, I think MoCo had come out, Momentum Contrast had come out earlier in 2019, SimClear came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text and suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became General Intelligence, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. 
So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right. The structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. That are things that we lump under the bucket of reasoning and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. Carbs, our hyperparameter optimizer came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics. He's a plasma physicist to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them shaped blob inside imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improve the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agent's project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents then we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. 
And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually big, maybe 50% plus of the work is figuring out even if you do have models that reason well, like the models are still stochastic. The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? So one way I'll put this is people estimate that Llama2 maybe took about three to $4 million of compute, but probably 20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is close to actually good, as good as human labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. 
I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft and then OpenAI is like the gym thing and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure that out that you needed to just like build your own thing? Was it kind of like your engineering team was like, Hey, this is too slow. Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? Like RL agents at that time and environments at that time. What we heard from other RL researchers was the like biggest thing keeping holding the field back is lack of benchmarks that let us explore things like planning and curiosity and things like that and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out in a situation where, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? That's a lot of what we had seen is that like these very handcrafted rewards. And so Avalon has like a single reward it's across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non RL specialists, what is a curriculum in your terminology? [00:21:46]Kanjun: So a curriculum in this particular case is basically the environment Avalon lets us generate simpler environments and harder environments for a given tasks. What's interesting is that the simpler environments, what you'd expect is the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. 
And that's actually an interesting general intuition to have about training these things as like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford was I think working on SACAN at the time where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. [00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it? 
Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these, like spend more computer inference time, context hack, retrieve augmented generation, et cetera, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you kind of like trying to get more signal out of the channel. We don't like to think about it that way. It's what the default approach is, is like trying to get more signal out of this noising channel. But the issue with agents is as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. Like it's not as bad as self-driving, like in self-driving, you know, you're like hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress and one thing we learned by using these things internally is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far and we need to make the models better. We also need to make the kind of like interface to the user better. And also a lot of the like critiquing. I hope what we can do is help people who are building agents actually like be able to deploy them. I think, you know, that's the gap that we see a lot of today is everyone who's trying to build agents to get to the point where it's robust enough to be deployable. It just, it's like an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Embu is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around like debugging, around abstractions or techniques after the model generation happens. Like after the language model generates the text and like interfaces for the user and the underlying model itself, like models talking to each other, maybe some set of those things kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can like start making an agent, deploy it, it's reliable, like very quickly. And there's a similar analog to software engineering, like in the early days, in the seventies and the sixties, like to program a computer, like you have to go all the way down to the registers and write things and eventually we had assembly. That was like an improvement. But then we wrote programming languages with these higher levels of abstraction and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here where we're like in the like bare metal phase of agent building. 
And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz and we were like, there's not a lot of reasoning data out there. And can the models really understand? And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, like do you have to do a lot of like manual work or like is there a way to prompt models to extract the reasoning from actions that they [00:29:03]Swyx: see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask like, huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we didn't always have them. And because they're invented, like we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box or like it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is like a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new, like scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun and people are like, man, this model is crap. It's like, what are you talking about? Like the sun revolves around the earth. It's like, how do you see the future? Like if the models are actually good enough, but we don't believe them, it's like, how do we make the two live together? So you're like, you use Inbu as a scientist to do a lot of your research and Inbu tells you, hey, I think this is like a serious path you should go down. And you're like, no, that sounds impossible. Like how is that trust going to be built? And like, what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is as a person, like I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is like, okay, how do you do that? And that's kind of some of our debugging tools, they're not necessarily just for debugging. They're also for like interfacing with and interacting with the model. So like if I go back in this reasoning trace and like change a bunch of things, what's going to happen? Like, what does it conclude instead? So that kind of helps me understand like, what are its assumptions? And, you know, we think of these things as tools. And so it's really about like, as a user, how do I use this tool effectively? 
I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as like a tool within InView just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities. I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. What do you think about the different approaches people have, kind of like text first, browser first, like multi-on? What do you think the best interface will be? Or like, what is your, you know, thinking today? [00:33:59]Kanjun: In a lot of ways, like chat as an interface, I think Linus, Linus Lee, you had on this. I really like how he put it. Chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines because that's what we understood these like objects to be. Chat, like texting someone is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, like chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally have, think about what are the right representations for that? 
Like architecturally, like what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like, you know, this stochastic generation of text is like a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is actually by building better abstractions and better tooling, we can actually make these things non-leaky. And now you can build like whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned, this is kind of like the Xerox Spark moment for AI. And we had a lot of stuff come out of Parc, like the, what you see is what you got editors and like MVC and all this stuff. But yeah, but then we didn't have the iPhone at Parc. We didn't have all these like higher things. What do you think it's reasonable to expect in like this era of AI, you know, call it like five years or so? Like what are like the things we'll build today and what are things that maybe we'll see in kind of like the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. Like what we're seeing right now is basically like a continuous wave. Let me zoom a little bit earlier. So people like the Xerox Parc analogy I give, but I think there are many different analogies. Like one is the like analog to digital computer is kind of an example, like another analogy to where we are today. The analog computer Vannevar Bush built in the 1930s, I think, and it's like a system of pulleys and it can only calculate one function. Like it can calculate like an integral. And that was so magical at the time because you actually did need to calculate this integral bunch, but it had a bunch of issues like in analog errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer, like Turing's decidability, Shannon. I think the like whole like relay circuits can be thought of as can be mapped to Boolean operators and a set of other like theoretical breakthroughs, which essentially were abstractions. They were like creating abstractions for these like very like lossy circuits. They were creating abstractions for these like very analog circuits and digital had this nice property of like being error correcting. And so when I talk about like less leaky abstractions, that's what I mean. That's what I'm kind of pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about like, how do we get to computers that as a person, I can actually use well. And the interface actually helps it unlock so much more power. So the sets of things we're working on, like the sets of abstractions and the interfaces, like hopefully that like help us unlock a lot more power in these systems. Like hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out. It's like an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now. 
Part of why I think it's early is because the issue with agents, it's not quite like the internet where you could like make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think that's, I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent to agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain of thoughting, but any perspectives on kind of meta-GPT, I think it's the name of the paper. I don't know if you care about at the level of individual papers coming out, but I did read that recently and TLDR, it beat GPT-4 and human eval by role-playing software agent development agency, instead of having sort of single shot or single role, you have multiple roles and how having all of them criticize each other as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction of like, okay, can I just plop in this like multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent? One issue with this kind of prompting is that it's still not very reliable. It's like, there's one lens, which is like, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt and like, how do we deal with that messiness? And so maybe you need like cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would say I would contrast your approach with OpenAI as OpenAI tends to just lean on, hey, we played StarCraft or hey, we ran it on the SAT or the, you know, the AP bio test and that did results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the Diamond X or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineer, the people who are new to the scene. But for people like yourselves, you build Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]Kanjun: I mean, Jim, I think is awesome. 
And I really like the Voyager paper and I think it has a lot of really interesting ideas, which is like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to like learn the things we wanted. And so it's like not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment. But if you're doing our own sandbox to really trying to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things like Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away? Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long running tasks. And the blocker to that future, one of the blockers is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly like 32 kilobytes or something to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really was like happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my fields and a lot of that data is maybe hard to fine tune or on, or maybe hard to like put into pre-training. 
Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points. And ideally I'd want to store those somehow, or like use those to fine tune myself as a model somehow, or like have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for like user preferences and things like that. Like what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to InView is Adept. You know, a research lab with like some amount of product situation on the horizon, but not just yet, right? Why should people work for InView over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to like build something that enables other people to build agents and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of like talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of like, why us? I think like strong focus on reasoning, which we believe is the biggest blocker, on inspectability, which we believe is really important for user experience and also for the power and capability of these systems. Building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable. And then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people that becomes something that is a platform. Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. 
I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. So we build a lot of our own evals and every agent builder tells me, basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me because I would love to, like, figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]Swyx: more about it. [00:48:12]Alessio: So one is why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to like go explore things or kind of like focus on your career? How do you think about that from like an agent perspective? Where it's like, should you just stick to the task and try and solve it as in the guardrails as possible? Or like, should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and said, okay, here, like fork this and do a bunch of different things. I try a bunch of different things. Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent to like add a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money? 
How do you feel your options to be curious versus like goal oriented has changed as you raise more money and kind of like the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A $20 million in late 2021. And our entire philosophy at that time was, and still kind of is, is like, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. Great name, by the way. [00:50:28]Swyx: Thank you. [00:50:29]Kanjun: Is there a story behind that name? [00:50:30]Swyx: Yeah. [00:50:31]Kanjun: Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work. But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models. And for those experiment results to carry to larger ones. [00:50:56]Swyx: Which you also published a scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest. And from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this is might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger. [00
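The "plop in multi-role critiquing" and agent-forking ideas discussed in the interview above are easy to sketch in code. What follows is a minimal illustration only, not how the lab in the interview actually builds its agents; `complete` is a hypothetical stand-in for whatever prompt-in, text-out model call you have available.

```python
from typing import Callable

def critique_and_revise(task: str, complete: Callable[[str], str], rounds: int = 2) -> str:
    """Minimal writer/critic loop in the spirit of multi-role critiquing.

    `complete` is a hypothetical placeholder for any text-generation call.
    """
    # One role drafts a solution.
    draft = complete(f"You are the writer. Produce a solution for this task:\n{task}")
    for _ in range(rounds):
        # A second role critiques the draft.
        review = complete(
            "You are the critic. List concrete problems with the solution below, "
            f"or reply with exactly APPROVED.\nTask: {task}\nSolution:\n{draft}"
        )
        if "APPROVED" in review:
            break  # the critic is satisfied; stop early
        # The writer revises in response to the critique.
        draft = complete(
            "You are the writer. Revise the solution to address the critique.\n"
            f"Task: {task}\nCritique:\n{review}\nPrevious solution:\n{draft}"
        )
    return draft
```

Because every intermediate draft and critique is ordinary text, the loop can be paused, inspected, or forked at any round, for example running one branch with a goal-oriented prompt and another with a more exploratory one, which is the kind of control over agents the interview argues users need.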

Future of Coding
A Small Matter of Programming by Bonnie Nardi

Future of Coding

Play Episode Listen Later Aug 23, 2023 154:50


This community is a big tent. We welcome folks from all backgrounds, and all levels of experience with computers. Heck, on our last episode, we celebrated an article written by someone who is, rounding down, a lawyer! A constant question I ponder is: what's the best way to introduce someone to the world of FoC? If someone is a workaday programmer, or a non-programmer, what can we share with them to help them understand our area of interest? A personal favourite is the New Media Reader, but it's long and dense. An obvious crowd-pleaser is Inventing on Principle. Bonnie Nardi's A Small Matter of Programming deserves a place on the list, especially if the reader is already an avid programmer who doesn't yet understand the point of end-user programming. They might ask, "Why should typical computer users bother learning to program?" Well, that's the wrong question! Instead, we should start broader. Why do we use computers? What do we use them to do? What happens when they don't do what we want? Who controls what they do? Will this ever change? What change do we want? Nardi challenges us to explore these questions, and gives the reader a gentle but definitive push in a positive direction. Next time, we're… considered harmful? #### $ We have launched a Patreon! => patreon.com/futureofcoding If, with the warmth in your heart and the wind in your wallet, you so choose to support this show then please know that we are tremendously grateful. Producing this show takes a minor mountain of effort, and while the countless throngs of adoring fair-weather fans will surely arrive eventually, the small kilo-cadre of diehard listeners we've accrued so far makes each new episode a true joy to share. Through thick and thin (mostly thin since the sponsorship landscape turned barren) we're going to keep doing our darnedest to make something thought-provoking with an independent spirit. If that tickles you pink, throw some wood in our fireplace! (Yes, Ivan is writing this, how can you tell?) Also, it doesn't hurt that the 2nd bonus episode — "Inherently Spatial" — is one of the best episodes of the show yet. It defrags so hard; you'll love it. #### Init Bug report: Frog Fractions. Oh the indignity! Hey, it's The Witness in our show notes again. Getting Over It with Bennett Foddy is the better game, even if it spawned Only Up and other copycats that miss the point. The Looker gets the point. Getting Over It is a triumph that emerged from a genre of games that are hard to play: Octodad, QWOP, I Am Bread. Braid arguably spawned the genre of high-minded & heady puzzlers that all try to say something profound through their design. Cookie Clicker and Universal Paperclips are good incremental games. Jump King and Only Up are intentionally bad. Flappy Bird was accidentally good. Surgeon Simulator and Goat Simulator are purely for the laughs. Stanley Parable, like Getting Over It, brings in the voice of the creator to (say) invite rumination on the fourth wall, which is what makes them transcendent. Here's the trailer for Bennett Foddy's new game, Baby Steps. So on the one hand we have all these "bad" and """bad""" and sometimes bad games, which actually end up doing quite well in advancing the culture. On the other hand we have The Witness, The Talos Principle, Swapper, Antichamber, QUBE, and all these high-minded puzzly games, which despite their best efforts to say something through their design… kinda don't.
When comparing the "interactivity" of these games, it's tempting to talk about the mechanics (or dynamics), but that formal definition feels a little too precise. We mean something looser — something closer to the colloquial meaning when "Gamers" talk about "game mechanics". Silent Football might be an example of "sports as art". Mao is a card game where explaining the rules is forbidden. #### Main The Partially Examined Life is one of Jimmy's favourite philosophy podcasts. Two essays from Scientific American's 1991 Special Issue Communications, Computers and Networks are referenced in the first chapter, one by Larry Tesler and one by Alan Kay. The other essays in this issue are also quite interesting to reflect on from our position 30 years hence. Apple's Knowledge Navigator video, and HP's 1995 video, are speculative fiction marketing about conversational agents. Rewind.ai is one of those "Computer, when did I last degauss the tachyon masticator?" tools. (Oh, Lifestreams…) S-GPT is Federico Viticci's iOS/Mac Shortcut that strings together ChatGPT and various Shortcuts features, so that you can do some nifty automation stuff via a conversational interface. It feels like similar things could be built — heck, probably already have been built — with "If-Tuh-Tuh-Tuh" or Zapier. When Ivan reaches for domain-specific terminology, LUT, Arri Alexa, and Red come easily because, like, he wishes he had occasion to use them. To hear the story about the Secret Service busting down young Jimmy's door, listen to his episode on the Code With Jason podcast. C Is Not a Low-level Language — a fantastic article about the illusion that our source code closely matches what actually happens during execution. What Follows from Empirical Software Research? Not much, according to Jimmy in this delightful article. Jimmy likes to reference Minecraft's "redstone" which acts a bit like a programming system, so here, have a video about redstone. Ivan saw this video via Mastodon, about someone making a "real" camera in Blender, and… just…

Prepping Academy
PrepperNet: Ready for Anything - News, Events, and Why You Need to Join!

Prepping Academy

Play Episode Listen Later Aug 3, 2023 67:00


Welcome to another exciting episode of the Prepping Academy podcast! In this episode, we're delving into all things PrepperNet, discussing what's new and coming and why you should consider joining this thriving community.We're thrilled to announce that PrepperNet has hit a significant milestone with a staggering 100,000 dedicated members! This growing community is a testament to the importance of preparedness in today's uncertain world.But that's not all – PrepperNet's reach extends far and wide, with 115 active city groups nationwide. These local groups provide a fantastic opportunity for like-minded individuals to come together, share knowledge, and build a strong support network.For those looking to connect with fellow preppers online, preppernet.net is the place to be, boasting 8,000 members. This digital hub serves as an invaluable platform for information exchange and fostering camaraderie within the community.The variety of activities and resources within PrepperNet is astounding. Members can enjoy meetups, participate in local city groups, engage in Ham Nets and Comms Groups, and take advantage of the partnership with AmRRON for emergency radio communications.But wait, there's more! PrepperNet also offers Zello Nets, a Book Club for sharing knowledge, a Singles Group for those looking for like-minded companions, and a yearly Bug Out Camp for practical training and bonding experiences.For those who love staying informed, PrepperNet provides access to live videos, engaging podcasts (like the one you're listening to), and informative webinars covering a wide range of preparedness topics.And the perks don't stop there – PrepperNet fosters a strong sense of community by organizing Potlucks, a Sisterhood Group, and Regional Camp Outs to connect members from various locations.So, if you're serious about preparedness and want to be part of a supportive, diverse, and ever-evolving community, PrepperNet is the place for you. Don't miss all the exciting opportunities and the chance to better prepare for the future. Join PrepperNet today and secure your place in this dynamic network of like-minded individuals. Stay ready, stay connected!www.preppernet.netwww.preppingacademy.comwww.forrestgarvin.comJoin PrepperNet.Net - https://www.preppernet.netPrepperNet is an organization of like-minded individuals who believe in personal responsibility, individual freedoms and preparing for disasters of all origins.PrepperNet Support the showPlease give us 5 Stars! www.preppingacademy.com Contact us: https://preppingacademy.com/contact/ www.preppernet.net Amazon Store: https://amzn.to/3lheTRTwww.forrestgarvin.com

The International Risk Podcast
Episode 122: Peter Hammarstedt and the International Risk of Overfishing in West Africa

The International Risk Podcast

Play Episode Listen Later Jul 2, 2023 36:04


The International Risk Podcast is a weekly podcast for senior executives, board members and risk advisors. In these podcasts, we speak with risk management specialists from around the world. Our host, Dominic Bowen, originally from Australia, is one of Europe's leading international risk specialists. Having spent the last 20 years successfully establishing large and complex operations in the world's highest risk areas and conflict zones, Dominic now joins you to speak with exciting guests from around the world to discuss risk. The International Risk Podcast – Reducing risk by increasing knowledge. Follow us on Facebook, Twitter, Instagram, and LinkedIn for all our great updates. In today's episode we are joined by Peter Hammarstedt, who is the Director of Campaigns for Sea Shepherd Global. He also sits on the Board of Sea Shepherd Global and is Chairman of Sea Shepherd Australia. He is captain of the ocean-going vessel Alan Kay, having spent more than 20 years at sea, including ten years in Antarctica. In 2014 and 2015, the ship Bob Barker, with Peter as captain, set the world record for the longest pursuit of a poaching vessel at sea after chasing the Interpol-wanted fishing vessel "Thunder" for 110 days, covering three oceans and 11,000 nautical miles, before its captain intentionally sank his own vessel in a bid to destroy evidence. See the film here. Read more about Sea Shepherd here. Produced by Hannah Smith.

The Stephen Wolfram Podcast
History of Science & Technology Q&A (September 7, 2022)

The Stephen Wolfram Podcast

Play Episode Listen Later Jun 16, 2023 91:58


Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: Did any ancient unit systems use base 10, or did they all use more easily dividable bases like 12, 20, 60, etc.? - What is the history of design patterns in software engineering? How did people come to them? - Did you ever meet Niklaus Wirth, Dennis Ritchie, Brian Kernighan, Alan Kay and/or Paul Allen? - Have Julia sets and Fatou sets played a significant role in the development of computer programming languages? - Agreed. Computer programming languages should be object oriented for the language and structure to make sense instead of coming off as abstract and convoluted, and also so they are easier to work with and learn. - Did eighteenth-century engineers/craftsmen make use of the paradigm of Newtonian mechanics? - Why is it that Isaac Newton spent most of his time trying to prove theological ideas? - When will Moore's law expire? Apple announced four-nanometer chip technology, and there has to be a limit. - I wonder whether the future will be multicomputational, but to be honest, computers nowadays are more than powerful enough for the average user.

Chicago Jazz Audio Experience
Michael Lerich - How He Built His Wedding Music Business in Chicago

Chicago Jazz Audio Experience

Play Episode Listen Later May 26, 2023 32:53


Episode 001 of OPEN MIC with MIKE JEFFERS features an interview with Michael Lerich who, during the 1970s, '80s, '90s and early 2000s, was one of the busiest and most successful music contractors not only in Chicago but in the country. Starting out at age 22, Michael began working with bandleader Alan Kay, learning the business of contracting musical groups for private events and weddings. By 1981 he was contracting over 1,800 jobs a year with numerous groups throughout the city of Chicago and employing hundreds of musicians each week. Mike Jeffers talks with Michael about how he started his business, the important things he learned when working with clients, the keys to running a great event, and much more.

Anyway.FM 设计杂谈
№167: A Fifty-Year-Old Personal Computer and a Company More Than a Hundred Years Old

Anyway.FM 设计杂谈

Play Episode Listen Later Apr 19, 2023 51:15


After reading a nostalgic article about the Xerox Alto, we were left feeling rather wistful, so after a long gap we tried our hand at another "archaeology" episode. Let's go back to a California where the word "Silicon Valley" hadn't even been coined yet, retrace the footsteps of this machine that can fairly be called the ancestor of the GUI, and look at the great, long-storied company behind it: Xerox. Preparation was a bit rushed and some rather "muddy"-sounding audio was patched in afterwards, so please bear with us. Also, Leon insisted that we tell everyone in the show notes that he won an award, so please go congratulate him!
# Highlights
04:34 · The history of the Xerox Corporation
14:21 · The famous PARC lab
22:35 · The Alto, the first personal computer with a graphical user interface
34:22 · Xerox, Apple, and Nokia
42:27 · Leon says he has an ad to run!
# Reference links
Our earlier episode on General Magic 2:09
The main source for this episode, "50 Years Later, We're Still Living in the Xerox Alto's World" 3:14
Xerox Corporation 4:04
Correction: the inventor of the photocopy was Chester Carlson; JJ misread his name as "Caslon" 6:36
Xerography (electrostatic copying) 7:31
The Xerox copier TV commercial starring an ape 12:05
Screenshots of the Alto's UI 13:08
"Ivy schools": the Ivy League 15:21
Palo Alto, a city in California 16:54
Alan Kay, early PARC employee and computing pioneer 18:53
The Dynabook, the personal computer Alan Kay envisioned in 1968 19:53
The Xerox Alto, born in 1973 22:36
The Alto's successor, the Xerox Star workstation, born in 1981 28:13
You can watch the Alto's original ads to see what it actually looked like in use 28:20
The concept of object-oriented programming also comes from Alan Kay 28:42
Agilent, once an internal lab and long since spun out of HP 36:48
Symbian, the mobile operating system developed by Nokia 39:06
The Lotus Eletre battery-electric car 44:41
The entanglement between Youngman Auto and the "Lotus" brand 46:19
Hyper OS, the in-car system on the Eletre, won a 2023 iF Design Award 49:01

Prepping Academy
PrepperNet LIVE with Raymond Mhor - The Kilted Prepper

Prepping Academy

Play Episode Listen Later Apr 7, 2023 79:58


Welcome to the latest episode of the Prepping Academy Podcast. In this special edition, we're bringing you the PrepperNet LIVE event featuring a very special guest, Raymond Mhor, also known as The Kilted Prepper.During the show, Raymond and Garvin discuss various topics, including the current state of the dollar, how preppers can protect themselves from viruses, and the best ways for new preppers to start preparing for emergencies. Raymond also talks about his books and the various activities he is involved in to help the prepper community.If you're interested in prepping and want to hear from an experienced and knowledgeable prepper, this episode is for you. Raymond offers valuable insights and tips to help listeners of all experience levels prepare for any eventuality.Tune into the Prepping Academy Podcast and listen to this PrepperNet LIVE event with Raymond Mhor - The Kilted Prepper.Join PrepperNet.Net - https://www.preppernet.netPrepperNet is an organization of like-minded individuals who believe in personal responsibility, individual freedoms and preparing for disasters of all origins.PrepperNet Support the showPlease give us 5 Stars! www.preppingacademy.com Contact us: https://preppingacademy.com/contact/ www.preppernet.net Amazon Store: https://amzn.to/3lheTRTwww.forrestgarvin.com

Prepping Academy
Mountain Readiness Expo May 2023

Prepping Academy

Play Episode Play 30 sec Highlight Listen Later Apr 4, 2023 43:51 Transcription Available


In this exciting episode of the podcast, we have a special guest named "T," the founder and organizer of the Mountain Readiness Expo. The Expo is a three-day event from May 5th to 7th, 2023, in Harmony, North Carolina, and promises to be an event like no other.T shares some incredible lineup of guests they have for the Expo, including Angery American, Alan Kay, The Stoney Ridge Farmer, Dr. Bones & Nurse Amy, Hakim Isler, Forrest Garvin, and many more. The PrepperNet family will also be there with a huge gathering.Throughout the episode, T discusses the importance of being prepared for any eventuality and how the Mountain Readiness Expo is an excellent opportunity for people to learn valuable skills, gain knowledge, and network with like-minded individuals.If you're interested in prepping, survivalism, or want to learn more about being ready for anything, this is an episode you won't want to miss. Tune in now to hear about the Mountain Readiness Expo and what you can expect from this incredible event.Join PrepperNet.Net - https://www.preppernet.netPrepperNet is an organization of like-minded individuals who believe in personal responsibility, individual freedoms and preparing for disasters of all origins.PrepperNet Support the showPlease give us 5 Stars! www.preppingacademy.com Contact us: https://preppingacademy.com/contact/ www.preppernet.net Amazon Store: https://amzn.to/3lheTRTwww.forrestgarvin.com

The Deal Board
Buy or Sell a Business with Real Estate

The Deal Board

Play Episode Listen Later Mar 20, 2023 41:42


Andy and Jessica welcome you to another episode of The Deal Board Podcast. This week they are accompanied by two of Transworld's Real Estate partners to talk about how commercial real estate leads business brokerage and affects business sales now and in the future. Andy and Jessica are joined today by Phil Kubat, who dives deep into how commercial real estate is now in the Rocky Mountains area, and Alan Kay, who shares his expertise in commercial deals. Listing of the week: Jenny Abramson (Beverly Hills) is selling a highly reputable non-medical homecare agency; it is a franchise, so training will be provided by the franchisor and franchisee as well. The business has grown steadily for the last ten years with an impeccable reputation. The asking price is $2.1 million and the business made $2.8 million in revenue in 2022. Call Jenny at (424) 259-1248 or email her at jabramson@tworld.com Key takeaways:[3:57] Phil Kubat, an expert in commercial real estate, shares his perspective on the real estate market.[7:32] Phil talks about the market rates at a national level.[8:00] Phil shares about retail today.[9:18] Jessica speaks of the Industrial market.[11:06] Phil mentions important aspects of the current commercial marketplace.[11:54] What advice does Phil have for commercial real estate sellers and buyers?[13:53] Deal of the week: Scott Gorrill (Dallas Central Office) sold a deer corn manufacturer which started in 2015. The company makes $4 million in revenue and $500,000 in SDE. Books and records were super clean, which helped the buyer's due diligence.[17:35] Alan Kay, from Transworlds's commercial division, shares how things have changed in the commercial real estate sector.[18:52] Alan speaks of the effect of the high interest rates.[20:10] What are some of the “big players” currently doing in the marketplace?[21:46] What is happening in the office field?[24:54] Alan talks about the retail space.[27:55] Alan describes the beauty of SBA financing.[29:16] Alan and Andrew discuss how interest rates impact retirement plans.[30:04] Alan emphasizes the upside of investing in Real Estate.[31:14] To negotiate with landlords is a challenging task, and Alan explains why.[35:40] Andy affirms: “It is a great time to buy.”[40:05] Listing of the week: Jenny Abramson (Beverly Hills) is selling a highly reputable non-medical homecare agency, it is a franchise, so training will be provided by the franchisor and franchisee as well. The business has grown steadily for the last ten years with an impeccable reputation. The asking price is $2.1 million and the business made $2.8 million in revenue in 2022. Mentioned in this Episode:The Deal Board PodcastSubscribe to The Deal Board Podcast YouTube ChannelUnited Franchise GroupTransworld Business AdvisorsTransworld on LinkedinTransworld on FacebookCall us — (888) 719-9098Email us thedealboard@tworld.com Call Phil Kiubat at (303) 981-1936 or email him at phil@transworldcie.com.Email Scott Gorrill at sgorrill@tworld.com or call him at (940)230-1280.Email Alan Kay at 888ye@tworld.com.Call Jenny Abramson at (424) 259-1248 or email her at jabramson@tworld.com.

Podcasty Retro Nation
Wolfcast 77: The Road to the Dynabook: Mobile IT 1

Podcasty Retro Nation

Play Episode Listen Later Mar 20, 2023 48:35


A look into a philosophy that, unfortunately, IT did not end up following. In his Wolfcast, Michal Rybka focuses on the history of the Dynabook and the philosophy behind its concept. In the first part of this extended story he introduces the topic and presents a man named Alan Kay, the computer scientist associated with the Dynabook. The video Wolf mentions in the podcast can be found at this link: https://youtu.be/IwL3yXdupv0 Listen to the other Wolfcasts; a historical overview can be found at retronation.cz.

The Technium
APL: You need a special keyboard to program in this language (S04E06)

The Technium

Play Episode Listen Later Feb 16, 2023 102:29


APL is an array-based programming language developed by Kenneth Iverson in the 1960s. Its central data type is the multi-dimensional array, which makes it very useful for workloads involving a lot of matrix math. APL is written predominantly in symbols, and its consistent rules for composition and execution let it work as a notation and a tool for thought. Links/Resources:* https://mathspp.com/blog/why-apl-is-a-language-worth-knowing* https://www.quora.com/What-made-APL-programming-so-revolutionary?share=1* https://tryapl.org/* A Personal View of APL* Notation as a Tool for Thought* Heaviside Operator Calculus* Conway's Game of Life in APL* Maxwell's Equations: From 20 to 4* Alan Kay's answer to "What made APL so revolutionary?"* Testimonies    * https://news.ycombinator.com/item?id=27463149    * https://news.ycombinator.com/item?id=27464005    * https://news.ycombinator.com/item?id=27465512* https://github.com/razetime/ngn-k-tutorial/blob/main/01-intro.md* 00 - Objectives and Prerequisites | "Learn APL with Neural Networks" https://dl.acm.org/doi/pdf/10.1145/3315454.3329960 Chapters:[00:00:00] Intros[00:03:36] What is APL?[00:05:42] Yes, it uses weird symbols[00:06:51] A language should change your thinking[00:13:08] Notation as a tool of thought[00:20:33] Connections to functional programming[00:28:23] What it feels like to program in APL[00:33:10] APL and IBM = big bucks[00:35:20] The echoes of APL in modern data science tools[00:43:52] APL for big data pipelines[00:48:56] APL for Machine learning[00:54:59] APL for JSON manipulation[01:03:18] APL as a spreadsheet language[01:16:11] Superhuman programming: APL and LLMs[01:26:59] Making APL more explainable[01:38:18] Outro===== About "The Technium" =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
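To make the episode's array-first thinking concrete without an APL keyboard, here is a rough NumPy sketch (NumPy being one of the "echoes of APL in modern data science tools" the hosts discuss) of Conway's Game of Life as whole-array operations, loosely in the spirit of the APL one-liner linked above. It is an illustration, not code from the episode.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life generation as whole-array operations (no explicit loops)."""
    # Sum the eight neighbours by rolling the grid in every direction (toroidal edges).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours, or 2 neighbours and is already alive.
    return (neighbours == 3) | ((neighbours == 2) & (grid == 1))

# A glider on an 8x8 wrap-around board.
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = life_step(board).astype(int)
print(board)
```

The point the episode makes carries over: once the primitive is "operate on the whole array," the program reads as a couple of declarative rules rather than nested loops.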

IT Visionaries
Success Now Mini Series: Rocket and Salesforce

IT Visionaries

Play Episode Listen Later Jan 24, 2023 42:00


Researcher Alan Kay is widely credited for saying, “The best way to predict the future is to invent it.” Our guest Brian Woodring, CIO of Rocket Companies concurs, as he is part of a company that is currently trying to define the future of mortgages. Working with Salesforce, his organization is continually trying (and learning from) innovative ways to reduce costs, increase efficiency, and generate ROI. Hear how Brian and his team may not have a “master plan” but still succeed through discovery, feedback, and a consistent focus on customers.Tune in to learn:Why there's no “master plan” for growth in IT businesses (5:00)How did Rocket measure the progress of its digital transformation (14:45)The differences between an OKR and a key result (25:14)IT Visionaries is brought to you by Salesforce. With Salesforce's low-code app dev tools, you can be more efficient, more productive and save money by reducing development time by up to 90%. Get Salesforce's Low-Code Playbook and increase time to value for your team and your customers. Download the free playbook today.Mission.org is a media studio producing content for world-class clients. Learn more at mission.org.

Murder With My Husband
141. Lynda Mann and Dawn Ashworth - The Pathway Murders

Murder With My Husband

Play Episode Listen Later Dec 5, 2022 54:45


On this episode of MWMH, Payton and Garrett discuss the first case to ever use DNA profiling. This was the murder of both Lynda Mann and Dawn Ashworth. Live Show Tickets: https://www.moment.co/murderwithmyhusband Links: https://linktr.ee/murderwithmyhusband Case Sources: Medical Detectives, “The Footpath Murders,” broadcast October 23, 1996 on TLC Wikipedia.org, Colin Pitchfork The Blooding (1989, Random House Publishing Group), by Joseph Wambaugh Sky News: news.sky.com, “Colin Pitchfork: Double child killer arrested and recalled to prison after 'breaching licence conditions',” November 19, 2021 bbc.com, "Colin Pitchfork recalled to jail after approaching young women,” November 22, 2021  Newspaper.com sources: Tony Donnelly, Nottingham Post, "Hunt is on for two killers," 23 November 1983, archived (https://www.newspapers.com/image/760618392); citing print edition, p.1 Leicester Mercury, "Lynda ‘might have screamed'," 1 December 1983, archived (https://www.newspapers.com/image/860734136); citing print edition, p.1 Leicester Mercury, "Lynda: Youth seen running," 2 December 1983, archived (https://www.newspapers.com/image/860734331); citing print edition, p.27 Leicester Mercury, "Fresh plea over Lynda murder," 7 December 1983, archived (https://www.newspapers.com/image/860735339); citing print edition, p.13 Leicester Mercury, "Murder hunt police seek 'crying youth'," 8 December 1983, archived (https://www.newspapers.com/image/860735594); citing print edition, p.21 Leicester Mercury, "Scarf used to kill Lynda, inquest told," 26 January 1984, archived (https://www.newspapers.com/image/860742794); citing print edition, p.19 Leicester Mercury, "Lynda: Police issue picture," 1 February 1984, archived (https://www.newspapers.com/image/860747472); citing print edition, p.1 Leicester Mercury, "Lynda in walk to death on Black Pad," 3 February 1984, archived (https://www.newspapers.com/image/860748945); citing print edition, p.30 Leicester Mercury, "Lynda seen with punk in city," 17 May 1984, archived (https://www.newspapers.com/image/860751836); citing print edition, p.23 Leicester Mercury, "Murder 'started road into debt'," 9 July 1984, archived (https://www.newspapers.com/image/860879334); citing print edition, p.6 Associated Press, The Charlotte Observer, "DNA 'Fingerprinting' Process Nearly Foolproof, Scientists Say," 7 December 1985, archived (https://www.newspapers.com/image/624686909); citing print edition, p.17A Leicester Mercury, "Murder of Lynda: Police issue new appeal for clues," 26 April 1986, archived (https://www.newspapers.com/image/860868746); citing print edition, p.11 Leicester Mercury, "Dawn vanishes after trip: Huge hunt for missing schoolgirl," 1 August 1986, archived (https://www.newspapers.com/image/860906178); citing print edition, pp.1, 15 Leicester Mercury, "Three years on — a killer still at large," 2 August 1986, archived (https://www.newspapers.com/image/860907003); citing print edition, p.9 Leicester Mercury, "Dawn's killer 'has struck before'," 3 August 1986, archived (https://www.newspapers.com/image/846785964); citing print edition, p.1 John Meehan and Alan Kay, Coventry Evening Telegraph, "Tell on Strangler: Don't shield girls double murder, pleads mum," 4 August 1986, archived (https://www.newspapers.com/image/819772229); citing print edition, pp.1-2 Leicester Mercury, "Tiny scratch 'could be a vital clue'—Dawn: Hunt for Double Killer," 4 August 1986, archived (https://www.newspapers.com/image/860908353); citing print edition, pp. 
1, 5 Leicester Mercury, "Dawn: Man seen fleeing from scene," 5 August 1986, archived (https://www.newspapers.com/image/860908553); citing print edition, p.1 Leicester Mercury, "Sadistic double killer is jailed for life," 23 January 1988, archived (https://www.newspapers.com/image/861261211); citing print edition, pp.1, 10-11 Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Array Cast
The Many Languages of Romilly Cocking

The Array Cast

Play Episode Listen Later Aug 20, 2022 82:47


Array Cast - August 19, 2022 Show NotesMany thanks to Marshall Lochbaum, Rodrigo Girão Serrão, Bob Therriault, Conor Hoekstra, Adám Brudzewsky and Romilly Cocking for gathering these links:[01] 00:00:03 BYTE magazine https://en.wikipedia.org/wiki/Byte_(magazine)[02] 00:01:02 Org Mode https://orgmode.org/[03] 00:02:58 Toronto Meet-up https://www.meetup.com/en-AU/programming-languages-toronto-meetup/events/287695788/ New York Meet-up https://www.meetup.com/programming-languages-toronto-meetup/events/287729348/[04] 00:04:19 Morten Kromberg episode https://www.arraycast.com/episodes/episode21-morten-kromberg[05] 00:05:01 Romilly's video 'An Excellent Return' https://dyalog.tv/Dyalog08/?v=thr-7QfQWJw[06] 00:06:12 Ferranti Pegasus computer https://en.wikipedia.org/wiki/Ferranti_Pegasus[07] 00:09:09 System 360 in APL http://keiapl.org/archive/APL360_UsersMan_Aug1968.pdf[08] 00:16:50 Mind Map https://en.wikipedia.org/wiki/Mind_map[09] 00:17:00 Dyalog https://www.dyalog.com/[10] 00:18:20 Digitalk https://winworldpc.com/product/digital-smalltalk/5x[11] 00:18:30 Smalltalk https://en.wikipedia.org/wiki/Smalltalk[12] 00:21:17 Raspberry Pi https://www.raspberrypi.org/[13] 00:22:10 Robotics on Dyalog website https://www.dyalog.com/blog/2014/08/dancing-with-the-bots/[14] 00:22:45 Neural Network https://en.wikipedia.org/wiki/Neural_network David Marr https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)[15] 00:23:21 Jetson Nano https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/[16] 00:23:38 Spiking neural networks https://en.wikipedia.org/wiki/Spiking_neural_network[17] 00:24:02 JAX https://jax.readthedocs.io/en/latest/notebooks/quickstart.html[18] 00:27:00 Numpy https://numpy.org/[19] 00:28:21 Nested arrays https://aplwiki.com/wiki/Nested_array[20] 00:29:07 flip Numpy https://numpy.org/doc/stable/reference/generated/numpy.flip.html flipud https://numpy.org/doc/stable/reference/generated/numpy.flipud.html#numpy.flipud[21] 00:31:07 Pegasus Autocode http://blog.rareschool.com/2014/09/pegasus-autocode-revisited.html[22] 00:32:05 Atlas computer 1966 https://en.wikipedia.org/wiki/Atlas_(computer)[23] 00:34:30 Raspberry Pi pico https://www.raspberrypi.com/products/raspberry-pi-pico/[24] 00:36:33 Booker and Morris https://dl.acm.org/doi/pdf/10.1145/364520.364521[25] 00:38:12 Romilly's Blog Markdown http://blog.rareschool.com/2022/05/apl-and-python-go-head-to-head.html[26] 00:41:30 Languages that are built from concatenation https://en.wikipedia.org/wiki/Agglutination[27] 00:44:30 Alan Kay https://en.wikipedia.org/wiki/Alan_Kay[28] 00:47:12 Clojure https://en.wikipedia.org/wiki/Alan_Kay Forth https://en.wikipedia.org/wiki/Forth_(programming_language) Haskell https://www.haskell.org/[29] 00:50:00 Cosy http://www.cosy.com/language/[30] 00:51:38 Py'n'APL https://dyalog.tv/Dyalog21/?v=gOUFXBUMv_A[31] 01:00:12 Logic Analyzer https://en.wikipedia.org/wiki/Logic_analyzer[32] 01:02:15 Back propagation in neural networks https://en.wikipedia.org/wiki/Backpropagation[33] 01:07:38 Stefan Kruger 'Learn APL' https://xpqz.github.io/learnapl/intro.html[34] 01:08:10 Rodrigo Girão Serrão videos https://www.youtube.com/channel/UCd_24S_cYacw6zrvws43AWg[35] 01:08:27 João Araújo episode https://www.arraycast.com/episodes/episode33-joao-araujo[36] 01:08:59 Rodrigo Girão Serrão Neural networks https://www.youtube.com/playlist?list=PLgTqamKi1MS3p-O0QAgjv5vt4NY5OgpiM[37] 01:10:55 Functional Geekery podcast https://www.functionalgeekery.com/[38] 01:11:36 Conor's Security talk 
https://www.youtube.com/watch?v=ajGX7odA87k[39] 01:12:38 SICP https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs[40] 01:12:55 Alan McKean Rebecca Wirfs-Brock "Object Design" https://books.google.ca/books?id=vUF72vN5MY8C&printsec=copyright&redir_esc=y#v=onepage&q&f=false[41] 01:13:35 Growing Object Oriented Guided by Tests http://www.growing-object-oriented-software.com/[42] 01:15:01 Design Patterns vs Anti pattern in APL https://www.youtube.com/watch?v=v7Mt0GYHU9A[43] 01:18:25 Pop2 https://hopl.info/showlanguage.prx?exp=298&language=POP-2 Pop2 on pdf-11 https://www.cs.bham.ac.uk/research/projects/poplog/retrieved/adrian-howard-pop11.html[44] 01:18:52 Donald Michie https://en.wikipedia.org/wiki/Donald_Michie[45] 01:21:30 Menace robot http://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/[46] 01:22:05 Menace in APL https://romilly.github.io/o-x-o/an-introduction.html
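Item [20] in the notes above points at NumPy's flip and flipud, which came up in the episode's APL-versus-NumPy comparison. The difference between the two is quicker to see in code than in the docs, so here is a minimal sketch of my own (not from the show notes) in Python/NumPy:

```python
# flip vs. flipud, as linked at item [20] in the show notes above.
import numpy as np

m = np.arange(6).reshape(2, 3)   # [[0 1 2]
                                 #  [3 4 5]]

print(np.flipud(m))    # reverse along axis 0 (rows):    [[3 4 5], [0 1 2]]
print(np.flip(m, 1))   # reverse along axis 1 (columns): [[2 1 0], [5 4 3]]
print(np.flip(m))      # no axis given: reverse along every axis
```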

The Array Cast
João Araújo and Tools of Thought

The Array Cast

Play Episode Listen Later Aug 6, 2022 71:43


Array Cast - August 5, 2022 Show NotesMany thanks to Bob Therriault, João Araújo and Rodrigo Girão Serrão for gathering these links:[01] 00:01:40 J wiki features https://www.youtube.com/watch?v=dWqixYyb52Q[02] 00:02:31 J promo video https://www.youtube.com/watch?v=vxibe2QOA0s[03] 00:03:00 British APL Association https://britishaplassociation.org/ Vector https://vector.org.uk/[04] 00:03:27 Conor's Twin Algorithms presentation https://www.youtube.com/watch?v=NiferfBvN3s[05] 00:08:13 Numpy https://numpy.org/ JAX https://jax.readthedocs.io/en/latest/notebooks/quickstart.html Julia https://julialang.org/[06] 00:08:49 João's array Google search engine https://cse.google.com/cse?cx=e5ff9c06c246f4ca5[07] 00:09:00 João's Iverson mirror site https://joaogui1.github.io/keiapl/ Original link http://keiapl.org/[08] 00:11:55 João's website https://joaogui1.netlify.app/[09] 00:13:10 BQN https://mlochbaum.github.io/BQN/ Dyalog APL https://www.dyalog.com/ J https://www.jsoftware.com/indexno.html[10] 00:13:50 Vannevar Bush https://en.wikipedia.org/wiki/Vannevar_Bush Alan Kay https://en.wikipedia.org/wiki/Alan_Kay[11] 00:14:13 Tool for Thought Rocks https://lu.ma/toolsforthoughtrockshttps://www.youtube.com/c/ToolsforThoughtRocks?app=desktop[12] 00:14:40 Obsidian discord https://discord.com/invite/veuWUTm[13] 00:15:10 Roam https://roamresearch.com/ Obsidian https://obsidian.md/ Logseq https://logseq.com/[14] 00:17:01 Anki https://logseq.com/ Muse https://museapp.com/[15] 00:18:25 Notion https://www.notion.so/ Remnote https://www.remnote.com/[16] 00:19:42 Marshall's BQN Markdown https://github.com/mlochbaum/BQN/blob/master/md.bqn[17] 00:22:06 Perlis https://en.wikipedia.org/wiki/Alan_Perlis[18] 00:22:49 Array Thinking https://www.arraycast.com/episodes/episode-00-why-i-like-array-languages[19] 00:24:50 Folds https://en.wikipedia.org/wiki/Fold_(higher-order_function)[20] 00:25:51 Rank concept https://aplwiki.com/wiki/Function_rank[22] 00:26:57 Short Term Memory https://www.simplypsychology.org/short-term-memory.html[23] 00:27:42 APL glyphs https://aplwiki.com/wiki/Typing_glyphs#By_method[24] 00:28:59 Stefan Kruger 'Learn APL' https://xpqz.github.io/learnapl/intro.html Rodrigo Girão Serrão 'Mastering Dyalog APL' https://mastering.dyalog.com/README.html[25] 00:29:35 Quarto https://quarto.org/[26] 00:32:33 Conor's original solution {≢∪⍵~0} Y[27] 00:32:40 Without APL ~ Without J -.[28] 00:32:50 BQN Without ¬∘∊/⊣[29] 00:33:55 Set Intersection APL X{⍺⌿⍨(≢⍵)≥⍵⍳⍺}Y Set Intersection J x (e. # [) y Set Union APL X{⍺⍪⍵⌿⍨(≢⍺)
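The expressions quoted at the end of these notes are easier to parse with a translation in hand. Below is a rough Python rendering (my own sketch, not from the episode) of two of them: Conor's original solution {≢∪⍵~0}, which tallies the distinct non-zero values, and the APL set-intersection idiom X{⍺⌿⍨(≢⍵)≥⍵⍳⍺}Y, which keeps the items of X that also occur in Y. The function names are mine, and the translations mirror the intent rather than any one dialect's exact semantics.

```python
# Rough Python analogues of the array-language one-liners quoted above.

def count_distinct_nonzero(y):
    """Conor's {≢∪⍵~0}: drop zeros, take the unique values, tally them."""
    return len({v for v in y if v != 0})

def intersection_keep_left(x, y):
    """APL intersection X{⍺⌿⍨(≢⍵)≥⍵⍳⍺}Y: keep items of x that also appear in y,
    preserving x's order (and duplicates)."""
    lookup = set(y)
    return [v for v in x if v in lookup]

if __name__ == "__main__":
    print(count_distinct_nonzero([0, 3, 3, 5, 0, 7]))   # 3
    print(intersection_keep_left("ABCDE", "CEG"))       # ['C', 'E']
```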

Future of Coding
Personal Dynamic Media by Alan Kay & Adele Goldberg

Future of Coding

Play Episode Listen Later Jul 18, 2022 165:03


There once was a podcast episode. It was about a very special kind of book: the Dynabook. The podcast didn't know whether to be silly, or serious. Jimmy offered some thoughtful reflections, and Ivan stung him on the nose. Sponsored by Replit.com, who want to give you some reasons not to join Replit, and Theatre.js, who want to make beautiful tools for animating the web with you. futureofcoding.org/episodes/57 See omnystudio.com/listener for privacy information.

The Array Cast
Joel Kaplan

The Array Cast

Play Episode Listen Later May 14, 2022 90:54


Array Cast - May 13, 2022 Show Notes[01] 00:01:25 https://code.jsoftware.com/wiki/System/Forums[02] 00:02:10 https://www.arraycast.com/episodes/episode26-stevan-apter[03] 00:02:28 Joel Kaplan video https://www.youtube.com/watch?v=Ni0Kj3Xjk1k&t=1s[04] 00:03:10 https://www.morganstanley.com/[05] 00:03:15 https://aplwiki.com/wiki/Arthur_Whitney[06] 00:03:25 https://www.1010data.com/[07] 00:10:20 APL-DI https://dl.acm.org/doi/10.1145/800136.804492[08] 00:12:10 https://en.wikipedia.org/wiki/Fred_Brooks[09] 00:12:36 https://en.wikipedia.org/wiki/David_E._Shaw[10] 00:15:25 https://en.wikipedia.org/wiki/Jeff_Bezos[11] 00:17:00 https://en.wikipedia.org/wiki/Digital_Equipment_Corporation[12] 00:18:15 https://en.wikipedia.org/wiki/IBM_5100[13] 00:18:55 https://en.wikipedia.org/wiki/Bill_Gates[14] 00:18:55 Gates APL Interpreter https://americanhistory.si.edu/comphist/gates.htm#tc30[15] 00:23:11 https://aplwiki.com/wiki/Bob_Bernecky[16] 00:26:15 https://mathworld.wolfram.com/NearestNeighborProblem.html[17] 00:29:26 Generalisation of the Axis operator apl.wiki/Rank (operator)[18] 00:30:50 https://en.wikipedia.org/wiki/QWERTY[19] 00:31:37 https://aplwiki.com/wiki/A[20] 00:34:00 https://aplwiki.com/wiki/K[21] 00:34:17 APL machine https://aplwiki.com/wiki/APL_Machine[22] 00:35:07 Analogic https://www.analogic.com/?locale=en[23] 00:37:07 Aaron Hsu video https://www.youtube.com/watch?v=2FMBf6A2eAA[24] 00:41:19 http://www.nsl.com/[25] 00:43:45 https://en.wikipedia.org/wiki/Muhammad_ibn_Musa_al-Khwarizmi[26] 00:43:52 https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz#Symbolic_thought[27] 00:52:30 https://www.ubs.com/ca/en.html[28] 00:54:20 https://en.wikipedia.org/wiki/Pete_Muller_(businessman_and_singer-songwriter)[29] 00:56:30 https://www.dyalog.com/[30] 00:57:10 https://shakti.com/[31] 01:00:35 https://en.wikipedia.org/wiki/Steve_Jobs[32] 01:01:30 https://www.jsoftware.com/#/README[33] 01:04:09 https://aplwiki.com/wiki/Ken_Iverson[34] 01:08:30 Steven's blog post https://www.5jt.com/all-that-jazz-the-librarian-s-song[35] 01:12:18 https://en.wikipedia.org/wiki/Alan_Perlis[36] 01:13:33 https://en.wikipedia.org/wiki/Benoit_Mandelbrot[37] 01:14:15 Society of Quantitative Analysts https://www.sqa-us.org/[38] 01:14:47 https://en.wikipedia.org/wiki/Shmuel_Winograd https://encyclopediaofmath.org/wiki/Winograd_Fourier_transform_algorithm[39] 01:14:41 Yorktown Heights: https://en.wikipedia.org/wiki/Thomas_J._Watson_Research_Center[40] 01:14:52 John Cocke https://www.youtube.com/watch?v=eYwd30iWVvw https://en.wikipedia.org/wiki/John_Cocke_(computer_scientist)[41] 01:15:25 https://en.wikipedia.org/wiki/Leon_Cooper[42] 01:16:19 https://en.wikipedia.org/wiki/Philip_Wolfe_(mathematician) https://en.wikipedia.org/wiki/Quadratic_programming https://pages.cs.wisc.edu/~brecht/cs838docs/wolfe-qp.pdf[43] 01:16:41 https://en.wikipedia.org/wiki/Sharpe_ratio[44] 01:18:54 https://en.wikipedia.org/wiki/Alan_Kay[45] 01:18:58 https://en.wikipedia.org/wiki/Alexander_Stepanov[46] 01:20:09 https://en.wikipedia.org/wiki/Rob_Pike[47] 01:22:05 https://www.reddit.com/r/apljk/[48] 01:22:30 https://en.wikipedia.org/wiki/Lisp_(programming_language)[49] 01:24:43 Conor's videos https://www.youtube.com/channel/UC1kBxkk2bcG78YBX7LMl9pQ[50] 01:25:13 Rodrigo's videos https://www.youtube.com/channel/UCd_24S_cYacw6zrvws43AWg[51] 001:25:01 "Easy to Learn - Worth Mastering" https://dyalog.tv/APLSeeds22/?v=o-0xk96_BNw[52] 01:25:55 https://aplwiki.com/wiki/Outer_Product[53] 01:26:55 BQN https://mlochbaum.github.io/BQN/[54] 01:27:25 
https://aplwiki.com/wiki/Inner_Product[55] 01:29:55 Ripple shuffle expression https://tryapl.org/?clear&q=%7B%E2%8D%B5%5B%E2%8D%8B%E2%8D%922%7C%E2%8D%B3%E2%89%A2%E2%8D%B5%5D%7D%27ABCDEabcde%27&run

The Technium
Smalltalk (The Technium Podcast S02 E05)

The Technium

Play Episode Listen Later Feb 10, 2022 65:23


Smalltalk was one of the first object-oriented programming languages and a software authoring environment that was highly influential in the history of computing. We discuss its designer Alan Kay, his vision of personal computing, and the lasting influence of Smalltalk despite its limited commercial adoption.Links/Resources:Actor Model - https://www.brianstorti.com/the-actor-model/https://dspace.mit.edu/handle/1721.1/6272Smalltalk Demo - https://www.youtube.com/watch?v=uknEhXyZgsgSmalltalk Build your own Debugger - https://youtu.be/NqKyHEJe9_w?t=355Early History of Smalltalk - http://worrydream.com/EarlyHistoryOfSmalltalk/“A remarkable number of artists, scientists, philosophers are quite dull outside of their specialty (and one suspects within it as well). The first siren's song we need to be wary of is the one that promises a connection between an interesting pursuit and interesting thoughts. The music is not in the piano, and it is possible to graduate Juilliard without finding or feeling it.”Why was Smalltalk not a commercial success? http://www.wirfs-brock.com/allen/posts/914https://gbracha.blogspot.com/2020/05/bits-of-history-words-of-advice.htmlLively and Caffeine https://lively-kernel.org/Chapters:0:00 Intros1:57 The Retro Future2:34 Origin and Motivation of Smalltalk7:20 Object Orientation is about message passing11:51 Meta-circular inspiration14:31 Smalltalk is an entire environment18:59 Designed with the GUI in mind24:35 The Music isn't in the Piano27:21 Taking a bet on the future31:42 Surprising things about Smalltalk34:27 Learning to Program By Reading “Literature”39:56 Did Smalltalk Achieve Its Goals?44:43 The Failure of Enterprise Smalltalk52:30 Even Good Ideas Have to Survive the Marketplace55:23 Big Ideas Meet Reality===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
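Kay has long stressed that the big idea in Smalltalk was messaging rather than classes, and the episode's chapter "Object Orientation is about message passing" makes the same point. As a loose illustration, here is a short Python sketch of the idea (not Smalltalk, and the names are my own): an object dispatches on a message name at runtime and falls back to a catch-all handler, roughly in the spirit of Smalltalk's doesNotUnderstand: hook.

```python
# Minimal sketch (Python, not Smalltalk): objects respond to messages at runtime.
class MessagingObject:
    def send(self, message, *args):
        # Look up a handler for the message name; fall back if none exists,
        # loosely analogous to Smalltalk's doesNotUnderstand: hook.
        handler = getattr(self, "msg_" + message, None)
        if handler is None:
            return self.message_not_understood(message, args)
        return handler(*args)

    def message_not_understood(self, message, args):
        return f"{type(self).__name__} does not understand '{message}'"


class BankAccount(MessagingObject):
    def __init__(self, balance=0):
        self.balance = balance

    def msg_deposit(self, amount):
        self.balance += amount
        return self.balance

    def msg_balance(self):
        return self.balance


if __name__ == "__main__":
    account = BankAccount()
    print(account.send("deposit", 100))   # 100
    print(account.send("balance"))        # 100
    print(account.send("withdraw", 50))   # falls through to message_not_understood
```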

JUXT Cast
S2E16 - re:Clojure Interviews 16 - Eric Normand

JUXT Cast

Play Episode Listen Later Nov 29, 2021 31:43


Episode Notes It was a great pleasure to speak again with our friend @ericnormand about Alan Kay, Lisp, Mr Wizard, Logo and much more! Enjoy! Mr Wizard TV series The Logo programming language Alan Kay The TCP/IP stack Paul Graham Essays The 50th anniversary of Lisp conference The Little Schemer book by Dan Friedman The frequencies Clojure function Grokking Simplicity book

Publicly Sited
Media, Technology & Culture 05 (2nd Edition): Computational Technologies

Publicly Sited

Play Episode Listen Later Oct 27, 2021 28:49


Most people know very well that social and cultural transformations are complex. And yet, we often seem prepared to think of individual media as bringing change. We believe that there was a situation before this or that media, and then another situation after. Sometimes there are worries about this subsequent situation; or nostalgia for how things were before. In other instances, people wager hope that novel media might bring positive or empowering changes. When media technologies are seen as transformative, they have often been described as ‘new media'. The term ‘new media' began to acquire some currency in the 1960s, in the age of television. But its use exploded in the late 1990s and early 2000s. Why? Many answers were put forward: the internet, interactivity, multimedia, mobile devices, user-generated content. But for some, the new media of this moment came out of a longer-term and more general development: the rise of the computer as a media technology. Not just a new addition to all the other technologies. Rather, an emergent backbone for virtually all mediated communication and experience. In this episode, we look at how this argument is exemplified by the work of digital media theorist Lev Manovich, who suggests that what makes new media ‘new' is its creation, storage, distribution and display via the language (i.e. software code) and hardware of digital computation. On a basic level, computational media all share a basic metabolism of binary code: ultimately describable with nothing more than 1s and 0s. The question, however, is broader than this: beyond previous media formats becoming absorbed into the medium of the computer, are we seeing the rise of a specifically ‘computational' culture? Thinkers Discussed: Lev Manovich (The Language of New Media / Software Takes Command); Mark B.N. Hansen (New Philosophy for New Media); Alexander Galloway (The Interface Effect); Jay Bolter and Richard Grusin (Remediation: Understanding New Media) Gabriele Balbi and Paolo Magaudda (A History of Digital Media: An Intermedia and Global Perspective); Lewis Mumford (Authoritarian and Democratic Technics); Fred Turner (From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism); Jennifer Light (When Computers Were Women); Mar Hicks (Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing); Alan Kay and Adele Goldberg (Personal Dynamic Media); David Berry (Against Remediation).

Perfectly Boring
Innovating in Hardware, Software, and the Public Cloud with Steve Tuck, CEO/Co-Founder of Oxide Computer

Perfectly Boring

Play Episode Listen Later Sep 27, 2021 53:14


In this episode, we cover:00:00:00 - Reflections on the Episode/Introduction 00:03:06 - Steve's Bio00:07:30 - The 5 W's of Servers and their Future00:14:00 - Hardware and Software00:21:00 - Oxide Computer 00:30:00 - Investing in Oxide and the Public Cloud00:36:20 - Oxide's Offerings to Customers 00:43:30 - Continuous Improvement00:49:00 - Oxide's Future and OutroLinks: Oxide Computer: https://oxide.computer Perfectlyboring.com: https://perfectlyboring.com Transcript Jason: Welcome to the Perfectly Boring podcast, a show where we talk to the people transforming the world's most boring industries. I'm Jason Black, general partner at RRE Ventures.Will: And I'm Will Coffield, general partner at Riot Ventures.Jason: Today's boring topic of the day: servers.Will: Today, we've got Steve Tuck, the co-founder and CEO of Oxide Computer, on the podcast. Oxide is on a mission to fundamentally transform the private cloud and on-premise data center so that companies that are not Google, or Microsoft, or Amazon can have hyper-scalable, ultra-performant infrastructure at their beck and call. I've been an investor in the company for the last two or three years at this point, but Jason, this is your first time hearing the story from Steve and really going deep on Oxide's mission and place in the market. Curious what your initial thoughts are.Jason: At first glance, Oxide feels like a faster horse approach to an industry buying cars left and right. But the shift in the cloud will add $140 billion in new spend every year over the next five years. But one of the big things that was really interesting in the conversation was that it's actually the overarching pie that's expanding, not just demand for cloud but at the same rate, a demand for on-premise infrastructure that's largely been stagnant over the years. One of the interesting pivot points was when hardware and software were integrated back in the mainframe era, and then virtual machines kind of divorced hardware and software at the server level. Opening up the opportunity for a public cloud that reunified those two things where your software and hardware ran together, but the on-premises never really recaptured that software layer and have historically struggled to innovate on that domain.Will: Yeah, it's an interesting inflection point for the enterprise, and for basically any company that is operating digitally at this point, is that you're stuck between a rock and a hard place. You can scale infinitely on the public cloud but you make certain sacrifices from a performance, security, and certainly from an expense standpoint, or you can go to what is available commercially right now and you can cobble together a Frankenstein-esque solution from a bunch of legacy providers like HP, and Dell, and SolarWinds, and VMware into a MacGyvered together on-premise data center that is difficult to operate for companies where infrastructure isn't, and they don't want it to be, their core competency. Oxide is looking to step into that void and provide an infinitely scalable, ultra-high-performance, plug-and-play rack-scale server for everybody to be able to own and operate without needing to rent it from Google, or AWS, or Microsoft.Jason: Well, it doesn't sound very fun, and it definitely sounds [laugh] very boring. So, before we go too deep, let's jump into the interview with Steve.Will: Steve Tuck, founder and CEO of Oxide Computer. Thank you for joining us today.Steve: Yeah, thanks for having me.
Looking forward to it.Will: And I think maybe a great way to kick things off here for listeners would be to give folks a baseline of your background, sort of your bio, leading up to founding Oxide.Steve: Sure. Born and raised in the Bay Area. Grew up in a family business that was and has been focused on heating and air conditioning over the last 100-plus years, Atlas. And went to school and then straight out of school, went into the computer space. Joined Dell computer company in 1999, which was a pretty fun and exciting time at Dell.I think that Dell had just crossed over to being the number one PC manufacturer in the US. I think number two worldwide at Compaq. Really just got to take in and appreciate the direct approach that Dell had taken in a market to stand apart, working directly with customers not pushing everything to the channel, which was customary for a lot of the PC vendors at the time. And while I was there, you had the emergence of—in the enterprise—hardware virtualization company called VMware that at the time, had a product that allowed one to drive a lot more density on their servers by way of virtualizing the hardware that people were running. And watching that become much more pervasive, and working with companies as they began to shift from single system, single app to virtualized environments.And then at the tail end, just watching large tech companies emerge and demand a lot different style computers than those that we had been customarily making at Dell. And kind of fascinated with just what these companies like Facebook, and Google, and Amazon, and others were doing to reimagine what systems needed to look like in their hyperscale environments. One of the companies that was in the tech space, Joyent, a cloud computing company, is where I went next. Was really drawn in just to velocity and the innovation that was taking place with these companies that were providing abstractions on top of hardware to make it much easier for customers to get access to the compute, and the storage, and the networking that they needed to build and deploy software. So, spent—after ten years at Dell, I was at Joyent for ten years. That is where I met my future co-founders, Bryan Cantrill who was at Joyent, and then also Jess Frazelle who we knew working closely while she was at Docker and other stops.But spent ten years as a public cloud infrastructure operator, and we built that service out to support workloads that ran the gamut from small game developers up to very large enterprises, and it was really interesting to learn about and appreciate what this infrastructure utility business looked like in public cloud. And that was also kind of where I got my first realization of just how hard it was to run large fleets of the systems that I had been responsible for providing back at Dell for ten years. We were obviously a large customer of Dell, and Supermicro, and a number of switch manufacturers. It was eye-opening just how much was lacking in the remaining software to bind together hundreds or thousands of these machines.A lot of the operational tooling that I wished had been there and how much we were living at spreadsheets to manage and organize and deploy this infrastructure. While there, also got to kind of see firsthand what happened as customers got really, really big in the public cloud. And one of those was Samsung, who was a very large AWS customer, got so large that they needed to figure out what their path on-premise would look like. 
And after going through the landscape of all the legacy enterprise solutions, deemed that they had to go buy a cloud company to complete that journey. And they bought Joyent. Spent three years operating the Samsung cloud, and then that brings us to two years ago, when Jess, Bryan, and I started Oxide Computer.Will: I think maybe for the benefit of our listeners, it would be interesting to have you define—and what we're talking about today is the server industry—and to maybe take a step back and in your own words, define what a server is. And then it would be really interesting to jump into a high-level history of the server up until today, and maybe within that, where the emergence of the public cloud came from.Steve: You know, you'll probably get different definitions of what a server is depending on who you ask, but at the highest level, a server differs from a typical PC that you would have in your home in a couple of ways, and more about what it is being asked to do that drives the requirements of what one would deem a server. But if you think about a basic PC that you're running in your home, a laptop, a desktop, a server has a lot of the same components: they have CPUs, and DRAM memory that is volatile storage, and disks that are storing things in a persistent way when you shut off your computer that actually store and retain the data, and a network card so that you can connect to either other machines or to the internet. But where servers start to take on a little bit different shape and a little bit different set of responsibilities is the workloads that they're supporting. Servers, the expectations are that they are going to be running 24/7 in a highly reliable and highly available manner. And so there are technologies that have gone into servers, like ECC memory to ensure that you do not have memory faults that lose data, more robust components internally, ways to manage these things remotely, and ways to connect these to other servers, other computers.Servers, when running well, are things you don't really need to think about, are doing that, are running in a resilient, highly available manner. In terms of the arc of the server industry, if you go back—I mean, there's been servers for many, many, many, many decades. Some of the earlier commercially available servers were called mainframes, and these were big monolithic systems that had a lot of hardware resources at the time, and then were combined with a lot of operational and utilization software to be able to run a variety of tasks. These were giant, giant machines; these were extraordinarily expensive; you would typically find them only running in universities or government projects, maybe some very, very large enterprises in the '60s and '70s. As more and more software was being built and developed and run, the market demand and need for smaller, more accessible servers that were going to be running this common software, were driving machines that were coming out—still hardware plus software—from the likes of IBM and DEC and others.Then you broke into this period in the '80s where, with the advent of x86 and the rise of these PC manufacturers—the Dells and Compaqs and others—this transition to more commodity server systems.
A focus, really a focus on hardware only, and building these commodity x86 servers that were less expensive, that were more accessible from an economics perspective, and then ultimately that would be able to run arbitrary software, so one could run any operating system or any body of software that they wanted on these commodity servers. When I got to Dell in 1999, this is several years into Dell's foray into the server market, and you would buy a server from Dell, or from HP, or from Compaq, or IBM, then you would go find your software that you were going to run on top of that to stitch these machines together. That was, kind of, that server virtualization era, in the '90s, 2000s. As I mentioned, technology companies were looking at building more scalable systems that were aggregating resources together and making it much easier for their customers to access the storage, the networking that they needed, that period of time in which the commodity servers and the software industry diverged, and you had a bunch of different companies that were responsible for either hardware or the software that would bring these computers together, these large hyperscalers said, “Well, we're building purpose-built infrastructure services for our constituents at, like, a Facebook. That means we really need to bind this hardware and software together in a single product so that our software teams can go very quickly and they can programmatically access the resources that they need to deploy software.”So, they began to develop systems that looked more monolithic, kind of, rack-level systems that were driving much better efficiency from a power and density perspective, and hydrating it with software to provide infrastructure services to their businesses. And so you saw, what started out in the computer industry is these monolithic hardware plus software products that were not very accessible because they were so expensive and so large, but real products that were much easier to do real work on, to this period where you had a disaggregation of hardware and software where the end-user bore the responsibility of tying these things together and binding these into those infrastructure products, to today, where the largest hyperscalers in the market have come to the realization that building hardware and software together and designing and developing what modern computers should look like, is commonplace, and we all know that well or can access that as public cloud computing.Jason: And what was the driving force behind that decoupling? Was it the actual hardware vendors that didn't want to have to deal with the software? Or is that more from a customer-facing perspective where the customers themselves felt that they could eke out the best advantage by developing their own software stack on top of a relatively commodity unopinionated hardware stack that they could buy from a Dell or an HP?Steve: Yeah, I think probably both, but one thing that was a driver is that these were PC companies. So, coming out of the'80s companies that were considered, quote-unquote, “The IBM clones,” Dell, and Compaq, and HP, and others that were building personal computers and saw an opportunity to build more robust personal computers that could be sold to customers who were running, again, just arbitrary software. There wasn't the desire nor the DNA to go build that full software stack and provide that out as an opinionated appliance or product. 
And I think then, part of it was also like, hey, if we just focus on the hardware, then got this high utility artifact that we can go sell into all sorts of arbitrary software use cases. You know, whether this is going to be a single server or three servers that's going to go run in a closet of cafe, or it's going to be a thousand servers that are running in one of these large enterprise data centers, we get to build the same box, and that box can run underneath any different type of software. By way of that, what you ultimately get in that scenario is you do have to boil things down to the lowest common denominators to make sure that you've got that compatibility across all the different software types.Will: Who were the primary software vendors that were helping those companies take commodity servers and specialize into particular areas? And what's their role now and how has that transformed in light of the public cloud and the offerings that are once again generalized, but also reintegrated from a hardware and software perspective, just not maybe in your own server room, but in AWS, or Azure, or GCP?Steve: Yeah, so you have a couple layers of software that are required in the operation of hardware, and then all the way up through what we would think about as running in a rack, a full rack system today. You've first got firmware, and this is the software that runs on the hardware to be able to connect the different hardware components, to boot the system, to make sure that the CPU can talk to its memory, and storage, and the network. That software may be a surprise to some, but that firmware that is essential to the hardware itself is not made by the server manufacturer themselves. That was part of this outsourcing exercise in the '80s where not only the upstack software that runs on server systems but actually some of the lower-level downstack software was outsourced to these third-party firmware shops that would write that software. And at the time, probably made a lot of sense and made things a lot easier for the entire ecosystem.You know, the fact that's the same model today, and given how proprietary that is and, you know, where that can actually lead to some vulnerabilities and security issues is more problematic. You've got firmware, then you've got the operating system that runs on top of the server. You have a hypervisor, which is the emulation layer that translates that lower-level hardware into a number of virtual machines that applications can run in. You have control plane software that connects multiple systems together so that you can have five or ten or a hundred, or a thousand servers working in a pool, in a fleet. And then you've got higher-level software that allows a user to carve up the resources that they need to identify the amount of compute, and memory, and storage that they want to spin up.And that is exposed to the end-user by way of APIs and/or a user interface. And so you've got many layers of software that are running on top of hardware, and the two in conjunction are all there to provide infrastructure services to the end-user. And so when you're going to the public cloud today, you don't have to worry about any of that, right? 
Both of you have probably spun up infrastructure on the public cloud, but they call it 16 digits to freedom because you just swipe a credit card and hit an API, and within seconds, certainly within a minute, you've got readily available virtual servers and services that allow you to deploy software quickly and manage a project with team members. And the kinds of things that used to take days, weeks, or even months inside an enterprise can be done now in a matter of minutes, and that's extraordinarily powerful.But what you don't see is all the integration of these different components running, very well stitched together under the hood. Now, for someone who's deploying their own infrastructure in their own data center today, that sausage-making is very evident. Today, if you're not a cloud hyperscaler, you are having to go pick a hardware vendor and then figure out your operating system and your control plane and your hypervisor, and you have to bind all those things together to create a rack-level system. And it might have three or four different vendors and three or four different products inside of it, and ultimately, you have to bear the responsibility of knitting all that together.Will: Because those products were developed in silos from each other?Steve: Yeah.Will: They were not co-developed. You've got hardware that was designed in a silo separate from oftentimes it sounds like the firmware and all of the software for operating those resources.Steve: Yeah. The hardware has a certain set of market user requirements, and then if you're a Red Hat or you're a VMware, you're talking to your customers about what they need and you're thinking at the software layer. And then you yourself are trying to make it such that it can run across ten or twenty different types of hardware, which means that you cannot do things that bind or provide hooks into that underlying hardware which, unfortunately, is where a ton of value comes from. You can see an analog to this in thinking about the Android ecosystem compared to the Apple ecosystem and what that experience is like when all that hardware and software is integrated together, co-designed together, and you have that iPhone experience. Plenty of other analogs in the automotive industry, with Tesla, and health equipment, and Peloton and others, but when hardware and software—we believe certainly—when hardware and software is co-designed together, you get a better artifact and you get a much, much better user experience. Unfortunately, that is just not the case today in on-prem computing.Jason: So, this is probably a great time to transition to Oxide. Maybe to keep the analogy going, the public cloud is that iPhone experience, but it's just running in somebody else's data center, whether that's AWS, Azure, or one of the other public clouds. You're developing iOS for on-prem, for the people who want to run their own servers, which seems like kind of a countertrend. 
Maybe you can talk us through the dynamics in that market as it stands today, and how that's growing and evolving, and what role Oxide Computer plays in that, going forward.Steve: You've got this what my co-founder Jess affectionately refers to as ‘infrastructure privilege' in the hyperscalers, where they have been able to apply the money, and the time, and the resources to develop this, kind of, iPhone stack, instead of thinking about a server as a single 1U unit, or single machine, had looked at, well, what does a rack—which is the case that servers are slotted into in these large data centers—what does rack-level computing look like and where can we drive better power efficiency? Where can we drive better density? How can we drive much better security at scale than the commodity server market today? And doing things like implementing hardware Roots of Trust and Chain of Trust, so that you can ensure the software that is running on your machines is what is intended to be running there. The blessing is that we all—the market—gets access to that modern infrastructure, but you can only rent it.The only way you can access it is to rent, and that means that you need to run in one of the three mega cloud providers' data centers in those locations, that you are having to operate in a rental fee model, which at scale can become very, very prohibitively expensive. Our fundamental belief is that the way that these hyperscale data centers have been designed and these products have been designed certainly looks a lot more like what modern computers should look like, but the rest of the market should have access to the same thing. You should be able to buy and own and deploy that same product that runs inside a Facebook data center, or Apple data center, or Amazon, or a Google data center, you should be able to take that product with you wherever your business needs to run. A bit intimidating at the top because what we signed up for was building hardware, and taking a clean sheet paper approach to what a modern server could look like. There's a lot of good hardware innovation that the hyperscalers have helped drive; if you go back to 2010, Facebook pioneered being a lot more open about these modern open hardware systems that they were developing, and the Open Compute Project, OCP, has been a great collection point for these hyperscalers investing in these modern rack-level systems and doing it in the open, thinking about what the software is that is required to operate modern machines, importantly, in a way that does not sink the operations teams of the enterprises that are running them.Again, I think one of the things that was just so stunning to me, when I was at Joyent—we were running these machines, these commodity machines, and stitching together the software at scale—was how much of the organization's time was tied up in the deployment, and the integration, and the operation of this. And not just the organization's time, but actually our most precious resource, our engineering team, was having to spend so much time figuring out where a performance problem was coming from. For example in [clear throat], man, those are the times in which you really are pounding your fist on the table because you will try and go downstack to figure out, is this in the control plane? Is this in the firmware? Is this in the hardware?And commodity systems of today make it extremely, extremely difficult to figure that out. 
But what we set out to do was build same rack-level system that you might find in a hyperscaler data center, complete with all the software that you need to operate it with the automation required for high availability and low operational overhead, and then with a CloudFront end, with a set of services on the front end of that rack-level system that delight developers, that look like the cloud experience that developers have come to love and depend on in the public cloud. And that means everything is programmable, API-driven services, all the hardware resources that you need—compute, memory, and storage—are actually a pool of resources that you can carve up and get access to and use in a very developer-friendly way. And the developer tools that your software teams have come to depend on just work and all the tooling that these developers have invested so much time in over the last several years, to be able to automate things, to be able to deploy software faster are resident in that product. And so it is definitely kind of hardware and software co-designed, much like some of the original servers long, long, long ago, but modernized with the hardware innovation and open software approach that the cloud has ushered in.Jason: And give us a sense of scale; I think we're so used to seeing the headline numbers of the public cloud, you know, $300-and-some billion dollars today, adding $740-some billion over the next five years in public cloud spend. It's obviously a massive transformation, huge amount of green space up for grabs. What's happening in the on-prem market where your Oxide Computer is playing and how do you think about the growth in that market relative to a public cloud?Steve: It's funny because as Will can attest, as we were going through and fundraising, the prevalent sentiment was, like, everything's going to the public cloud. As we're talking to the folks in the VC community, it was Amazon, Microsoft, and Google are going to own the entirety of compute. We fundamentally disagreed because, A, we've lived it, and b, we went out as we were starting out and talked to dozens and dozens of our peers in the enterprise, who said, “Our cloud ambitions are to be able to get 20, 30, 40% of our workloads out there, and then we still have 60, 70% of our infrastructure that is going to continue to run in our own data centers for reasons including regulatory compliance, latency, security, and in a lot of cases, cost.” It's not possible for these enterprises that are spending half a billion, a billion dollars a year to run all of their infrastructure in the public cloud. What you've seen on-premises, and it depends on who you're turning to, what sort of poll and research you're turning to, but the on-prem market, one is growing, which I think surprises a lot of folks; the public cloud market, of course, it's growing like gangbusters, and that does not surprise a lot of folks, but what we see is that the combined market of on-prem and cloud, you can call it—if on-premise on the order of $100 billion and cloud is on the order of $150 billion, you are going to see enormous growth in both places over the next 10, 15 years.These markets are going to look very, very small compared to where they will be because one of the biggest drivers of whether it's public cloud or on-prem infrastructure, is everything shifting to digital formats. The digitalization that is just all too commonplace, described everywhere. But we're still very, very early in that journey. 
I think that if you look at the global GDP, less than 10% of the global GDP is on the internet, is online. Over the coming 10, 20 years, as that shifts to 20%, 30%, you're seeing exponential growth. And again, we believe and we have heard from the market that is representative of that $100 billion that investments in the public cloud and on-prem is going to continue to grow much, much more as we look forward.Will: Steve, I really appreciate you letting listeners know how special a VC I am.Steve: [laugh].Will: [laugh]. It was really important point that I wanted to make sure we hit on.Steve: Yeah, should we come back to that?Will: Yeah, yeah yeah—Steve: Yeah, let's spend another five or ten minutes on that.Will: —we'll revisit that. We'll revisit that later. But when we're talking about the market here, one of the things that got us so excited about investing in Oxide is looking at the existing ecosystem of on-prem commercial providers. I think if you look at the public cloud, there are fierce competitors there, with unbelievably sophisticated operations and product development. When you look at the on-prem ecosystem and who you would go to if you were going to build your own data center today, it's a lot of legacy companies that have started to optimize more for, I would say, profitability over the last couple of years than they have for really continuing to drive forward from an R&D and product standpoint.Would love maybe for you to touch on briefly, what does competition for you look like in the on-prem ecosystem? I think it's very clear who you're competing with, from a public cloud perspective, right? It's Microsoft, Google, Amazon, but who are you going up against in the on-prem ecosystem?Steve: Yeah. And just one note on that. We don't view ourselves as competing with Amazon, Google, and Microsoft. In fact, we are ardent supporters of cloud in the format, namely this kind of programmable API-fronted infrastructure as being the path of the future of compute and storage and networking. That is the way that, in the future, most software should be deployed to, and operated on, and run.We just view the opportunity for, and what customers are really, really excited about is having those same benefits of public cloud, but in a format in which they can own it and being able to have access to that everywhere their business needs to run, so that it's not, you know, do I get all this velocity, and this innovation, and this simplicity when I rent public cloud, or do I own my infrastructure and have to give up a lot of that? But to the first part of your question, I think the first issue is that it isn't one vendor that you are talking about what is the collection of vendors that I go to to get servers, software to make my servers talk to each other, switches to network together these servers, and additional software to operate, and manage, and monitor, and update. And there's a lot of complexity there. And then when you take apart each one of those different sets of vendors in the ecosystem, they're not designing together, so you've got these kind of data boundaries and these product boundaries that start to become really, really real when you're operating at scale, and when you're running critical applications to your business on these machines. 
And you find yourself spending an enormous amount of the company's time just knitting this stuff together and operating it, which is all time lost that could be spent adding additional features to your own product and better competing with your competitors.And so I think that you have a couple of things in play that make it hard for customers running infrastructure on-premises, you've got that dynamic that it's a fractured ecosystem, that these things are not designed together, that you have this kit car that you have to assemble yourself and it doesn't even come with a blueprint of the particular car design that you're building. I think that you do have some profit-taking in that it is very monopolized, especially on the software side where you've only got a couple of large players that know that there are few alternatives for companies. And so you are seeing these ELAs balloon, and you are seeing practices that look a lot like Oracle Enterprise software sales that are really making this on-prem experience not very economically attractive. And so our approach is, hardware should come with all the software required to operate it, it should be tightly integrated, the software should be all open-source. Something we haven't talked about.I think open-source is playing an enormous role in accelerating the cloud landscape and the technology landscapes. We are going to be developing our software in an open manner, and truly believe whether it's from a security view through to the open ecosystem, that it is imperative that software be open. And then we are integrating the switch into that rack-level product so that you've got networking baked in. By doing that, it opens up a whole new vector of value to the customer where, for example, you can see for the first time what is the path of traffic from my virtual machine to a switchboard? Or when things are not performing well, being able to look into that path, and the health, and see where things are not performing as well as they should, and being able to mitigate those sorts of issues.It does turn out if you are able to get rid of a lot of the old, crufty artifacts that have built up inside even these commodity system servers, and you are able to start designing at a rack level where you can drive much better power efficiency and density, and you bake in the software to effectively make this modern rack-level server look like a cloud in a box, and ensure these things can snap together in a grid, where in that larger fleet, operational management is easy because you've got the same automation capabilities that the big cloud hyperscalers have today. It can really simplify life. It ends up being an economic win and maybe most importantly, presents the infrastructure in a way that the developers love. And so there's not this view of the public cloud being the fast, innovative path for developers and on-prem being this, submit a trouble ticket and try and get access to a VM in six days, which sadly is the experience that we hear a lot of companies are still struggling with in on-prem computing.Jason: Practically, when you're going out and talking to customers, you're going to be a heterogeneous environment where presumably they already have their own on-prem infrastructure and they'll start to plug in—Steve: Yeah.Jason: —Oxide Computer alongside of it. And presumably, they're also to some degree in the public cloud. It's a fairly complex environment that you're trying to insert yourself into. 
How are your customers thinking about building on top of Oxide Computer in that heterogeneous environment? And how do you see Oxide Computer expanding within these enterprises, given that there's a huge amount of existing capital that's gone into building out their data centers that are already operating today, and the public cloud deployments that they have?Steve: As customers are starting to adopt Oxide rack-level computing, they are certainly going to be going into environments where they've got multiple generations of multiple different types of infrastructure. First, the discussions that we're having are around what are the points of data exfiltration, of data access that one needs to operate their broader environment. You can think about handoff points like the network where you want to make sure you've got a consistent protocol to, like, BGP or other, to be able to speak from your layer 2 networks to your layer 3 networks; you've got operational software that is doing monitoring and alerting and rolling up for service for your SRE teams, your operations teams, and we are making sure that Oxide's endpoint—the front end of the Oxide product—will integrate well, will provide the data required for those systems to run well. Another thorny issue for a lot of companies is identity and access management, controlling the authentication and the access for users of their infrastructure systems, and that's another area where we are making sure that the interface from Oxide to the systems they use today, and also resident in the Oxide product such as one wants to use it directly, has a clean cloud-like identity and access management construct for one to use. But at the highest level it is, make sure that you can get out of the Oxide infrastructure, the kind of data and tooling required to incorporate into management of your overall fleet.Over time, I think customers are going to experience a much simpler and much more automated world inside of the Oxide ecosystem; I think they're going to find that there are exponentially fewer hours required to manage that environment and that is going to inevitably just lead to wanting to replace a hundred racks of the extant commodity stack with, you know, sixty racks of Oxide that provide much better density, smaller footprint in the data center, and again, software-driven in the way that these folks are looking for.Jason: And in that answer, you alluded to a lot of the specialization and features that you guys can offer. I've always loved Alan Kay's quote, “People who are really serious about software make their own hardware.”Steve: Yeah.Jason: Obviously, you've got some things in here that only Oxide Computer can do. What are some of those features that traditional vendors can't even touch or deliver that you'll be able to, given your hardware-software integration?Steve: Maybe not the most exciting example, but I think one that is extremely important to a lot of the large enterprise company that we're working with, and that is at a station, being able to attest to the software that is running on your hardware. And why is that important? Well, as we've talked about, you've got a lot of different vendors that are participating in that system that you're deploying in your data center. And today, a lot of that software is proprietary and opaque and it is very difficult to know what versions of things you are running, or what was qualified inside that package that was delivered in the firmware. 
We were talking to a large financial institution, and they said their teams are spending two weeks a month just doing, kind of a proof of trust in their infrastructure that their customer's data is running on, and how cumbersome and hard it is because of how murky and opaque those lower-level system software world is.What do the hyperscalers do? They have incorporated hardware Root of Trust, which ensures from that first boot instruction, from that first instruction on the microprocessor, that you have a trusted and verifiable path, from the system booting all the way up through the control plane software to, say, a provisioned VM. And so what this does is it allows the rest of the market access to a bunch of security innovation that has gone on where these hyperscalers would never run without this. Again, having the hardware Root of Trust anchored at a station process, the way to attest all that software running is going to be really, really impactful for more than just security-conscious customers, but certainly, those that are investing more in that are really, really excited. If you move upstack a little bit, when you co-design the hardware with the control plane, both the server and the switch hardware with the control plane, it opens up a whole bunch of opportunity to improve performance, improve availability because you now have systems that are designed to work together very, very well.You can now see from the networking of a system through to the resources that are being allocated on a particular machine, and when things are slow, when things are broken, you are able to identify and drive those fixes, in some cases that you could not do before, in much, much, much faster time, which allows you to start driving infrastructure that looks a lot more like the five nines environment that we expect out of the public cloud.Jason: A lot of what you just mentioned, actually, once again, ties back to that analogy to the iPhone, and having that kind of secure enclave that powers Touch ID and Face ID—Steve: Yep.Jason: —kind of a server equivalent, and once again, optimization around particular workflows, the iPhone knows exactly how many photos every [laugh] iOS user takes, and therefore they have a custom chip dedicated specifically to processing images. I think that tight coupling, just relating it back to that iOS and iPhone integration, is really exciting.Steve: Well, and the feedback loop is so important because, you know, like iPhone, we're going to be able to understand where there are rough edges and where things are—where improvements can even can continue to be made. And because this is software-driven hardware, you get an opportunity to continuously improve that artifact over time. It now stops looking like the old, your car loses 30% of the value when you drive it off the lot. Because there's so much intelligent software baked into the hardware, and there's an opportunity to update and add features, and take the learnings from that hardware-software interaction and feed that back into an improving product over time, you can start to see the actual hardware itself have a much longer useful life. 
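Steve's description of a hardware Root of Trust boils down to a chain of measurements: each stage of the boot path is hashed and folded into a running value before it gets control, so a verifier can later compare the reported value against what a known-good software stack should produce. The sketch below is a purely conceptual Python illustration of that hash-chain idea under my own assumptions; it is not Oxide's implementation, not a real TPM or RoT interface, and the stage names are invented.

```python
# Conceptual sketch of a measured-boot chain of trust (illustrative only;
# not Oxide's implementation, not a real TPM/RoT interface).
import hashlib

def extend(measurement: bytes, stage_image: bytes) -> bytes:
    """Fold the hash of the next boot stage into the running measurement,
    the way a root of trust extends a register before handing off control."""
    return hashlib.sha256(measurement + hashlib.sha256(stage_image).digest()).digest()

def measure_boot_chain(stages):
    measurement = b"\x00" * 32          # initial value anchored in hardware
    for image in stages:
        measurement = extend(measurement, image)
    return measurement

if __name__ == "__main__":
    expected = measure_boot_chain([b"firmware-v1", b"hypervisor-v1", b"control-plane-v1"])
    reported = measure_boot_chain([b"firmware-v1", b"hypervisor-v1", b"control-plane-v1"])
    # Attestation: a verifier compares the reported measurement with the expected one.
    print("attested:", reported == expected)   # True
    tampered = measure_boot_chain([b"firmware-v1", b"evil-hypervisor", b"control-plane-v1"])
    print("attested:", tampered == expected)   # False
```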
And that's one of the things we're really excited about: we don't think servers should be commodities that the vendors are trying to push you to replace every 36 months.

One of the things that is important to keep in mind is that as Moore's Law is starting to slow, or starting to hit some of its limitations, you won't have CPU density and some of these things driving the need to replace hardware as quickly. So, with software that helps you drive better utilization and create a better combined product in that rack-level system, we think we're going to see customers that can start getting five, six, seven years of useful life out of the product, not the typical two, or three, or maybe four that customers are seeing today in the commodity systems.

Will: Steve, one of the challenges for Oxide is that you're taking on excellence in a bunch of interdisciplinary sciences here, between the hardware, the software, the firmware, the security; this is a monster engineering undertaking. One of the things that I've seen as an investor is how dedicated you have got to be to hiring, to build basically the Avengers team here to go after such a big mission. Maybe you could touch on just how you've thought about architecting a team here. And it's certainly very different than what the legacy providers from an on-prem ecosystem perspective have taken on.

Steve: I think one of the things that has been so important is that before we even set out on what we were going to build, the three of us spent time and focused on what kind of company we wanted to build, what kind of company we wanted to work at for the next long chunk of our careers. And it's certainly drawing on experiences that we had in the past. Plenty of positives, but also making sure to keep in mind the negatives and some of the patterns we did not want to repeat in where we were working next. And so we spent a lot of time just first getting the principles and the values of the company down, which was pretty easy because the three of us shared these values. And we thought about all the headwinds, all the foot faults that hurt startups and even big companies all the time, whether it be the subjectivity and obscurity of compensation or how folks in some of these large tech companies do performance management, and just thinking about how we could start from a point of building a company that people really want to work for and work with.

And I think then, layering on top of that, setting out on a mission to go build the next great computer company and build computers for the cloud era, for the cloud generation, that is, as you say, a big swing. And it's ambitious, and exhilarating and terrifying, and I think with that foundation of focusing first on the fundamentals of the business regardless of what the business is, and then layering on top of it the mission that we are taking on, that has been appealing, that's been exciting for folks. And it has given us the great opportunity of having terrific technologists from all over the world that have come inbound and have wanted to be a part of this. And we, kind of, will joke internally that we've got eight or nine startups instead of a startup because we're building hardware, and we're taking on developing open-source firmware, and a control plane, and a switch, and hardware Root of Trust, and all of these elements.
And just finding folks that are excited about the mission, that share our values, and that are great technologists, but also have the versatility to work up and down the stack, has been really, really key.

So far, so great. We've been very fortunate to build a terrific, terrific team. Shameless plug: we are definitely still hiring all over the company. So, from hardware engineering, software engineering, operations, support, sales, we're continuing to add to the team, and that is definitely what is going to make this company great.

Will: Maybe just kind of a wrap-up question here. One of the things Jason and I always like to ask folks is, if you succeed over the next five years, how have you changed the market that you're operating in, and what does the company look like in five years? And I want you to know as an investor, I'm holding you to this. Um, so—

Steve: Yeah, get your pen ready. Yeah.

Will: Yeah, yeah. [laugh].

Steve: Definitely. Expect to hear about that in the next board meeting. When we get this product in the market, and five years from now, as that has expanded and we've done our jobs, then I think one of the most important things is we will see an incredible amount of time given back to these companies, time that is wasted today having to stitch together a fractured ecosystem of products that were not designed to work together, were not designed with each other in mind. And in some cases, this can be 20, 30, 40% of an organization's time. That is something you can't get back.

You know, you can get more money, you can—there's a lot that folks can control, but that loss of time, that inefficiency of DIY-ing your own cloud infrastructure on-premises - getting that back will be a big boon. Because that means these companies now have the ability to capitalize on digitalizing their businesses, and just the velocity of their ability to go improve their own products, that just will have a flywheel effect. So, that great simplification where you don't even consider having to go through and do these low-level updates, and having to debug and deal with performance issues that are impossible to sort out, all of this just goes away. This system comes complete and you wouldn't expect anything else, just like an iPhone. I think the other thing that I would hope to see is that we have made a huge dent in the efficiency of computing systems on-premises, that the amount of power required to power your applications today has fallen by a significant amount because of the ability to instrument the system, from a hardware and software perspective, to understand where power is being used, where it is being wasted.

And I think that can have some big implications, both to just economics, to the climate, to a number of things, by building and people using smarter systems that are more efficient. I think generally just making it commonplace that you have a programmable infrastructure that is great for developers everywhere, that is no longer restricted to a rental-only model. Is that enough for five years?

Will: Yeah, I think democratizing access to hyperscale infrastructure for everybody else sounds about right.

Steve: All right. I'm glad you wrote that down.

Jason: Well, once again, Steve, thanks for coming on. Really exciting, I think, in this conversation, talking about the server market as being a fairly dynamic market still, one that has a great growth path, and we're really excited to see Oxide Computer succeed, so thanks for coming on and sharing your story with us.

Steve: Yeah, thank you both.
It was a lot of fun.

Will: Thank you for listening to Perfectly Boring. You can keep up with the latest on the podcast at perfectlyboring.com, and follow us on Apple, Spotify, or wherever you listen to podcasts. We'll see you next time.
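The hardware Root of Trust and attestation flow described in this episode boils down to a chain of measurements: each boot stage is hashed before control is handed to it, and the resulting measurements can later be checked against known-good values. Below is a minimal conceptual sketch in Python; the stage names, image bytes, and verification flow are made up for illustration, and a real implementation lives in firmware and a dedicated security chip, not application code.

```python
import hashlib

# Hypothetical boot stages, in the order they execute.
# In a real system, each stage's image is measured before it runs.
boot_stages = {
    "bootloader": b"...bootloader image bytes...",
    "host_os": b"...host operating system image bytes...",
    "control_plane": b"...control plane software bytes...",
}

def measure_chain(stages: dict[str, bytes]) -> dict[str, str]:
    """Hash each stage, folding the previous measurement into the next,
    so tampering with any stage changes every later measurement."""
    measurements = {}
    previous = b""
    for name, image in stages.items():
        digest = hashlib.sha256(previous + image).hexdigest()
        measurements[name] = digest
        previous = digest.encode()
    return measurements

def attest(measurements: dict[str, str], expected: dict[str, str]) -> bool:
    """Compare reported measurements against known-good values."""
    return all(measurements.get(name) == value for name, value in expected.items())

if __name__ == "__main__":
    measured = measure_chain(boot_stages)
    # In practice, 'expected' would come from a signed manifest,
    # not be recomputed on the same machine being attested.
    expected = measure_chain(boot_stages)
    print("attestation passed:", attest(measured, expected))
```

The point of the chained digest is that a change anywhere low in the stack ripples through every later measurement, which is what lets an operator prove what software a machine booted.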

The Anxiety Coaches Podcast
688: 8 Tiny Habits For More Peace and Calm

The Anxiety Coaches Podcast

Play Episode Listen Later Feb 3, 2021 20:19


In today's episode, Gina shares eight helpful habits for maintaining more peace and calm in your life. These tiny habits are minor behavioral adjustments that are easy to implement and can help you realize more peace in your life immediately. Listen in today!  10-minute body scan https://www.anxietycoachespodcast.com/body-scan Episode mentioned in today's show https://www.theanxietycoachespodcast.com/episodes/686-self-soothing-with-affirmations-for-stress-and-anxiety To learn more go to: http://www.theanxietycoachespodcast.com Join our Group Coaching Full or Mini Membership Program  Learn more about our One-on-One Coaching What is anxiety? Quote: The best way to predict the future is to invent it. -Alan Kay

Bob McLain
The Bob McLain Show 12-7 hr 2

Bob McLain

Play Episode Listen Later Dec 7, 2020 34:10


Alan Kay, Chairman of Parker District SPD, talks sewer; Covid vaccine

AWESome EarthKind
Building the Business Climate Voice with Bill Weihl

AWESome EarthKind

Play Episode Listen Later Aug 27, 2020 45:58


Quantum Quote: “The best way to predict the future is to invent it.” - Alan Kay, former Xerox researcher. Bill Weihl started his career in computer science on the faculty at MIT, then led climate and sustainability work at Google & Facebook. He founded Climate Voice to empower the workforce to urge companies to go “all in” on climate, both in business practices and policy advocacy. Take the Climate Voice Pledge at climatevoice.org SuperNova #1. Google, Facebook, Apple, Ikea, Walmart, Amazon, Salesforce & other large companies already are - or soon will be - climate neutral and 100% powered by clean energy. SuperNova #2. While it's important for all of us - and all companies and organizations - to make the right decisions and move to clean energy - EVERYONE needs to do it. The only way to have everyone do it is with government policy, because that policy reduces the risk and drives innovation. SuperNova #3. Policy is about influence. Businesses have a lot of influence. If businesses say they want good carbon policies - we'll get them. SuperNova #4. Carbon reductions of 8-9% per year are necessary, technically doable, & make economic sense. But - we need to change the rules of the system. SuperNova #5. Companies can't just “stay in their lanes”. Even though different policies might not directly apply to their particular situation - we need to unite around the comprehensive policy changes that are being held back by the established fossil fuel companies. SuperNova #6. Get involved in local & state policy regarding utilities and local zoning. Over 30 communities have banned fossil fuels in new construction. Ah-Ha: Decide on where you want to go, then figure out how to get there. Best Advice he's ever received: Fear is normal. The right response is action, not paralysis. Personal Success: Questioning many things - assumptions, statements of fact, received wisdom - leads to creative approaches & solutions. Internet Resources: Climate Nexus: climatenexus.org Book Recommendation: Designing Climate Solutions by Hal Harvey Magic Wand: Restore trust in expertise. Question assumptions & facts, but trust experts & their expertise. One thing Energized about today: This is a complicated time. Lots of disruption, and the world is turned upside down. But the door is opening for BIG CHANGE - racial, economic, & climate justice and civil rights. WTF or F: On the beach having fun in North Carolina in 2004. Didn't know that there was a hurricane coming & had to evacuate. Parting Advice: Don't just think about where we are and what seems possible - think about where you want to go and what it will take to get there - then go do it.

The History of Computing

Visual Basic
 
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover an important but often underappreciated step on the path to ubiquitous computing: Visual Basic.

Visual Basic is a programming language for Windows. It's in most every realistic top 10 of programming languages of all time. It's certainly split into various functional areas over the last decade or so, but it was how you did a lot of different tasks in Windows automation and programming for two of the most important decades, through a foundational period of the PC movement. But where did it come from?

Let's go back to 1975. This was a great year. The Vietnam War ended, Sony gave us Betamax, JVC gave us VHS. Francisco Franco died. I don't wish ill on many, but if I could go back in time and wish ill on him, I would. NASA launched a joint mission with the Soviet Union. The UK voted to stay in the European Economic Community. Jimmy Hoffa disappears. And the Altair ships.

Altair BASIC is like that lego starter set you buy your kid when you think they're finally old enough to not swallow the smallest pieces. From there, you buy them more and more, until you end up stepping on those smallest pieces and cursing. Much as I used to find myself frequently cursing at Visual Basic. And such is life. Or at least, such is giving life to your software ideas. No matter the language, there's often plenty of cursing.

So let's call the Altair a proto-PC. It was underpowered, cheap, and with this Microsoft BASIC programming language you could, OMG, feed it programs that would blink lights, or create early games. That was 1975. And it was based largely on the work of John Kemeny and Thomas Kurtz, the authors of the original BASIC in 1964, at Dartmouth College.

As the PC revolution came, BASIC was popular on the Apple II and original PCs, with QuickBASIC coming in 1985, and an IDE, or Integrated Development Environment, for QuickBASIC shipped in 2.0. At the time Maestro was the biggest IDE in use, and IDEs had been around since Softlab shipped Maestro I in the mid-1970s. Next, you could compile these programs into DOS executables, or .exe files, in 3.0, and 4.0 brought debugging in the IDE. Pretty sweet. You could run the interpreter without ever leaving the IDE!

No offense to anyone, but Apple was running around the world pitching vendors to build software for the Mac, yet had created an almost contentious development environment. And it showed in the number of programs available for the Mac. Microsoft was obviously investing heavily in enabling developers to develop in a number of languages, and it showed; Microsoft had 4 times the software titles. Many of which were in BASIC.

But the last version of QuickBASIC, as it was known by then, came in 4.5, in 1988, the year the Red Army withdrew from Afghanistan - probably while watching Who Framed Roger Rabbit on pirated VHS tapes. But by the late 80s, use began to plummet. Much as my daughter's joy of the legos began to plummet when she entered tweenhood. It had been a huge growth spurt for BASIC, but the era of object-oriented programming was emerging. And Microsoft was in an era of hyper growth, with Windows 3.0 on the way - and what's crazy is they were just entering the buying tornado.

In 1988, the same year as the final release of QuickBASIC, Alan Cooper created a visual programming language he'd been calling Ruby. Now, there would be another Ruby later.
This language was visual, and Apple had been early to the market on visual programming with the Mac - introduced in 1984. Microsoft had responded with Windows 1.0 in 1985. But the development environment just wasn't very… visual. Most people at the time used Windows to open a window of icky text. Microsoft leadership knew they needed something new; they just couldn't get it done. So they started looking for a more modern option. Cooper showed his Ruby environment to Bill Gates and Gates fell in love. Gates immediately bought the product and it was renamed to Visual Basic. Sometimes you build, sometimes you partner, and sometimes you buy.

And so in 1991, Visual Basic was released at Comdex in Atlanta, Georgia, and came around for DOS the next year. I can still remember writing a program for DOS. They faked a GUI using ASCII art. Gross. VB 2 came along in 1992, laying the foundations for class modules. VB 3 came in 93 and brought us the JET database engine. Not only could you instantiate an object, but you had somewhere to keep it. VB 4 came in 95, when we got a 32-bit option. That adds a year or six for every vendor.

The innovations that Visual Basic brought to Windows can still be seen today. VBX and DLL are two of the most substantial. A DLL is a "dynamic link library" file that holds code and procedures that Windows programs can then consume. DLLs allow multiple programs to use that code, saving on memory and disk space. Shared libraries are the cornerstone of many an object-oriented language. VBXs aren't necessarily used any more, as they've been replaced with OCXs, but they're similar, and the VBX certainly spawned the innovation. These Visual Basic Extensions, or VBX for short, were C or C++ components that were assembled into an application. When you look at applications you can still see DLLs and OCXs. VB 4 was when we switched from VBX to OCX.

VB 5 came in 97. This was probably the most prolific, both for software you wanted on your computer and for malware. We got those crazy ActiveX controls in VB 5. VB 6 came along in 1998, extending the ability to create web apps. And we sat there for 10 years. Why? The languages really started to split with the explosion of web tools. VBScript was put into Active Server Pages. We got the .NET framework for compiled web pages. We got Visual Basic for Applications, allowing Office to run VB scripts using VBA 7. Over the years the code evolved into what are now known as Universal Windows Platform apps, written in C++ with WinRT or C++ with CX. Those shared libraries are now surfaced in common APIs and sandboxed, given that security and privacy have become a much more substantial concern since the tidal wave of the Internet crashed into our lego sets, smashing them back to single blocks. Yah, those blocks hurt when you step on them. So you look for ways not to step on them. And controlling access to API endpoints with entitlements is a pretty good way to walk lightly.

Bill Gates awarded Cooper the first "Windows Pioneer Award" for his work on Visual Basic. Cooper continued to consult with companies, with this crazy idea of putting users first. He was an early proponent of user experience and putting users first when building interfaces. In fact, his first book was called "About Face: The Essentials of User Interface Design." That was published in 1995. He still consults and trains on UX. Honestly, Alan Cooper only needs one line on his resume: "The Father of Visual Basic." Today Eclipse and Visual Studio are the most used IDEs in the world.
And there's a rich ecosystem of specialized IDEs. The IDE gives code completion, smart code completion, code search, cross-platform compiling, debugging, multiple language support, syntax highlighting, version control, visual programming, and so much more. Much of this isn't available on every platform or for every IDE, but those are the main features I look for - like the first time I cracked open IntelliJ. The IDE is almost optional in functional programming - but in an era of increasingly complex object-oriented programming, where classes are defined in hundreds or thousands of itty bitty files, a good, smart, feature-rich IDE is a must. And Visual Studio is one of the best you can use. Given that that style of functional, top-to-bottom programming is gone, there's no BASIC remaining in any of the languages you build modern software in. The explosion of object-orientation created flaws in operating systems, but we've matured beyond that and now get to find all the new flaws. Fun, right?

But it's important to remember that, from Alan Kay's introduction of Smalltalk in 1972, new concepts in programming had been emerging and evolving. The latest incarnation is the API-driven programming methodology. Gone are the days when we accessed memory directly. Gone are the days when the barrier to learning to program was understanding functional, top-to-bottom syntax. Gone are the days when those legos were simple little sets. We've moved on to building Death Stars out of legos with more than 3500 pieces. Due to increasingly complex apps, we've had to find new techniques to keep all those pieces together. And as we did, we learned that we needed to be much more careful. We've learned to write code that is easily tested. And we've learned to write code that protects people.

Visual Basic was yet another stop in the evolution toward modern design principles. We've covered others and we'll cover more in coming episodes. So until next time, think of the continuing evolution and what might be next. You don't have to be in front of it, but it does help to have a nice big think on how it can impact projects you're working on today. So thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day!
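Since this episode leans on DLLs as one of the lasting ideas Visual Basic helped popularize, here is a small illustration of what "consuming a shared library" looks like. The sketch uses Python's ctypes to call into user32.dll on Windows, standing in for the Declare statements a VB programmer would once have written; it assumes a Windows machine and is only meant to show the idea of many programs sharing one library's code rather than each carrying its own copy.

```python
import ctypes

# Load the shared library. Every Windows process that needs these routines
# maps the same user32.dll rather than bundling its own copy of the code.
user32 = ctypes.WinDLL("user32")

# MessageBoxW(hwnd, text, caption, type) is a routine exported by the DLL.
MB_OK = 0x0
user32.MessageBoxW(None, "Hello from a shared library", "DLL demo", MB_OK)
```

Whether the caller is a 1990s VB app, a C++ program, or a Python script, the DLL's exported procedure is the same code in the same file, which is exactly the memory- and disk-saving point made above.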

The History of Computing
Boring Old Application Programming Interfaces

The History of Computing

Play Episode Listen Later Nov 4, 2019 15:00


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is gonna be a bit boring. It's on APIs. An API is an Application Program Interface: a set of tools, protocols, or routines used for building applications. See, boring!

Most applications and code today are just a collection of REST endpoints interconnected with fancy development languages. We can pull in a lot of information from other apps and get a lot of code, as we call it these days, "for free." It's hard to imagine a world without APIs. It's hard to imagine what software would be like if we still had to write memory to a specific register in order to accomplish basic software tasks. Obfuscating these low-level tasks is done by providing classes of software to give developers access to common tasks they need to perform. These days, we just take this for granted. But once upon a time, you did have to write all of that code over and over, on PCs, initially in BASIC, PASCAL, or assembly for really high-performance tasks.

Then along comes Roy Fielding. He writes the Architectural Styles and Design of Network-based Software Architectures dissertation in 2000. But APIs came out of a need for interaction between apps and devices. Between apps and web services. Between objects and other objects. The concept of the API started long before Y2K, though.

In the 60s, we had libraries in operating systems. But what Subrata Dasgupta referred to as the second age of computer science, in the seminal book by the same name, began in 1970. And the explosion of computer science as a field in the 70s gave us the rise of Message Oriented Middleware, and then Enterprise Application Integration (EAI) became the bridge into mainframe systems. This started a weird time. IBM ruled the world, but they were listening to the needs of customers and released MQSeries, to facilitate message queues. I realize message queues are boring. Sorry.

I've always felt like the second age of computer science is split right down the middle. The 1980s brought us into the era of object-oriented programming, when Alan Kay and his coworkers from Xerox PARC gave us Smalltalk, the first popular object-oriented programming language, and began to codify methods and classes. Life was pretty good. This led to a slow adoption across the world of the principles of Alan Kay, vis-à-vis Doug Engelbart, vis-à-vis Vannevar Bush. The message passing and queuing systems were most helpful in very large software projects where there were a lot of procedures or classes you might want to share to reduce the cyclomatic complexity of those projects. Suddenly distributed computing began to be a thing. And while it started in research institutes like PARC and academia, it proliferated into the enterprise throughout the 80s. Enterprise computing is boring. Sorry again.

The 90s brought grunge. And I guess this little uninteresting thing called the web. And with the web came JavaScript. It was pretty easy to build an API endpoint, or a programmatic point that you programmed to talk to a site, using a JSP, or JavaServer Page, which helps software developers create dynamically generated pages, such as those that respond to a query for information, pass that query on to a database, and provide the response. You could also use PHP, Ruby, ASP, and even NeXT's WebObjects, the very name of which indicates an object-oriented programming language.
The maturity of the API development environment led to Service-Oriented Architectures in the early 2000s, where we got into more function-based granularity. Instead of simply writing an endpoint to make data that was in our pages accessible, we would build those endpoints to build pages on, and then build contracts for those endpoints that guaranteed that we would not break the functionality other teams needed. Now other teams could treat our code as classes they'd written themselves. APIs had shot into the mainstream. Roy Fielding's dissertation legitimized APIs, and over the next few years entire methodologies for managing teams based on the model began to emerge. Fielding wasn't just an academic. He would help create the standards for HTTP communication. And suddenly having an API became a feature that helped propel the business. This is where APIs get a bit more interesting.

You could transact online. eBay shipped an API in 2000, giving developers the ability to build their own portals. They also released low-code options called widgets that you could just drop into a page and call to produce a tile, or iFrame. The first Amazon APIs shipped in 2002, in an early SOAP iteration, along with widgets as well. In fact, embedding widgets became much bigger than APIs, and iFrames are still common practice today, although I've never found a *REAL* developer who liked them. I guess I should add that to my interview questions. The Twitter API, released in 2006, gave other vendors the ability to write their own Twitter app, but also gave us the concept of OAuth, a federated identity. Amazon released their initial AWS APIs that year, making it possible to use their storage and compute clusters and automate the tasks to set them up and tear them down. Additional APIs would come later, giving budding developers the ability to write software and host data in databases, even without building their own big data compute clusters. This too helped open the doors to an explosion of apps and web apps. These days they basically offer everything, including machine learning, as a service, all accessible through an API.

The iPhone 3G wasn't boring. It came along in 2008, and suddenly the world of mobile app development was unlocked. Foursquare came along at about the same time and opened up their APIs. This really popularized the whole concept of using other vendors' APIs as a way to accomplish various tasks without having to write all the code to do those tasks yourself. From there, more and more vendors began to open APIs, and not only could you pull in information but you could also push more information out. And the ability to see settings gives us the ability to change them as well.

From the consumer Foursquare to the enterprise, now we have microservices available to do anything you might want to do. Microservices are applications that get deployed as modular services. Private APIs, or those that are undocumented. Public APIs, or interfaces anyone can access. Partner APIs, or those requiring a key to access. At this point, any data you might want to get into an app is probably available through an API. Companies connect to their own API to get data, especially for apps. And if a vendor refuses to release their own API, chances are some enterprising young developer will find a way if there's an actual desire to leverage their data, which is what happened to Instagram. Until they opened up their API, at least.
And Facebook, who released their API to any developer well over a decade ago, is probably the most villainized in this regard. You see, Facebook allowed a pretty crazy amount of data to be accessible in their API until, all of a sudden, Cambridge Analytica supposedly stole elections with that data. There's nothing boring about stealing elections! Whether you think that's true or not, the fact that Facebook is the largest and most popular social network in the history of the world shines a light on what happens when technology currently being used by everyone in the industry is taken advantage of. I'm not sticking up for them or villainizing them; but when I helped to write one of the early Facebook games and was shown what we now refer to as personally identifiable data, and was able to crawl a user to get to their friends to invite them to add our game, and then their friends, it didn't seem in the least bit strange. We'd done spidery things with other games. Nothing weird here. The world is a better place now that we have OAuth grant types and every other limiter on the planet.

Stripe, in fact, gave any developer the ability to quickly and easily process financial transactions. And while there were well-entrenched competitors, they took over the market by making the best APIs available. They understood that if you make it easy and enjoyable for developers, they will push for adoption. And cottage industries of apps have sprung up over the years, where apps aggregate data from other sources into a single pane of glass. Tools like Wikipedia embrace this, banks allow Mint and QuickBooks to aggregate and even control finances, while advertising-driven businesses like portals and social networks seem to despise it, understandably. Sometimes they allow it to gain market share and then start to charge a licensing fee when they reach a point where the cost is too big not to, like what happened with Apple using Google Maps until suddenly they started their own mapping services. Apple, by the way, has never been great about exposing or even documenting their publicly accessible APIs outside of those used in their operating systems, APNs, and profile management environment. The network services Apple provides have long been closed off.

Today, if you write software, you typically want that software to be what's known as API-first. API-first software begins with the tasks users want your software to perform. The architecture and design mean the front end, or any apps, just talk to those backend services and perform as little logic outside of an API as possible. This allows you to issue keys to other vendors and build integrations so those vendors can do everything you would do, and maybe more. Suddenly, anything is possible. Combined with continuous deployment, continuous testing, continuous design, and continuous research, we heavily reduce the need to build so much, slashing the time and cost it takes to get to market substantially. When I think of what it means to be nimble, no matter how big the team, that's what I think of.

Getting new products and innovations to market shouldn't be boring. APIs have helped to fulfill some of the best promises of the Information Age, putting an unparalleled amount of information at our fingertips. The original visionary of all of this, Vannevar Bush, would be proud. But I realize that this isn't the most exciting of topics. So thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day!
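To make the episode's recurring idea of a REST endpoint concrete, here is a minimal sketch using Flask; the route, resource names, and data are hypothetical and not any particular vendor's API. Your own front end, a partner holding a key, or an aggregator would all hit the same endpoint, which is the essence of the API-first approach described above.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# An in-memory stand-in for the database a real service would query.
ITEMS = {1: {"id": 1, "name": "widget"}, 2: {"id": 2, "name": "gadget"}}

@app.route("/api/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    """Return one resource as JSON, or a 404 if it does not exist."""
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run(port=5000)
```

A consumer would then call GET http://localhost:5000/api/items/1 and get JSON back, without knowing or caring how the data is stored, and the same contract can be documented, versioned, and handed to other teams or vendors.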

The History of Computing
Susan Kare, The Happy Mac, And The Trash Can

The History of Computing

Play Episode Listen Later Oct 26, 2019 12:58


Susan Kare

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we'll talk about a great innovator, Susan Kare.

Can you imagine life without a Trash Can icon? What about the Mac if there had never been a happy Mac icon? What would writing documents be like if you always used Courier and didn't have all those fonts named after cities? They didn't just show up out of nowhere. And the originals were 8-bit. But they were painstakingly designed, reviewed, reviewed again, argued over, obsessed over. Can you imagine arguing with Steve Jobs? He's famous for being a hard person to deal with. But one person brought us all of these things. One pioneer. One wizard. She cast her spell over the world. And that spell was to bring an arcane concept called the desktop metaphor into everyday computers.

Primitive versions had shipped in Douglas Engelbart's NLS, in Alan Kay's Smalltalk. In Magic Desk on the Commodore 64. But her class was not an illusionist, as those who came before her were, but a mage, putting down hexadecimal text derived from graph paper so the bits would render on the screen the same, for decades to come. And we still use her visionary symbols, burned into the spell books of all visual designers from then to today. She was a true innovator. She sat in a room full of computer wizards that were the original Mac team, and none was more important than Susan Kare.

Born in 1954 in Ithaca, New York, this wizard got her training in the form of a PhD from New York University and then moved off to San Francisco in the late 1970s, feeling the draw of a generation's finest, to spend her mage apprenticeship as a curator at a fine arts museum. But like Gandalf, Raistlin, Dumbledore, Merlin, Glinda the Good Witch, and many others, she had a destiny to put a dent in the universe. To wield the spells of the infant art of user interface design to reshape the universe, 8 bits at a time.

She'd gone to high school with a different kind of wizard. His name was Andy Hertzfeld, and he was working at a great temple called Apple Computer. And his new team would build a new kind of computer called the Macintosh. They needed some graphics and fonts help. Susan had used an Apple II but had never done computer graphics. She had never even dabbled in typography. But then, Dr. Strange took the mantle with no experience. She ended up taking the job and joining Apple as employee badge number 3978. She was one of two women on the original Macintosh team. She had done sculpture and some freelance work as a designer. But not this weird new art form. Almost no one had.

Like any young magician, she bought some books and studied up on design, equating bitmap graphics to needlepoint. She would design the iconic fonts, the graphics for many of the applications, and the icons that went into the first Mac. She would conjure up the hex (that's hexadecimal) for graphics and fonts, and then manually type it in to design icons and fonts. Going through every letter of every font manually. Experimenting. Testing. At the time, fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go their own route. She painstakingly created new fonts and gave them the names of towns along train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities.
And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born - with her personally developing Geneva, Chicago, and Cairo. And she did it in 9 x 7. I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and changing between the fonts, marveling at the typefaces. I'd certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italics, outline, and bold. Those were all her. And she painstakingly created them out of pixels. The love and care and detail in 8-bit had never been seen before. And she did it with a world-class wizard - someone with a renowned attention to detail and design sense like Steve Jobs - looking over her shoulder and pressuring her to keep making it better.

They brought the desktop metaphor into the office. Some of it pre-existed her involvement. The trash can had been a part of the Lisa graphics already. She made it better. The documents icon pre-dated her. She added a hand holding a pencil to liven it up, making it clear which files were applications and which were documents. She made the painting brush icon for MacPaint that, while modernized, is still in use in practically every drawing app today. In fact, when Bill Atkinson was writing MacSketch and saw her icon, the name was quickly changed to MacPaint. She also made, with Bill Atkinson, the little tool that you use to select free-form parts of an image, called the lasso.

Before her, there were elevators to scroll around in a window. After her, they were called scroll bars. After her, the place you dropped your images was called the Scrapbook. After her, the icon of a floppy disk meant save. She gave us the dreaded bomb. The stop watch. The hand you drag to move objects. The image of a speaker making sound. The command key, still on the keyboard of every Mac made. You can see that symbol on Nordic maps, where it denotes an "area of interest" or, more poignant for the need, an "interesting feature." To be clear, I never stole one of those signs while traipsing around Europe. But that symbol is a great example of what a scholarly mage can pull out of ancient tomes, as it is called a looped square, or Saint John's Arms, and dates back over fifteen hundred years - and you can see that in other hieroglyphs she borrowed from obscure historical references. And almost as though those images are burned into our DNA, we identified with them.

She worked with the traditionally acclaimed wizards of the Macintosh: Andy Hertzfeld, Bill Atkinson, Bruce Horn, Bud Tribble, Donn Denman, Jerome Coonen, Larry Kenyon, and Steve Capps. She helped Chris Espinosa, Clement Mok, Ellen Romana, and Tom Hughes out with graphics for manuals, and often on how to talk about a feature. But there was always Steve Jobs. Some icons took hours; others took days. And Jobs would stroll in and have her recast her spell if it wasn't just right. Never acknowledging the effort. If it wasn't right, it wasn't right. The further the team pushed on the constantly delayed release of the Mac, the more frantically the wizards worked. The less they slept. But somehow they knew. It wasn't just Jobs' reality distortion field, as Steven Levy famously phrased it. They knew that what they were building would put a dent in the universe. And when they all look back, her designs on "Clarus the Dogcow" were just the beginning of her amazing contributions. The Mac launched.
And it did not turn out to be a commercial success, leading to the ouster of Steve Jobs - Sauron's eye was firmly upon him. Kare left with Jobs to become the tenth employee at NeXT Computer. But she introduced Jobs to Paul Rand, who had helped design the IBM logo, to design their logo. When IBM, the Voldemort of the time, was designing OS/2, she helped with their graphics. When Bill Gates, the Jafar of the computer industry, called, she designed the now-classic Solitaire for Windows. And she gave them Notepad and Control Panels.

And her contributions have continued. When Facebook needed images for the virtual gifts feature, they called Kare. You know that spinning button when you refresh Pinterest? That's Kare. And she still does work all the time. The Museum of Modern Art showed her original sketches in a 2015 exhibit called "This Is for Everyone."

She brought us everyday metaphors to usher in, and ease the transition into, a world of graphical user interfaces. Not a line of the original code remains. But it's amazing how, surrounded by all the young wizards, the one that got very little attention in all the books and articles about the Mac was the biggest wizard of them all. Without her iconic designs, the other wizards would likely be forgotten. She is still building one of the best legacies in all of the technology industry, by simply putting users into user interface. When I transitioned from the Apple II to the Mac, she made it easy for me with those spot-on visual cues. And she did it in only 8 bits. She gave the Mac style and personality. She made it fun, but not so much fun that it would be perceived as a toy. She made the Mac smile. Who knew that computers could smile?!?! The Mac Finder still smiles at me every day. Truly magical.

Thanks for that, Susan Kare. And thanks to you, inquisitive and amazing listeners. For my next trick, I'll disappear. But thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day!
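The episode above describes Kare typing her icons in as hexadecimal, one row of pixels at a time, from designs worked out on graph paper. Here is a tiny sketch of that idea in Python: each hex byte encodes one 8-pixel row of a 1-bit bitmap, and printing the bits reveals the image. The glyph below is a made-up 8 x 8 smiley for illustration, not one of Kare's actual designs.

```python
# Each hex byte is one row of an 8 x 8 bitmap; each bit is one pixel.
SMILEY = [0x3C, 0x42, 0xA5, 0x81, 0xA5, 0x99, 0x42, 0x3C]

def render(rows):
    """Print a 1-bit bitmap, using '#' for set pixels and '.' for clear ones."""
    for row in rows:
        print("".join("#" if (row >> (7 - bit)) & 1 else "." for bit in range(8)))

render(SMILEY)
```

Run it and the eight bytes turn back into a face, which is roughly the translation Kare was doing in her head, in reverse, for every icon and every letter of every font.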