Podcasts about Xerox PARC

Research and development company

  • 144 podcasts
  • 219 episodes
  • 44m average episode duration
  • 1 episode every other week
  • Latest episode: May 2, 2025



Latest podcast episodes about Xerox PARC

All TWiT.tv Shows (MP3)
This Week in Space 159: AI in Space! - USRA's Dr. Bell on Robots, Rovers, and Autonomous Frontiers


May 2, 2025 | 73:51 | Transcription available


Seems we can't go through an hour without hearing news about artificial intelligence these days. There are a lot of exciting developments, and some of the most exciting when thinking about space are coming from USRA's Research Institute for Advanced Computer Science (RIACS), which is on the cutting edge of the cutting edge. In this episode, we're speaking with the institute's director, Dr. David Bell, who will walk us through the differences between current AI, agentic AI, and (are you ready?) quantum-powered AI, and their current and future potential to revolutionize space exploration and development. Join us!

Headlines:
  • Trump budget cuts: The Trump administration's fiscal 2026 "skinny" budget proposes slashing NASA's funding by $6 billion, 24% of its current $24.8 billion, threatening the SLS, Orion, Gateway, and Mars Sample Return programs.
  • Planet 9 revival: Scientists re-examining 1980s IRAS and 2006-2011 Akari infrared data have uncovered new gravitational signatures suggesting a hidden Planet 9 at ~700 AU, bringing the search closer to confirmation.
  • Speed-round catch-up: NASA's Psyche asteroid mission is battling low fuel pressure; the decades-old Soviet Kosmos 482 Venus probe is slated to re-enter around May 10; and a recent poll finds over half of Gen Z and millennials believe in alien cover-ups.

Main Topic - AI in Space with Dr. David Bell:
  • USRA & QuAIL overview: Dr. Bell outlines USRA's Research Institute for Advanced Computer Science (RIACS) and its Quantum Artificial Intelligence Lab, a collaboration with Google and NASA Ames driving AI and quantum computing integration in space missions.
  • Career path & pivotal shifts: With 20+ years at USRA and a prior decade at Xerox PARC, Bell traces AI's journey from 1959's first neural nets to the 2017 transformer breakthrough that sparked today's LLM revolution.
  • Early AI successes: AutoClass's unsupervised learning on the 1980s IRAS mission discovered a new class of infrared stars, and ExoMiner's deep-learning engine has since validated over 300 exoplanets from Kepler data.
  • Agent-based autonomy: USRA deployed mobile agents on the ISS to automate file transfers, and Deep Space One's Remote Agent performed onboard planning, execution, and anomaly recovery in deep space during the 1990s.
  • Evolution of planning & scheduling: The Europa planning engine, used daily for Mars rovers, has evolved into SPIFe ("Spiffy") and real-time collaborative "playbook" apps, optimizing workflows on both robotic and crewed missions.
  • Natural language interfaces: Clarissa, a precursor to Siri deployed on the ISS five years before commercial voice assistants, let astronauts query and navigate complex procedures by voice.
  • Robotic assistants: Projects like the Astrobee free-flying robots on the ISS and analog-terrain rover simulations demonstrate how AI-driven machines can support astronauts in exploration and maintenance tasks.
  • Foundation models for Earth & space: USRA's Generative AI Lab is building multipurpose foundation models on global satellite data that now outperform traditional numerical simulations, forecasting weather faster and more accurately.
  • Workforce development: Through the Feynman Quantum Academy and NASA-integrated data science curricula, USRA immerses students...

These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/this-week-in-space/episodes/159

Hosts: Rod Pyle and Tariq Malik
Guest: Dr. David Bell

AZ Tech Roundtable 2.0
Palantir Technologies, CEO Alex Karp & the New Era of Tech Defense Contractors - AZ TRT S06 EP05 (266) 3-9-2025


Apr 11, 2025 | 24:58


Palantir Technologies, CEO Alex Karp & the New Era of Tech Defense Contractors - AZ TRT S06 EP05 (266) 3-9-2025

What We Learned This Week
  • Palantir - AI-powered automation for every decision
  • Palantir is named after the all-seeing stone in Lord of the Rings
  • The software integrates with a company's software to allow searching and use of big data
  • Palantir's mission is more accountability within government
  • Palantir has contracts with the U.S. Government, helping with security and fighting terrorism

Notes: Palantir Technologies & CEO Alex Karp
  • Karp's background is in academics and philosophy, plus Stanford law
  • Palantir founders Karp & Joe Lonsdale worked together at PayPal; funded by Peter Thiel
  • Was not profitable for 3 years - one of the secrets of Silicon Valley: build around an idea, and work out how you're going to make money off of it later
  • A passion project, so you need people who are dedicated, not just money-driven
  • Every text, email, and business generates data that needs to be saved somewhere
  • Big data and data centers are among the fastest-growing industries and, along with machine learning, affect many aspects of our lives, both business and personal
  • Datasets and data mining are thriving industries

https://en.wikipedia.org/wiki/Palantir_Technologies
Palantir Technologies Inc. is an American publicly traded company that specializes in software platforms for big data analytics. Headquartered in Denver, Colorado, it was founded by Peter Thiel, Stephen Cohen, Joe Lonsdale, and Alex Karp in 2003. The company has four main projects: Palantir Gotham, Palantir Foundry, Palantir Apollo, and Palantir AIP. Palantir Gotham is an intelligence and defense tool used by militaries and counter-terrorism analysts. Its customers have included the United States Intelligence Community (USIC) and the United States Department of Defense. Their software as a service (SaaS) is one of five offerings authorized for Mission Critical National Security Systems (IL5) by the U.S. Department of Defense. Palantir Foundry is used for data integration and analysis by corporate clients such as Morgan Stanley, Merck KGaA, Airbus, Wejo, Lilium, PG&E and Fiat Chrysler Automobiles. Palantir Apollo is a platform to facilitate continuous integration/continuous delivery (CI/CD) across all environments. Palantir's original clients were federal agencies of the USIC. It has since expanded its customer base to serve both international as well as state and local governments, and also private companies.

Palantir software connects data, analytics, and operations to help organizations make decisions and improve efficiency. Palantir's software is used by government agencies and commercial enterprises.

How Palantir works:
1. Connects data: Palantir connects to data systems, data lakes, and platforms.
2. Analyzes data: Palantir analyzes data to find trends, relationships, and anomalies.
3. Visualizes data: Palantir visualizes data to help users understand insights.
4. Automates processes: Palantir automates processes to help users save time and improve efficiency.
5. Improves decision-making: Palantir helps users make better decisions by providing data-driven insights.
Palantir has multiple platforms, including:
  • Palantir Gotham: Used by government agencies to detect patterns and derive insights from large amounts of data
  • Palantir Foundry: Used by commercial enterprises to integrate data, perform simulations, and optimize workflows
  • Palantir AIP: Used to deploy large language models and other AI within a private network

A failure of the 9/11 terrorist attacks was that government organizations were not sharing information. Government has to be able to sift through large amounts of data looking for a terrorist network - the old needle in a haystack. The software allows government to go through data, and also to share information. In the past only governments could run spy networks; now, with computer hackers, a network could be run by anybody with a computer. Terrorists are hard to search for, and very creative. In Karp's view, you have to think like an entrepreneur and be tactical when going after them. You cannot think in a static fashion about how they did it in the past. When a terrorist is caught using a cell phone, they adapt, figure out how they got caught, and then use a different method. It's like game theory: you have to think ahead of the terrorists and find their patterns before they even realize they are leaving a pattern. Terrorists may think in terms that society deems destructive, but their thinking can still be very creative, almost like an entrepreneur's. Per Karp, you need creative and adaptive thinkers to go after the bad guys.

Cyber war is a real threat and not going anywhere. We need the government to combat it, but we must also watch what the government is doing so it does not trample on civil liberties. We need to be able to track the data to see how the government went about things and did its targeting. Data destruction and tagged data: know where the data came from, so the government can use it lawfully. You do not want to share data with the government and then have the government use it against you. Because of technology and computers, spying has been democratized: a group of three teenagers at a coffee shop can launch a cyber attack. Systems can track down where these terrorists are and show you the patterns of who they might be, even if they cannot identify them directly.

Government and large health insurance companies already have a lot of data. The question is, how are they using it - is it being used in a lawful way?
With Palantir software, you can not only look for the terrorist, but also watch how the government uses the data.

  • Palantir software can run on top of current software to work through data
  • Palantir and SpaceX both achieved billion-dollar "unicorn" valuations
  • Funded at a loss for years; it took a decade to get government contracts
  • The name comes from the seeing stone in Lord of the Rings
  • Powerful technology that can help watch over the world carries massive ethical implications
  • The software helps governments and businesses look over data and keep watch on people, but it can infringe on privacy - the paradox of security versus freedom
  • It also raises questions of privacy versus convenience, akin to the issues with current social media

Solve the terrorism problem in a big way: fight terrorism on a large scale, at a high level, versus low-level tech like airport security and other measures that are very cumbersome and overbearing, and coordinate resources better. It is hard to start a defense company, and this is the next generation. Palantir is coming up with a simple high-tech solution to handle a serious and complicated problem. Pre-9/11, the government was not prepared or organized to handle the global terrorist threat, and many of the solutions were over the top and heavy-handed. The company provides targeted, efficient reactions versus broad, wide solutions.

There is both a philosophical and a technological debate on how this software can and should be used. They also believe they can be more transparent, show accountability, and actually prevent government overreach - a check on the NSA and the FISA courts and how data is used. Security and CIA-type organizations need secrecy; Palantir could track the actions of these organizations for review.

Large organizations and bureaucracies often have outdated technology and reporting, so oversight is hard and can be very confusing. Often these organizations want plausible deniability, so they don't want their accounting to be reviewed and will list expenditures under different things - this could be seen as fraud. The technology is disruptive both in how it can go through data and in how it can force accountability and bring things to light. Creative accounting and inefficiency could come to an end. This forces people to adapt and change their ways, and human nature is not always open to this.

The CEO believes in how important it is to choose the right partner, in person and in business. You want to work with people who will challenge your ideas, so you have the discipline and rigor to think things out and give evidence for why your idea is right, or at the very least not wrong. The skill of being plausibly right, and not wrong, is very valuable in life. People must be resilient enough to challenge even their own ideas. The company culture fosters an environment where people are open to thinking and challenging the status quo, but they must also defend their thoughts. They foster independent thought, not just one-way thinking, in the company, along with the ambition to work on bigger national projects.

The future of defense contractors is in software, which they don't have a good history with; most of the best defense contractors make hardware. Palantir reviewed what the government was doing to fight terrorism and how it was spending tens of billions of dollars on it. The money was being spent in the wrong way, and the process needed to be rethought. It took years to get in with the government. Palantir builds software for spies and the intelligence industry, and it has both commercial private clients and government clients.
A few different products help big organizations analyze their data using AI and make the data more understandable. This can help a company in many ways: be more efficient, cut costs, raise profits, and understand its own business better. AI and data are the new languages of the modern world. There is a lot of data, and keeping it organized is critical but very hard. Their software goes beyond just storing and managing data; it helps clients utilize the data, which is key.

Silicon Valley tree: PayPal to Palantir to Anduril
  • Anduril makes Roadrunner, a reusable vertical-takeoff autonomous interceptor
  • The company seems like Stark Industries

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century's most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril's family of systems is powered by Lattice, an AI software platform that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge AI, computer vision, sensor fusion, and networking technology to the military in months, not years. For more information, visit www.anduril.com.
https://investors.palantir.com/news-details/2024/Anduril-and-Palantir-to-Accelerate-AI-Capabilities-for-National-Security/

https://en.wikipedia.org/wiki/Anduril_Industries
Anduril Industries, Inc. is an American defense technology company that specializes in autonomous systems. It was cofounded in 2017 by inventor and entrepreneur Palmer Luckey and others. Anduril aims to sell the U.S. Department of Defense advanced technology, including artificial intelligence and robotics. Anduril's major products include unmanned aerial systems (UAS) and counter-UAS (CUAS), semi-portable autonomous surveillance systems, and networked command and control software.

Related Show: Zero to One - Peter Thiel Contrarian Thinker + Disruption AZ TRT S04 EP50 (213) 12-17-2023

What We Learned This Week
  • Contrarian Thinking - think for yourself and differently from everyone else
  • Innovation: great companies have unique products that go from zero to one, vertical
  • Founders are important and challenge the status quo to change the world
  • Competition is for losers; strive for a monopoly
  • Secrets - what great company is no one building?
Disruption in the Business & Tech World - How to Handle the Innovator's Dilemma
Zero to One: Notes on Startups, or How to Build the Future (c. 2014)
Full Show: Here

PayPal Mafia - The Founders Story & Their Battle w/ EBAY w/ Jimmy Soni - BRT S03 EP36 (135) 8-7-2022

What We Learned This Week
  • PayPal Mafia - alumni created or were involved in many other companies: Tesla, SpaceX, Palantir, Yelp, Yammer, LinkedIn, Facebook, YouTube & more
  • PayPal had many contributors & was a real long shot to happen during the DOTCOM crash of 2000
  • Claude Shannon - creator of Information Theory, a predecessor of the modern computer age & algorithms
  • Bell Labs was a classic tech incubator, like Fairchild Semiconductor, Xerox PARC, Menlo Park (Edison / GE), the Manhattan Project, and Tuxedo Park
  • PayPal sold to EBAY in 2002 for $1.5 Billion; prior to this, the two companies were rivals, as EBAY wanted a different payment system

Guest: Jimmy Soni, Author
https://jimmysoni.com/
https://twitter.com/jimmyasoni

Full Show: Here

AZ TRT 2.0 - Best of Tech Part 1 - Data Centers, IT, EV Charging, Minerals & AI Software AZ TRT S05 EP21 (236) 5-26-2024

What We Learned This Week:
  • Host Matt on Data Centers + Energy Usage
  • Lucian Aguayo of Redgear on IT Infrastructure
  • Broc TenHouten of Intrinsic Power on EV Charging
  • Brian Stevens of Neural Magic on AI Software
  • Dr. Nick Sakharav of Reclaimed Minerals on Energy

'Best of' clips from previous Tech-themed shows aired in the first half of 2024
Full Show: Here

Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science
AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council
*Includes Best of AZ Tech Council show from 2/12/2023
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening. Please Subscribe to the AZ TRT Podcast.

AZ Tech Roundtable 2.0 with Matt Battaglia
The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business & how classic industries are evolving. Common topics discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more...

AZ TRT Podcast Home Page: http://aztrtshow.com/
'Best Of' AZ TRT Podcast: Click Here
Podcast on Google: Click Here
Podcast on Spotify: Click Here
More Info: https://www.economicknight.com/azpodcast/
KFNX Info: https://1100kfnx.com/weekend-featured-shows/

Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests, and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or their affiliates, members, managers, employees, or partners), or any Station, Podcast Platform, Website, or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or a recommendation in: business, legal, real estate, crypto, tax, accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, and other business arrangements.

Code-Garage
Code-Garage #116 - The history of Xerox PARC


Feb 17, 2025 | 7:43


The legendary research center behind many innovations, such as the graphical user interface and Ethernet, at the heart of the modern computing revolution.

Episode notes:
The "mother of all demos": https://code-garage.fr/videos?id=34
Web accessibility course: https://code-garage.com/courses/accessibilite-web

Go To Market Grit
#229 Former CEO Activision Blizzard, Bobby Kotick w/ Bing Gordon: Change the Game


Feb 10, 2025 | 119:32


Guest: Bobby Kotick, former CEO of Activision Blizzard; and Bing Gordon, general partner at Kleiner Perkins

In 2020, when President Trump signed the executive order that would ban TikTok in the U.S., Bobby Kotick called his old friend Steven Mnuchin. The former Secretary of the Treasury told him that, if TikTok's U.S. operations were to be sold to an American company, Microsoft would be the only bidder. A couple of calls later, he reached ByteDance founder and CEO Zhang Yiming, who said he'd rather sell to Bobby than Microsoft. Concerned about his ability to get the deal done solo, Bobby called Microsoft CEO Satya Nadella and offered to make a joint bid. Nadella declined, but added, "if the deal doesn't get done, we should sit down and talk about us buying Activision." TikTok currently remains Chinese-owned, but three years later, Microsoft paid $68.7 billion for Activision Blizzard.

Mentioned in this episode: Harvard-Westlake School, Alison Ressler, Vivendi, Berkshire Hathaway, Bruce Hack and Arnaud de Puyfontaine, John Riccitiello and EA, Call of Duty, Bizarre Creations, Atari, Apple II, Commodore 64, Jean-Louis Gassée, Apple Lisa, Howard Lincoln, Philips, Magnavox Odyssey, Sutter Hill Ventures, Infocom and Zork, Toys-R-Us, Howard Hughes, E. Parry Thomas, Sun Valley, Thom Weisel, William Morris Endeavor, Guitar Hero, Davidson & Associates, Michael Morhaime, Allen Adham, World of Warcraft, Medal of Honor, Steven Spielberg, Michael Crichton, Chris Roberts, Overwatch, Tencent, Time Warner, Jeff Bewkes, Sheryl Sandberg, Lean In, Lina Khan, Samsung, Elon Musk, James L. Jones, UFC, E. Floyd Kvamme, Toy Story 2, Procter & Gamble, Ron Doornik, John Lasseter, Xerox PARC, Shigeru Miyamoto, Satoru Iwata, Goldeneye 007, James Bond, Barbara Broccoli, Oculus, Apple Vision Pro, Bill Gates, Steve Ballmer, Sam Altman, Mustafa Suleyman, Spotify, Candy Crush Saga, Disney, Phil Spencer, Clarence Avant and Motown Records.

Links:
Connect with Bobby: Twitter | LinkedIn
Connect with Bing: Twitter | LinkedIn
Connect with Joubin: Twitter | LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

Danielle Newnham Podcast
Pixar Co-Founder Alvy Ray Smith (REPLAY)


Feb 3, 2025 | 59:49


Dr Alvy Ray Smith is the co-founder of Pixar, a computer scientist, and a pioneer in the field of computer graphics. To celebrate 39 years to the day since Pixar was officially founded, I wanted to release my interview with Alvy from Series 3.

After starting his career in academia, Alvy had an epiphany following a serious skiing accident. He decided to move to California to combine his two passions - art and computers - in a place where he felt something good was about to happen. Alvy was always a pioneer. From creating his first computer graphic in 1965, Alvy became an original member of the Computer Graphics Lab at the New York Institute of Technology, he witnessed the birth of the personal computer at Xerox PARC, and he was the first director of computer graphics at George Lucas's Lucasfilm. It was there that Alvy gathered some of the smartest people he knew to develop computer graphics software, including early renderer technology. He and colleague Ed Catmull then spun out to co-found the famous Pixar, soon followed by the hiring of Lucasfilm colleague John Lasseter, and Steve Jobs as an investor. It was at Pixar that Toy Story would be made - the very first entirely computer-animated feature film. In 2006, Pixar was sold to Disney for $7.4 billion.

In this interview, Alvy recounts his career from the early days at Xerox PARC to how Pixar got started. We discuss the Pixar journey in detail, as well as his latest book, A Biography of the Pixel (you can buy it here), including how innovation is born from three strands: an idea, chaos, and a tyrant. And how Steve Jobs was both the saviour and the tyrant in the incredible Pixar story.

A true pioneer, this is one of my favourite conversations. Enjoy!

-----

NB This episode was first released in Series 3.

Let us know what you think of this episode and please rate, review and share - it means the world to me and helps others to find it too.

Danielle Twitter / Instagram / Substack Newsletter / YouTube

All my podcast episodes are edited with Descript - try it for FREE here

Alvy Ray Smith on Twitter @alvyray / website
Buy Alvy Ray Smith's book A Biography of the Pixel here.

-----

This episode was hosted by me - Danielle Newnham, a recovering founder, author and writer who has been interviewing tech founders and innovators for ten years - and produced by Jolin Cheng.

Image of Alvy Ray by Christopher Michel.

Turpentine VC
E69: Lost Tapes: Elad Gil on the Eve of the AI Boom

Dec 18, 2024 | 50:17


This ‘lost episode' of Turpentine VC with investor Elad Gil was recorded at a unique moment in time (September 2022), before ChatGPT, Claude, and Perplexity launched. Drawing from his experiences at Google, Elad discusses early ML systems, the open-source debate, NVIDIA, labor markets, and AI alignment, and shares some behind-the-scenes anecdotes.

Crazy Wisdom
Episode #411: From Gutenberg to Jobs: The Threads of Technological Evolution


Nov 22, 2024 | 49:03


On this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Tim Bajarin, Chairman of Creative Strategies, Inc., for a fascinating exploration of the evolution of technology. The conversation spans Tim's early career during the dawn of personal computing in the 1980s, historical reflections on pivotal inventions like Gutenberg's printing press, the legacy of Xerox PARC, and the rise of Apple's graphical interface and desktop publishing. They also discuss the human dynamics of innovation, from the tight-knit tech communities of Silicon Valley to parallels with historic institutions like the Royal Society. For more insights into Tim Bajarin's ongoing work, you can explore his articles on Forbes or visit Creative Strategies at creativestrategies.com.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Background
00:54 Entering the PC Market in the 1980s
05:39 Historical Context and Technological Evolution
13:21 The Impact of Desktop Publishing
24:54 The Role of Historical Knowledge in Technology
38:12 The Influence of British Technological Advancements
47:30 Conclusion and Final Thoughts

Key Insights
  • The Historical Context of Innovation is Crucial for Understanding Technology's Future: Tim Bajarin emphasizes that to forecast the future of technology, one must understand its historical roots. His career as an analyst has been informed by studying transformative moments like Gutenberg's printing press and innovations in the 1800s, including the Royal Society's influence on science and technology. This perspective underscores how historical breakthroughs set the stage for modern advancements.
  • The Birth of Personal Computing Was a Collaborative Effort: Bajarin's entry into the tech industry coincided with the IBM PC launch in 1981. He became one of the first PC analysts, working with companies like Compaq, Dell, and Apple. The development of personal computing was fueled by close-knit communities of engineers and innovators who shared ideas, much like the collaborative environment of historical groups like the Royal Society.
  • Xerox PARC's Innovations Were the Bedrock for Modern Computing: The role of Xerox PARC in shaping today's computing landscape is highlighted as pivotal. Bajarin recounts their invention of the graphical user interface (GUI) and the mouse, which were foundational for Apple's Mac. Although Xerox didn't capitalize on these ideas, their contributions enabled Steve Jobs and others to build the computing paradigms we use today.
  • Desktop Publishing Revolutionized Communication and Creativity: Bajarin predicted the desktop publishing boom, thanks to innovations like Apple's laser printer, PageMaker software, and PostScript technology. These advancements transformed the publishing industry, allowing individuals and small businesses to create professional-quality content, democratizing access to creative tools.
  • Steve Jobs' Return to Apple Marked a Turning Point in Design and Vision: When Steve Jobs returned to Apple in 1997, the company was near bankruptcy. Bajarin describes how Jobs refocused Apple on its core customers, introduced innovative industrial design, and created products like the colorful iMac. This redefined how consumers viewed computers, blending functionality with aesthetic appeal and cementing Apple's market position.
  • The Evolution of Technology is Driven by Both Process and Innovation: Bajarin explains how every major technological leap, from the printing press to the PC, has involved the convergence of innovative devices and refined processes. For instance, advancements in printing presses during the 1800s mirrored the systematic innovations in the tech industry during the 1980s and 1990s.
  • The Role of Community and Networks in Driving Innovation: The episode draws a parallel between the 1980s tech clubs in Silicon Valley and earlier knowledge-sharing networks, such as the letter-writing analysts of Renaissance Italy or the Royal Society. Bajarin illustrates how communities of like-minded individuals, whether in tech or science, have always been instrumental in fostering innovation.

Video Game Newsroom Time Machine

Jack buys Atari! Imagine goes belly up! Nintendo's Famicom gets Zapped! These stories and many more on this episode of the VGNRTM!

This episode we will look back at the biggest stories in and around the video game industry in July 1984. As always, we'll mostly be using magazine cover dates, and those are of course always a bit behind the actual events.

Alex Smith of They Create Worlds is our cohost. Check out his podcast here: https://www.theycreateworlds.com/ and order his book here: https://www.theycreateworlds.com/book

Get us on your mobile device:
Android: https://www.google.com/podcasts?feed=aHR0cHM6Ly92aWRlb2dhbWVuZXdzcm9vbXRpbWVtYWNoaW5lLmxpYnN5bi5jb20vcnNz
iOS: https://podcasts.apple.com/de/podcast/video-game-newsroom-time-machine

And if you like what we are doing here at the podcast, don't forget to like us on your podcasting app of choice, YouTube, and/or support us on patreon! https://www.patreon.com/VGNRTM

Send comments on Mastodon @videogamenewsroomtimemachine@oldbytes.space
Or twitter @videogamenewsr2
Or Instagram https://www.instagram.com/vgnrtm
Or videogamenewsroomtimemachine@gmail.com

Links: If you don't see all the links, find them here:

7 Minutes in Heaven: Pitfall II
    Video Version: https://www.patreon.com/posts/7-minutes-in-ii-115206120
    https://www.mobygames.com/game/6946/pitfall-ii-lost-caverns/

Corrections: June 1984 Ep - https://www.patreon.com/posts/june-1984-112063864
    Ethan's fine site The History of How We Play: https://thehistoryofhowweplay.wordpress.com/
    https://www.arcade-museum.com/Arcade/space-flight
    https://en.wikipedia.org/wiki/The_Lord_of_the_Rings#Motion_pictures
    https://www.imdb.com/title/tt0076929/?ref_=fn_al_tt_2

1974:

Atari makes Pong cute
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/70s/1974/CB-1974-07-06-I.pdf
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/70s/1974/CB-1974-07-20.pdf
    https://www.arcade-museum.com/Videogame/tv-basketball
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/70s/1974/CB-1974-07-27.pdf
    https://en.wikipedia.org/wiki/Doctor_Pong

Bluesky hucksters descend on video games
    The Franchise Hustlers, Chicago Tribune, 22 July 1974, page 8

Barcode scanning comes to supermarkets
    https://www.worldradiohistory.com/Archive-Poptronics/70s/1974/Poptronics-1974-07.pdf pg. 22
    https://www.nytimes.com/1974/10/04/archives/electronic-checkout-speeds-food-buying-checkout-speeded.html

Xerox PARC gets out the paintbrush
    https://archive.org/details/197407PccV2N6/page/n5/mode/2up

1984:

CES takeaways
    https://archive.org/details/computer-entertainer-3-4/page/50/mode/2up

Atari gets tight with announcements
    https://archive.org/details/computer-entertainer-3-4/page/60/mode/1up?view=theater

Atari announces Super Chip games
    https://archive.org/details/computer-entertainer-3-4/page/60/mode/1up?view=theater

Atari announces new console and computer games
    https://archive.org/details/computer-entertainer-3-4/page/60/mode/1up?view=theater

5200 RIP
    https://archive.org/details/computer-entertainer-3-4/page/61/mode/1up?view=theater

Coleco sweetens Adam deal
    https://archive.org/details/computer-entertainer-3-4/page/56/mode/2up

Jack Attack
    July 2
        Jack buys Atari
        https://archive.org/details/computer-entertainer-3-4/page/61/mode/1up?view=theater
        Warner sells Atari operations, United Press International, July 2, 1984, Monday, AM cycle
        Warner sells Atari operations, United Press International, July 2, 1984, Monday, BC cycle, Section: Financial
    July 3
        Tramiel Buys Atari, Sets Sights On Former Company, The Associated Press, July 3, 1984, Tuesday, PM cycle
        https://www.nytimes.com/1984/07/03/business/warner-sells-atari-to-tramiel.html?searchResultPosition=1
    July 4
        Home computer veteran to challenge high end of market, Financial Times (London, England), July 4, 1984, Wednesday, Section: SECTION II; International Companies; Pg. 17
    July 6
        Widespread Layoffs Begin Under New Leader, The Associated Press, July 6, 1984, Friday, BC cycle
    July 10
        https://www.nytimes.com/1984/07/14/business/commodore-trade-secrets.html?searchResultPosition=11
        https://archive.org/details/popular-computing-weekly-1984-07-19
    July 13
        No Headline In Original, PR Newswire, July 13, 1984, Friday
    July 16
        A Tough Man for a Tough Job, Newsweek, July 16, 1984, UNITED STATES EDITION, Section: BUSINESS; Pg. 50
    July 19
        Warner omits payout in reshape, Financial Times (London, England), July 19, 1984, Thursday, Section: SECTION I; Pg. 20, Byline: BY TERRY DODSWORTH IN NEW YORK
    July 23
        Atari Tells Agencies to Freeze and Puts Networks at Ease, ADWEEK, July 23, 1984, Eastern Edition, Byline: By Gail Belsky
    July 27
        https://archive.org/details/popular-computing-weekly-1984-07-26
        CHINAGLIA PURCHASES CONTROL OF COSMOS, The New York Times, July 27, 1984, Friday, Late City Final Edition, Section: Section A; Page 15, Column 1; Sports Desk
    July 30
        ADVERTISING; Consolidating Domestically at Wells, The New York Times, July 30, 1984, Monday, Late City Final Edition, Section: Section D; Page 7, Column 4; Financial Desk, Byline: By Philip H. Dougherty
        Tramiel's Atari Picks Wells, Rich; DDB Is Out, ADWEEK, July 30, 1984, Eastern Edition, Byline: By Gail Belsky

Imagine management splits
    Home Computing Weekly No. 71, July 17-23
    https://rk.nvg.ntnu.no/sinclair/industry/publishers/imagine_crash0185.htm
    Popular Computing Weekly, 19 July 1984, pg. 5
    Popular Computing Weekly, 5-11 July 1984
    https://youtu.be/ZoDh61sgCOg?si=h4ML1gsN2kVbDXWM

Sierra On-Line no more!
    https://archive.org/details/computer-entertainer-3-4/page/52/mode/2up

Coleco numbers continue down
    Control Data, Coleco Post Sharply Lower Profits, The Associated Press, July 19, 1984, Thursday, AM cycle
    No Headline In Original, United Press International, July 19, 1984, Thursday, BC cycle, Section: Financial
    COLECO AND CONTROL DATA FALL - Correction Appended, The New York Times, Section: Section D; Page 3, Column 2; Financial Desk, Byline: By STEVEN GREENHOUSE
    Playthings July 1984

Milton Bradley sees turnaround
    Profits Up 45 Percent At Toymaker, The Associated Press, July 20, 1984, Friday, BC cycle
    No Headline In Original, United Press International, July 20, 1984, Friday, BC cycle, Section: Financial, Dateline: LOS ANGELES
    Toys Hobbies & Crafts, July 1984, pg. 10

Bally positive despite games downturn
    BALLY-MANUFACTURING; Financial results, Business Wire, July 26, 1984, Thursday

Activision losses high
    ACTIVISION; Financial results, Business Wire, July 26, 1984, Thursday

Interest rates put more pressure on coinop
    Replay July 1984, pg. 3

Conversion kits take center stage!
    Play Meter July 15, 1985
    Replay July 1984, pg. 4
    Play Meter July 15, 1985 pg. 43

Mylstar continues suit against Bally
    Play Meter July 15, 1985 pg. 16

Mylstar brings people to FMV
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/80s/1984/CB-1984-07-21.pdf
    https://www.youtube.com/watch?v=2lfIpajVaAI

Universal moves into restaurant/arcade biz
    Restaurant-game center to open, The Japan Economic Journal, July 17, 1984, Section: SPECIAL U.S. SECTION; Pg. 11

Photon zaps onto the scene
    Close Encounters on Photon, Newsweek, July 23, 1984, UNITED STATES EDITION, Section: ENTERTAINMENT; Pg. 62, Byline: LYNN LANGWAY with BARBARA BURGOWER in Dallas

Stern Electronics files for Chapter 11
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/80s/1984/CB-1984-07-28.pdf

Twin Galaxies shuts down
    Play Meter July 15, 1984 pg. 18

Industry giants pass
    https://www.worldradiohistory.com/Archive-All-Music/Cash-Box/80s/1984/CB-1984-07-21.pdf

Family computer goes lightgun
    AM Life July 1, 1984, 58

CPC goes on sale
    Home Computing Weekly No. 69 July 3-9
    https://www.devuego.es/blog/2023/01/10/press-start-cinco-duros/

Sinclair looks overseas
    Home Computing Weekly No. 69 July 3-9 pg. 5

QL confusion
    Popular Computing Weekly July 26, pg. 5

Acorn and BBC extend deal
    Popular Computing Weekly July 19, pg. 5

Sanyo and Canon to launch MSX in Europe
    Sanyo to ship MSX PCs to Europe, The Japan Economic Journal, July 17, 1984, Section: ELECTRICALS & ELECTRONICS; Pg. 15
    CANON TO EXPORT MSX PERSONAL COMPUTERS TO EUROPE, JULY 17, 1984, TUESDAY

Commodore unveils C16
    https://archive.org/details/computer-entertainer-3-4/page/56/mode/2up

Korea joins the chip wars
    Massive investment would cause oversupply; Trilateral friction in microchip business seen among Japan, U.S. and Korea, The Japan Economic Journal, July 24, 1984, Section: Pg. 20, Byline: By TETSURO WADA

Mastertronic joins with Galactic
    Home Computing Weekly No. 71 July 17-23, 1984 pg. 5

Mastertronic saves Carnell Software
    Popular Computing Weekly July 19, pg. 5

Okimate brings color to the Commodore
    https://archive.org/details/computer-entertainer-3-4/page/57/mode/1up?view=theater

Pitfall Harry is yours to command!
    https://archive.org/details/program-pitfall/mode/2up
    https://archive.org/details/computer-entertainer-3-4/page/57/mode/1up?view=theater

WarGames gets real
    No Headline In Original, The Associated Press, July 17, 1984, Tuesday, AM cycle
    https://www.zerothreesecurity.com/index.php/about-us/founder

The history of games revealed
    https://archive.org/details/book_video_games/page/n77/mode/2up

The book of adventure games
    https://archive.org/details/computer-entertainer-3-4/page/50/mode/2up
    https://archive.org/details/the-book-of-adventure-games/page/n185/mode/2up

Activision sues Microdeal
    Popular Computing Weekly 26 July, 1984, pg. 1, 2

Tax cuts for computer purchases slashed
    PERSONAL FINANCE; LIMITING TAX BREAKS FOR COMPUTERS, The New York Times, July 1, 1984, Sunday, Late City Final Edition, Section: Section 3; Page 11, Column 1; Financial Desk

1984 - the Year of the VCR
    Sales of Color TVs and VCRs Booming, The Associated Press, July 15, 1984, Sunday, AM cycle, Section: Washington Dateline, Byline: By NORMAN BLACK, Associated Press Writer

Video game palsy is the new scare

LA Games get high tech boost
    A reporter's Olympics notebook; How Do I Thank Thee? Let Me Count the Ways, United Press International, July 26, 1984, Thursday, BC cycle, Section: Sports News, Byline: By RONALD E. COHEN

The Last Starfighter released
    https://archive.org/details/computer-entertainer-3-4/page/60/mode/1up?view=theater
    'Ghostbusters,' 'Gremlins' still top box office after six weeks, United Press International, July 16, 1984, Monday, AM cycle, Section: Domestic News, Byline: By FRANK SANELLO, UPI Entertainment Reporter

Sirius files for Chapter 11
    https://archive.org/details/computer-entertainer-3-4/page/56/mode/2up

Recommended Links:
The History of How We Play: https://thehistoryofhowweplay.wordpress.com/
Gaming Alexandria: https://www.gamingalexandria.com/wp/
They Create Worlds: https://tcwpodcast.podbean.com/
Digital Antiquarian: https://www.filfre.net/
The Arcade Blogger: https://arcadeblogger.com/
Retro Asylum: http://retroasylum.com/category/all-posts/
Retro Game Squad: http://retrogamesquad.libsyn.com/
Playthrough Podcast: https://playthroughpod.com/
Retromags.com: https://www.retromags.com/
Games That Weren't - https://www.gamesthatwerent.com/

Sound Effects by Ethan Johnson of History of How We Play.

Copyright Karl Kuras

Catherine Toon
EP #249 - What is the Good News of the Gospel? Interview with Rod Williams - Audio


Oct 22, 2024 | 72:07


The Gospel, when rightly comprehended, is Life from death, Light from darkness, and Truth as a Person, Who sets free ALL of humanity and the created realm. The good news of the gospel reflects a God Who took/takes it upon Himself to cover every aspect of what it took/takes to redeem, heal, restore, renew and transform all for all His beloved kids and all creation back to original design. And He did/does this at His expense - talk about pure Love and pure grace! Join Rod Williams and me as we discuss what the Gospel actually means, from the only eternally sound lens/hermeneutic: the lens of Christ in union with Father & Holy Spirit and all of humanity.

Rod Williams is a compelling speaker and writer with a passion to empower believers to know their true identity and live joyfully out of their present union with Christ, made possible by what Jesus accomplished through the Cross of His Grace. Rod has served in many capacities in churches over the years, including Senior Pastor at The Santa Cruz Church, Senior Coordinator at Cana Seminary and Alumni Professor at School of Kingdom. He holds a degree in Theology from Pacific Coast Baptist Bible College and participated in leadership training intensives at Bethel Church in Redding, California, Global Celebration, Jesus Ministry and Global Awakening ministries. Rod previously worked as an entrepreneur and consultant in Silicon Valley for 30 years, developing network computing technology and writing for companies such as Apple Computer, Cisco Systems, Microsoft, Xerox PARC, Cadence, and several semiconductor companies. His writing has been featured in Electronic Engineering Times and other industry publications, including one of the first books on wireless handheld computing.

LINKS MENTIONED:
Social Media -
► Facebook: @Rodeen Williams
► X: deltakainos
Websites -
► GAN: https://GANTV.com
► thenewmystics.com
► johncrowder.net
► https://perichoresis.org/across-all-worlds-live-with-baxter/
YouTube Accounts Mentioned:
► @JohnCrowder
► @BlissCoCo
Interview: Rod Williams interviewed by Jason Clark, Rethinking God with Tacos: https://afamilystory.org/?s=Rod+williams

Please rate, review, share, and subscribe - a little thing that makes a big difference!! Thank you!

"Marked by Love, Revised & Expanded Edition" is here: #1 Best Seller & #1 New Release in our category! Find out more here! https://bit.ly/3UGeJBI
Nab your copy: https://amzn.to/3K2J9ZV

CONNECT WITH CATHERINE:
► Website: https://catherinetoon.com/
► Facebook: / catherinetoonmd
► Instagram: / catherinetoon
► Twitter: / catherinetoonmd
► Pinterest: https://pin.it/4lHhOll

FREE RESOURCES:
► Podcast: https://catherinetoon.com/perspective...
► Free eBooks: https://catherinetoon.com/free-downlo...
► Sign up for weekly prophetic emails: https://catherinetoon.com/
► Blog: https://catherinetoon.com/blog/
► Free chapter of Marked by Love: https://markedbylovebook.com/free-cha...

ABOUT CATHERINE:
Encouraging you to experience God and discover who you truly are! Catherine has been in the business of changing lives for decades as an author, speaker, and prophetic coach. She is incredibly gifted at calling forth personal destiny and has helped thousands of individuals who are on that journey.

SunCast
734: From Google's Moonshot to GM's Vehicle-to-Everything | Lessons in Innovation from Ty Jagerson


Aug 29, 2024 | 78:14


How does an idea go from concept to lab, from lab to field, and finally attract massive capital investment? What can a decade of business building behind the former Iron Curtain, in post-Cold-War Eastern Europe, teach you?

Ty Jagerson has spent his 25+ year executive career launching and growing businesses at Xerox (PARC), SolFocus, Google X, and most recently General Motors. His work at Google X, on Tapestry, the Moonshot Factory project, focused on overcoming the data bottlenecks that hinder the integration of distributed energy resources (DERs) like solar and electric vehicles into the grid. Long before he spun novel technology out of PARC (you might have heard of SolFocus?), he was learning the ropes in Ukraine and Russia, exporting space tech in the '90s! Throughout his career, Jagerson's work reflects a relentless pursuit of integrating renewable energy into the grid, all while navigating the high-stakes, competitive landscape that he likens to the "Hunger Games" of the energy industry.

You'll discover:
  • The long arc of innovation, and how it often takes 7 to 14 years (!) for technology to really become commercially viable.
  • Insights into how corporate environments may not always be the best incubators for innovation but can serve as safe havens for developing ideas from external sources.
  • The importance of industry "scar tissue" and how past failures contribute to future successes.
  • Insights from Ty's work at Google X and on V2X (Vehicle-to-Everything) at GM on integrating DERs.

Listen in to learn how one iconic clean energy pioneer, Ty Jagerson, navigated the challenges of integrating DERs with the grid, offering critical lessons for entrepreneurs striving to innovate and succeed in the energy sector.

If you want to connect with today's guest, you'll find links to his contact info in the show notes on the blog at https://mysuncast.com/suncast-episodes/.

Our Platinum Presenting Sponsor for SunCast is CPS America!
SunCast is proudly supported by Trina Solar.
You can learn more about all the sponsors who help make this show free for you at www.mysuncast.com/sponsors.
Remember, you can always find resources, learn more about today's guest and explore recommendations, book links, and more than 730 other founder stories and startup advice at www.mysuncast.com.

Subscribe to Valence, our weekly LinkedIn Newsletter, and learn the elements of compelling storytelling: https://www.linkedin.com/newsletters/valence-content-that-connects-7145928995363049472/

You can connect with me, Nico Johnson, on:
Twitter - https://www.twitter.com/nicomeo
LinkedIn - https://www.linkedin.com/in/nickalus

Mentioned in this episode: CPS July 2024 V2

Training Data
Fireworks Founder Lin Qiao on How Fast Inference and Small Models Will Benefit Businesses


Aug 13, 2024 | 39:18


In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI. The AI customer journey moves from training to inference, and as these first products find PMF, many are hitting a wall on latency and cost. Fireworks Founder and CEO Lin Qiao led the PyTorch team at Meta that rebuilt the whole stack to meet the complex needs of the world's largest B2C company. Meta moved PyTorch to its own non-profit foundation in 2022, and Lin started Fireworks with the mission to compress the timeframe of training and inference and democratize access to GenAI beyond the hyperscalers, to let a diversity of AI applications thrive. Lin predicts when open and closed source models will converge and reveals her goal to build simple API access to the totality of knowledge.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned in this episode:
  • PyTorch: the leading framework for building deep learning models, originated at Meta and now part of the Linux Foundation umbrella
  • Caffe2 and ONNX: ML frameworks Meta used that PyTorch eventually replaced
  • Conservation of complexity: the idea that every computer application has inherent complexity that cannot be reduced but merely moved between the backend and frontend, originated by Xerox PARC researcher Larry Tesler
  • Mixture of Experts: a class of transformer models that route requests between different subsets of a model based on use case
  • Fathom: a product the Fireworks team uses for video conference summarization
  • LMSYS Chatbot Arena: crowdsourced open platform for LLM evals hosted on Hugging Face

Timestamps:
00:00 - Introduction
02:01 - What is Fireworks?
02:48 - Leading PyTorch
05:01 - What do researchers like about PyTorch?
07:50 - How Fireworks compares to open source
10:38 - Simplicity scales
12:51 - From training to inference
17:46 - Will open and closed source converge?
22:18 - Can you match OpenAI on the Fireworks stack?
26:53 - What is your vision for the Fireworks platform?
31:17 - Competition for Nvidia?
32:47 - Are returns to scale starting to slow down?
34:28 - Competition
36:32 - Lightning round

Go To Market Grit
#203 CEO Niantic, John Hanke: Buried Ships

Go To Market Grit

Play Episode Listen Later Aug 12, 2024 69:17


Guest: John Hanke, CEO of Niantic

When Pokémon Go launched, Niantic CEO John Hanke was enjoying a tranquil walk through a bamboo forest near Kyoto with his son. When he got back, it was all hands on deck: built on a platform Niantic had developed for its previous game, Ingress, Pokémon Go was a runaway success story, earning $100 million in revenue in its first week and $1 billion in its first seven months. "I had a huge amount of anxiety that this is just too good to be true," John recalls. "When are the wheels going to come off? What's going to go wrong?"

In this episode, John and Joubin discuss San Francisco's history, Noam Bardin, Google Street View, David Lawee, AR glasses, Field Trip and Ingress, Tsunekazu Ishihara, gaming outside, Gilman Louie, Frank Slootman, mellowing out, Thomas Kurian, Jay Chaudhry, commute burnout, daily yoga, Xerox PARC, Mark Zuckerberg, Apple Vision Pro, the history of gaming, and talking to computers.

Chapters:
(02:17) - Waze and Google Maps
(05:39) - John's childhood heroes
(07:38) - Pokémon Go's first week
(10:13) - Maps as a platform
(13:56) - Spinning Niantic off of Google
(17:36) - Hyperscaling
(19:05) - Finding Niantic's mission
(22:45) - Startups and families
(24:15) - Adrenaline and gas
(30:17) - Drive without desperation
(34:42) - Negotiating with the Pokémon Company
(38:25) - Zero to a million
(41:28) - Relief and responsibility
(43:44) - Sustaining engagement
(47:18) - Enjoying the ride more
(50:57) - Rules for balance
(55:42) - Augmented reality and wearables
(01:01:38) - Social games
(01:04:14) - LLMs and the voice UI
(01:06:52) - Who Niantic is hiring

Links:
Connect with John: Twitter, LinkedIn
Connect with Joubin: Twitter, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

Climate Stack
AI Weather Forecasts for Climate Adaptation with Dr. Peetak Mitra

Climate Stack

Play Episode Listen Later Jul 11, 2024 37:54


In this episode we speak with Dr. Peetak Mitra, veteran of countless climate change projects, member of the founding team of Excarta, core member of Climate Change AI, and gracious human being. He illuminates the role AI/ML can play in adapting to a warming planet, describes the ML techniques his company employs in their breakthrough tools, and gives advice for engineers looking to move into the climate space - in short, "just do it". We also discuss growth in the climate sector, and he shares that despite a widespread economic slowdown, investment in climate technology continues to increase. We were delighted to have him on the show.

About Dr. Peetak Mitra
Peetak is a San Francisco-based technologist passionate about leveraging AI to combat climate change. He's on the founding team of Excarta, a venture-backed startup building a breakthrough AI-powered weather intelligence platform for businesses. Prior to Excarta, he was a Member of Research Staff at Xerox PARC (now SRI-PARC), where he co-led projects on AI climate forecasting funded in part by DARPA and NASA. He has been part of Climate Change AI, organizing impactful workshops at major ML conferences including ICLR, AAAI, and NeurIPS with Turing laureate Prof. Yoshua Bengio. He has been a featured speaker on climate and AI at MIT, SF Climate Week, OpenAI, and NSF, among others. He holds a PhD in scientific machine learning from the University of Massachusetts Amherst and a Bachelor's degree from BIT Mesra.
https://www.linkedin.com/in/peetak/

Papers
The paper Peetak mentioned: Tackling Climate Change with Machine Learning - https://dl.acm.org/doi/10.1145/3485128
A milestone paper summarizing the application of ML to climate problems. Abstract: "Climate change is one of the greatest challenges facing humanity, and we, as machine learning (ML) experts, may wonder how we can help. Here we describe how ML can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by ML, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the ML community to join the global effort against climate change."

Companies and Organizations
Climate Change AI: Climate Change AI (CCAI) is an organization composed of volunteers from academia and industry who believe that tackling climate change requires concerted societal action, in which machine learning can play an impactful role. Since it was founded in June 2019 (and established as a US domestic non-profit on June 14, 2021), CCAI has led the creation of a global movement in climate change and machine learning, encompassing researchers, engineers, entrepreneurs, investors, policymakers, companies, and NGOs.
9Zero Climate Co-working Space: Launched during San Francisco Climate Week 2024, 9Zero is the hub for all things climate. Starting with coworking and events, we're uniting the entire ecosystem. Startups, investors, corporations, service providers, policymakers, academics: if you're working toward a healthier, more resilient world, you belong at 9Zero. Expanding to Seattle and LA this year. Sign up at www.9Ze

Your Hosts: Mansi Shah - Joshua Marker
ClimateStack website - https://climatestack.podcastpage.io/

Acquired
Microsoft

Acquired

Play Episode Listen Later Apr 22, 2024 263:10


Microsoft. After nearly a decade of Acquired episodes, we are finally ready to tackle the most valuable company ever created. The company that put a computer on every desk and in every home. The company that invented the software business model. The company that so thoroughly and completely dominated every conceivable competitor that the United States government intervened and kneecapped it… yet it's STILL the most valuable company in the world today.

This episode tells the story of Microsoft in its heyday, the PC Era. We cover its rise from a teenage dream to the most powerful business and technology force in history — the 20-year period from 1975 to 1995 that took Bill and Paul from the Lakeside high school computer room to launching Windows 95 alongside Jay Leno and the Rolling Stones. From BASIC to DOS, Windows, Office, Intel, IBM, Xerox PARC, Apple, Steve Jobs, Steve Ballmer… it's all here, and it's all amazing. Tune in and enjoy… Microsoft.

Sponsors: Many thanks to our fantastic Season 14 partners: J.P. Morgan Payments, ServiceNow, Pilot

Links:
- Congress changing copyright law in 1980 to include "computer programs"
- Acquired "classic" on Microsoft's 1987 acquisition of Forethought / PowerPoint
- All episode sources

Carve Outs:
- LGR
- André 3000's new album + GQ Interview
- Meta Ray-Bans
- Visual Designer Julia Rundberg
- Summer Health

More Acquired:
- Get email updates with hints on next episode and follow-ups from recent episodes
- Join the Slack
- Subscribe to ACQ2
- Check out the latest swag in the ACQ Merch Store!

Note: references to Fortune in ServiceNow sponsor sections are from Fortune ©2023. Used under license. Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

UNIQUEWAYS WITH THOMAS GIRARD
187 Bill Buxton, Design Legend

UNIQUEWAYS WITH THOMAS GIRARD

Play Episode Listen Later Apr 17, 2024 107:34


Bill Buxton has had a 50+ year romance with the human aspects of technology: interaction design, telepresence, multi-modal adaptive designs, access, and the nature of innovation. Morphing from musician to designer/researcher, he has practiced his craft at the University of Toronto, Xerox PARC, Alias Research, SGI, and Microsoft Research. Awards include four honorary doctorates, co-recipient of an Academy Award for scientific and technical achievement, the ACM/SIGCHI Lifetime Achievement Award, and Fellow of the ACM. In December 2023 he was appointed an Officer of the Order of Canada. Writer, speaker, and consultant, he is also an Adjunct Professor at the University of Toronto and Distinguished Professor of Industrial Design at TU Eindhoven. He is currently largely occupied curating his collection of over 900 artifacts documenting the history of interactive technologies. Outside of work, he has a passion for his family, books, and the outdoors.

Secrets of the High Demand Coach
Scaling Innovation through Corporate Rebels with Jim Verquist - Ep. 150

Secrets of the High Demand Coach

Play Episode Listen Later Apr 1, 2024 27:14


In this rigorous episode, Jim Verquist, Founder of Engine2 Innovation, shares how he brought the corporate rebel model to tech-based companies around the world.

You will discover:
- Why successful companies lose their mojo over time
- Why the problem isn't bureaucracy, and what it is instead
- The 3 things you need to create a second engine of growth for your business
Before launching 3 Silicon Valley startups, Jim Verquist served four years in the U.S. Marine Corps. He then earned his MBA and went to work for big companies, leading a turnaround at Millennium and a Fast Strategy transformation at Best Doctors. Now he has launched Engine2 Innovation for businesses that need a second growth engine. His firm uses the corporate rebel model: corporate rebels have created more billion-dollar breakthroughs than Bell Labs and Xerox PARC combined.
Want to learn more about Jim Verquist's work at Engine2 Innovation? Check out his website at https://engine2.us/

Software Defined Talk
Episode 455: LTS: Let Thou Support it

Software Defined Talk

Play Episode Listen Later Feb 23, 2024 52:48


This week, we discuss open source forks, what's going on at OpenAI, and check in on the IRS Direct File initiative. Plus, plenty of thoughts on taking your annual Code of Conduct Training. Watch the YouTube Live Recording of Episode 455 (https://www.youtube.com/watch?v=PAwXvnb53iY).

Runner-up Titles
- I live my life one iCal screen at a time
- We always have sparklers
- Meta-parenting
- Everyone is always tired
- Cheaper version of Red Hat
- This week in "Do we need to be angry?"
- All we get is wingdings.
- I'm in a Socialist mood this week
- Pies shot out of my eyes and stuff
- Those dingalings bought my boat
- Dingalings of the mind

Rundown
- CIQ Offers Long-Term Support for Rocky Linux 8.6, 8.8 and 9.2 Images Through AWS Marketplace (https://ciq.com/press-release/ciq-offers-long-term-support-for-rocky-linux-8-6-8-8-and-9-2-images-through-aws-marketplace/)
- Will CIQ's new support program alienate the community? (https://medium.com/@gordon.messmer/will-ciqs-new-support-program-alienate-the-community-it-built-on-an-objection-to-subscriber-only-fb58ea6a810e)
- NGINX fork (https://narrativ.es/@janl/111935559549855751)? freenginx.org (http://freenginx.org/en/)
- Struggling database company MariaDB could be taken private in $37M deal (https://techcrunch.com/2024/02/19/struggling-database-company-mariadb-could-be-taken-private-in-a-37m-deal/)
- Tofu (https://opentofu.org)
- So Where's That New OpenAI Board? (https://www.theinformation.com/articles/so-wheres-that-new-openai-board?utm_source=ti_app&rc=giqjaz)
- The IRS has all our tax data. Why doesn't its new website use it? (https://www.washingtonpost.com/business/2024/02/04/direct-file-irs-taxes/)

Relevant to your Interests
- Apple on course to break all Web Apps in EU within 20 days - Open Web Advocacy (https://open-web-advocacy.org/blog/apple-on-course-to-break-all-web-apps-in-eu-within-20-days/)
- Bringing Competition to Walled Gardens - Open Web Advocacy (https://open-web-advocacy.org/walled-gardens-report/#apple-has-effectively-banned-all-third-party-browsers)
- Introducing the Column Explorer: a bird's-eye view of your data (https://motherduck.com/blog/introducing-column-explorer/?utm_medium=email&_hsmi=294232392&_hsenc=p2ANqtz-8vobC3nom9chsGc_Y8KM9pO75KKvrGTtL7uS-sfcNQ1sNd8ThaMnP5KsfbSUWCWW2KOjlPpa3AwC4ToYbaCmYOAMva0rvKIZ2jkB461YKJX2TLQtg&utm_content=294233055&utm_source=hs_email)
- Apple TV+ Became HBO Before HBO Could Become Netflix (https://spyglass.org/its-not-tv-its-apple-tv-plus/?utm_source=substack&utm_medium=email)
- Sora: Creating video from text (https://openai.com/sora)
- Sustainability, a surprisingly successful KPI: GreenOps survey results - ClimateAction.Tech (https://climateaction.tech/blog/sustainability-kpi-greenops-survey-results/)
- Slack AI has arrived (https://slack.com/intl/en-gb/blog/news/slack-ai-has-arrived)
- What's new and cool? - Adam Jacob (https://youtu.be/gAYMg6LNEMs?si=9PRiK1BBHaBGSypy)
- Apple is reportedly working on AI updates to Spotlight and Xcode (https://www.theverge.com/2024/2/15/24074455/apple-generative-ai-xcode-spotlight-testing)
- Apple Readies AI Tool to Rival Microsoft's GitHub Copilot (https://www.bloomberg.com/news/articles/2024-02-15/apple-s-ai-plans-github-copilot-rival-for-developers-tool-for-testing-apps)
- VMs on Kubernetes with KubeVirt session at KubeCon (https://kccnceu2024.sched.com/event/1YhIE/sponsored-keynote-a-cloud-native-overture-to-enterprise-end-user-adoption-fabian-deutsch-senior-engineering-manager-red-hat-michael-hanulec-vice-president-and-technology-fellow-goldman-sachs)
- Air Canada must honor refund policy invented by airline's chatbot (https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/?comments=1&comments-page=1)
- Microsoft 'retires' Azure IoT Central in platform rethink (https://www.theregister.com/2024/02/15/microsoft_retires_azure_iot_central/)
- The big design freak-out: A generation of design leaders grapple with their future (https://www.fastcompany.com/91027996/the-big-design-freak-out-a-generation-of-design-leaders-grapple-with-their-future)
- Most of the contents of the Xerox PARC team's work were tossed into a dumpster (https://x.com/DynamicWebPaige/status/1759071289401368635?s=20)
- 1Password expands its endpoint security offerings with Kolide acquisition (https://techcrunch.com/2024/02/20/1password-expands-its-endpoint-security-offerings-with-kolide-acquisition/)
- Microsoft Will Use Intel to Manufacture Home-Grown Processor (https://www.bloomberg.com/news/articles/2024-02-21/microsoft-will-use-intel-to-manufacture-home-grown-processor)
- In a First, Apple Captures Top 7 Spots in Global List of Top 10 Best-selling Smartphones - Counterpoint (https://www.counterpointresearch.com/insights/apple-captures-top-7-spots-in-global-top-10-best-selling-smartphones/)
- Google Is Giving Away Some of the A.I. That Powers Chatbots (https://www.nytimes.com/2024/02/21/technology/google-open-source-ai.html)
- Apple Shuffles Leadership of Team Responsible for Audio Products (https://www.bloomberg.com/news/articles/2024-02-20/apple-shuffles-leadership-of-team-responsible-for-audio-products?srnd=premium)
- Signal now lets you keep your phone number private with the launch of usernames (https://techcrunch.com/2024/02/20/signal-now-lets-you-keep-your-phone-number-private-with-the-launch-of-usernames/)
- How Google is killing independent sites like ours (https://housefresh.com/david-vs-digital-goliaths/)
- VMware takes a swing at Nutanix, Red Hat with VM converter (https://www.theregister.com/2024/02/21/vmware_kvm_converter/)

Nonsense
- An ordinary squirt of canned air achieves supersonic speeds - engineer spots telltale shock diamonds (https://www.tomshardware.com/desktops/pc-building/an-ordinary-squirt-of-canned-air-achieves-supersonic-speeds-engineer-spots-telltale-shock-diamonds)

Conferences
- SCaLE 21x/DevOpsDays LA, March 14th–17th, 2024 (https://www.socallinuxexpo.org/scale/21x) — Coté speaking (https://www.socallinuxexpo.org/scale/21x/presentations/we-fear-change), sponsorship slots available.
- KubeCon EU Paris, March 19–22 (https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/) — Coté on the wait list for the platform side conference. Get 20% off with the discount code KCEU24VMWBC20.
- DevOpsDays Birmingham, April 17–18, 2024 (https://talks.devopsdays.org/devopsdays-birmingham-al-2024/cfp)
- Executive dinner in Dallas that Coté's hosting on March 13th, 2024 (https://ismg.events/roundtable-event/dallas-robust-security-java-applications/?utm_source=cote&utm_campaign=devrel&utm_medium=newsletter&utm_content=newsletterUpcoming). If you're an "executive" who might want to buy stuff from Tanzu to get better at your apps, then register. There is also a Tanzu exec event coming up in the next few months; email Coté (mailto:cote@broadcom.com) if you want to hear more about it.

SDT news & hype
- Join us in Slack (http://www.softwaredefinedtalk.com/slack).
- Get a SDT sticker! Send your postal address to stickers@softwaredefinedtalk.com and we will send you free laptop stickers!
- Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured).
- Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total.
- Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)!

Recommendations
- Brandon: Fair Play on Netflix (https://www.netflix.com/title/81674326)
- Matt: Julia Evans: Popular Git Config Options (https://jvns.ca/blog/2024/02/16/popular-git-config-options/)
- Coté: Anker USB C Charger (Nano II 65W) Pod 3-Port PPS Fast Charger (https://www.amazon.de/dp/B09LLRNGSD?psc=1&ref=ppx_yo2ov_dt_b_product_details)

Photo Credits
- Header (https://unsplash.com/photos/a-couple-of-large-sculptures-sitting-on-top-of-a-cement-floor-g4xIcepnx6I)
- Google Gemini

AZ Tech Roundtable 2.0
Pre Silicon Valley - Claude Shannon & Bell Labs w/ Jimmy Soni - AZ TRT S04 EP49 (212) 12-10-2023

AZ Tech Roundtable 2.0

Play Episode Listen Later Dec 14, 2023 33:26


Pre Silicon Valley - Claude Shannon & Bell Labs w/ Jimmy Soni - AZ TRT S04 EP49 (212) 12-10-2023

Revisit the show with clips from: PayPal Mafia - The Founders Story & Their Battle w/ EBAY w/ Jimmy Soni - BRT S03 EP36 (135) 8-7-2022. Full Show: HERE

What We Learned This Week
- PayPal Mafia – alumni created or were involved in many other companies – Tesla, SpaceX, Palantir, Yelp, Yammer, LinkedIn, Facebook, YouTube & more
- PayPal had many contributors & was a real long shot to happen during the dot-com crash of 2000
- Claude Shannon – creator of information theory, predecessor to the modern computer age & algorithms
- Bell Labs was a classic tech incubator like Fairchild Semiconductor, Xerox PARC, Menlo Park (Edison / GE), the Manhattan Project, Tuxedo Park
- PayPal sold to EBAY in 2002 for $1.5 billion; prior to this, the two companies were rivals, as EBAY wanted a different payment system

Full Show: HERE

Guest: Jimmy Soni, Author
https://jimmysoni.com/
https://twitter.com/jimmyasoni
https://www.linkedin.com/in/jimmysoni/

My books are passion projects. My topics come because I look for a book to buy on the subject and can't find one. I know it's supposed to be fancier than that, or that there must be some grand theory of my work, but there isn't one. That said, my readers seem to enjoy what I've written, so maybe it's fine? I am inspired by my literary heroes, including Robert Caro, Laura Hillenbrand, Candice Millard, Daniel James Brown, and Barbara Tuchman, among many others. They are all rigorous researchers—but reading their books doesn't feel like doing homework. That's what I'm going for, and hopefully I hit the mark a few times. For me, books are all-consuming projects, leaving little other time for the things that should populate this section like hobbies, interests, and even the ability to remain in basic touch with people. I enjoy obsessing over a subject for years, and my goal is to find as much information as possible and then make the material readable for a general audience. When not writing or reading, I spend time with my daughter in Brooklyn, NY. If you'd like to connect, please drop me a line at hello [@] jimmysoni.com.

https://jimmysoni.com/books/

The Founders: The Story of PayPal and the Entrepreneurs Who Shaped Silicon Valley
A definitive, deeply reported look at the origin of PayPal and its founding team, including Elon Musk, Peter Thiel, Reid Hoffman, Max Levchin, and others whose stories have never before been told. They defined the modern world. This experience defined them.

https://en.wikipedia.org/wiki/PayPal_Mafia
PayPal Mafia
- Elon Musk – Tesla, SpaceX, Boring Co.
- Peter Thiel – 1st FB investor, Airbnb investor, Founders Fund, Palantir
- Reid Hoffman – LinkedIn (sold to Microsoft)
- Max Levchin – Affirm, investor in Yelp
- David O. Sacks – Geni.com & Yammer
- Chad Hurley – YouTube
- Russel Simmons – Yelp

https://fintechboomer.com/guide-evaluate-the-founders-the-story-of-paypal-and-the-entrepreneurs-who-formed-silicon-valley/
https://www.pressreader.com/india/the-hindu-business-line/20220620/281758452959411
https://twitter.com/jimmyasoni/status/1488992532268732419

A Mind at Play: How Claude Shannon Invented the Information Age
In this elegantly written, exhaustively researched biography, Soni and Goodman reveal Claude Shannon's full story for the first time. With unique access to Shannon's family and friends, A Mind at Play brings this singular innovator and always playful genius to life.
https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/
QUANTIZED COLUMNS: How Claude Shannon Invented the Future
Today's information age is only possible thanks to the groundbreaking work of a lone genius.

https://www.youtube.com/watch?v=M9hfWiQKhcs&t=2s
A Mind at Play | Jimmy Soni & Rob Goodman | Talks at Google

Life in Code and Digits: When Shannon met ... - ScienceOpen
Shannon is credited with the invention of signal-flow graphs, in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer. For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing.

Ed Thorp, Claude Shannon and the World's First ... - Winton (https://www.winton.com, Jul 13, 2018)
Thorp, 85, is a former American mathematics professor and hedge fund manager, who became a New York Times bestselling author in 1962 with his ...

https://www.nytimes.com/2009/02/15/magazine/15Battier-t.html
The No-Stats All-Star

Notes: Claude Shannon Bio – A Mind at Play (2017)
- Claude Shannon – mathematician & MIT professor, father of information theory – how do you make info transferable and secure in wartime?
- Friend of Alan Turing (the British mathematician); both worked on codes in WW2. German code-breaking scientists became celebrities in WW2 and raised funding.
- The science behind compressing info, digitizing info and MP3 files, transferring data.
- "A Mathematical Theory of Communication" – Shannon's paper and theory, considered the Magna Carta of the information age. A great paper theoretically and practically.
- Shannon worked on SIGSALY, the encrypted voice system.
- Imitation Game – WW2 bio movie about Alan Turing.
- Shannon's work was used for gun turrets on Navy ships, targeting projectiles.
- Bell Labs – the math group that Shannon was a part of.
- Famous groups of genius: Menlo Park – Edison/GE; the Manhattan Project – built the A-bomb; Fairchild Semiconductor – predecessor to Intel and other Silicon Valley tech companies.
- Bell Labs had money and started as the R&D department of Bell Telephone.
- Bell Telephone ran all land lines in America and had a federally guaranteed monopoly on the phone system.
- Bell invented touch-tone dialing, the transistor, satellite tech, cell tech, communication networks.
- We are all affected by Bell tech and inventions; the modern age owes a solid to Bell.
- Bell had a big group of talent and could afford all of it – the leading scientists of the time.
- During WW2 many major U.S. corporations – Bell, Ford – were recruited by the US government. The war effort created urgency – math used to shoot down the enemy.

The Founders – story of PayPal (2022)
- The dot-com burst created urgency at PayPal: bleeding money, it had to survive.
- Dot-com crash – companies started one day & bankrupt, out of business the next. Rise like a rocket and crash in 2 years.
- Next-gen genius teams: Xerox PARC, Microsoft, Apple.
- Music producer Brian Eno coined the term "scenius": scene meets genius – clusters of talent.
- American Revolution – Hamilton, Jefferson, Washington, Adams, Franklin all together for one cause.
- The Inklings, the Fugitive Poets, the 1960s British music scene, Bill Walsh's 49ers coaching staff of the 1980s.
- PayPal is the story of many – Elon Musk, Peter Thiel, Max Levchin, Reid Hoffman.
- Alumni of Fairchild Semiconductor led to Intel; Atari and Xerox PARC led to Apple.
- Post-WW2 Bell Labs faced less pressure compared to PayPal: Bell Labs allowed free-wheeling work; you could work on a project for 10 years.
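If you want to see Shannon's big idea in one line of math: entropy, the average number of bits needed per symbol, is what makes compression (MP3s, digitized info) quantifiable. A quick illustrative computation in Python (the sample message below is made up for the example):

    from collections import Counter
    from math import log2

    def entropy_bits(message: str) -> float:
        """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies."""
        counts = Counter(message)
        total = len(message)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # A skewed message compresses well because its entropy is low:
    print(entropy_bits("aaaaaaab"))  # ~0.54 bits/symbol, far below the 1 bit
                                     # a fair a/b coin flip would require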
PayPal Mafia - The Founders Story & Their Battle w/ EBAY w/ Jimmy Soni  - BRT S03 EP36 (135) 8-7-2022 Full Show: HERE   More on Bell Labs:   'The Idea Factory': How Bell Labs invented the future – Article HERE           Bell Labs: The research center behind the transistor, and so much more – Article HERE         Best of Biotech from AZ Bio & Life Sciences to Jellatech: HERE   Biotech Shows: HERE   AZ Tech Council Shows:  https://brt-show.libsyn.com/size/5/?search=az+tech+council *Includes Best of AZ Tech Council show from 2/12/2023     ‘Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT      Thanks for Listening. Please Subscribe to the BRT Podcast.     AZ Tech Roundtable 2.0 with Matt Battaglia The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business.  AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving.  Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more…    AZ TRT Podcast Home Page: http://aztrtshow.com/ ‘Best Of' AZ TRT Podcast: Click Here Podcast on Google: Click Here Podcast on Spotify: Click Here                    More Info: https://www.economicknight.com/azpodcast/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/   Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.  

Rehash: A Web3 Podcast
S6 E9 | Crypto-Enabled Livestreaming w/Ravi Bakhai

Rehash: A Web3 Podcast

Play Episode Listen Later Dec 7, 2023 38:10


On this episode of Rehash, we're speaking with Ravi Bakhai, founder of Hypeshot, about why livestreaming is a compelling medium across so many industries, what a crypto-enabled livestreaming platform unlocks for creators, and future livestreaming trends we can expect to see in the next five years.

Even though livestreaming has technically been around since the early 1990s — the very first livestream took place in 1993, when a few employees from Xerox PARC in California decided to livestream their buddies' band performance — livestreaming didn't really pick up steam until the late 2000s or early 2010s, when YouTube launched livestreaming and Twitch was born. Since then, livestreaming has seen slow but steady growth, primarily in niche communities. More recently, however, livestreaming has been seeping into a wider array of industries and creator types. Now we're not just seeing gamers streaming on Twitch, but also artists, musicians, and other types of creators in the streaming scene. On the opposite end of that spectrum, we're seeing big corporations building livestreaming into their corporate marketing and communications strategies as well.

Ravi also shares the latest on Hypeshot, the crypto-enabled livestreaming platform he's building, as well as how some of his favorite streamers have been using the crypto elements there. Finally, we riff on some serious and some not so serious use cases we'd like to see on Hypeshot in the future.

COLLECT THIS EPISODE
https://www.rehashweb3.xyz/

FOLLOW US
Rehash: https://twitter.com/rehashweb3
Diana: https://twitter.com/ddwchen
Ravi: https://twitter.com/ravibakhai
Hypeshot: https://twitter.com/HypeshotHQ

LINKS
"Life is Non-fungible" TED Talk (Roham Gharegozlou): https://youtu.be/u894J50AqOs?si=9cHa9r15HVK6eZvA

TIMESTAMPS
0:00 Intro
2:29 Origin story of Hypeshot
6:25 Free to consume, valuable to own
7:33 History of livestreaming
13:35 Monetizing from livestreaming
14:35 What crypto unlocks for livestreaming
18:11 Hypeshot vs Unlonely vs Livepeer
20:21 Distribution in web3 media
23:10 Ravi's favorite Hypeshot streamers
23:42 New use cases for crypto-enabled livestreaming
27:28 Future of web3 livestreaming
32:49 Who Ravi wants to hear on the podcast
33:39 How Ravi crypto pills his friends and family
36:35 Follow Ravi

DISCLAIMER: The information in this video is the opinion of the speaker(s) only and is for informational purposes only. You should not construe it as investment advice, tax advice, or legal advice; it does not represent any entity's opinion but those of the speaker(s). For investment or legal advice, please seek a duly licensed professional.

When It Worked
When It Worked Podcast Jeoparty - Jim Verquist & David Buck

When It Worked

Play Episode Listen Later Nov 10, 2023 40:26


About Jim Verquist 3 Silicon Valley startups. 2 Fast Strategy transformations. Endlessly surfing waves of passion+purpose.

Simplifying Complexity
How economic policies are gamed

Simplifying Complexity

Play Episode Listen Later Oct 16, 2023 36:54


Economic policies are often gamed by individuals for personal benefit. In this episode, we explore how this gaming takes place and what economics can do about it. To do that, we're joined again by W. Brian Arthur, External Professor at the Santa Fe Institute and Researcher at the Palo Alto Research Center, formerly Xerox PARC.

Connect:
Simplifying Complexity on Twitter
Sean Brady on Twitter
Sean Brady on LinkedIn
Brady Heywood website

This show is produced in collaboration with Wavelength Creative. Visit wavelengthcreative.com for more information.

Advent of Computing
Episode 118 - Viral Dark Ages

Advent of Computing

Play Episode Listen Later Oct 15, 2023 75:35


It's finally Spook Month here on Advent of Computing! To kick things off I'm tackling a bit of a mystery. Between 1972 and 1982 there is only one well-documented virus. This period is bookended with plenty of sources and, yes, even viruses. But the decade-long span in between has almost nothing! Was this era truly safe from the grips of malicious code? Or is there a secret history lurking just beneath the surface?

Selected Sources:
https://dl.acm.org/doi/pdf/10.1145/358453.358455 - Worms at Xerox PARC!
https://archive.org/details/crimebycomputer0000park - Crime by Computer
https://archive.org/details/dr_dobbs_journal_vol_05_201803/page/n89/mode/2up - Programming Pastimes and Pleasures

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1,000 people for the first comprehensive State of AI Engineering survey, and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.

Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As "stealth" foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group, given they have no publicly released models to try out. However, ever since their $20m Series A last year, their goal has been to "develop generally capable AI agents with human-like intelligence in order to solve problems in the real world".

From RL to Reasoning LLMs

Along with their Series A, they announced Avalon, "A Benchmark for RL Generalization Using Procedurally Generated Worlds". Avalon is built on top of the open-source Godot game engine and is ~100x faster than Minecraft, enabling fast RL benchmarking with a clear reward and adjustable game difficulty.

After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors and climbing, but couldn't take on higher-level tasks. A pure RL world also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more toward the "models for reasoning" path:

"The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing."

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans made with the advent of the scientific method:

"We are better at reasoning now than we were 3,000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them."

The Full Stack Model Lab

One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a "full stack" approach:
* Models. Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks; a ~10,000 Nvidia H100 GPU cluster lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools, from coding agents for fixing type checking and linting errors to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces — IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the "bare metal phase" of agent development, and they want to take a holistic approach to building the "operating system for agents". We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps
* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round

Show Notes
* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned:
  * Momentum Contrast
  * SimCLR
  * Chelsea Finn - SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]

Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive in into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side.
So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you're eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. 
[00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. [00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? 
Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So worked in language, SimCLR came out, I think MoCo had come out, Momentum Contrast had come out earlier in 2019, SimCLR came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text and suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became Generally Intelligent, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah.
So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right. The structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. That are things that we lump under the bucket of reasoning and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. Carbs, our hyperparameter optimizer came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics. He's a plasma physicist to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them shaped blob inside imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improve the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agent's project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents then we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. 
[00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments, effectively, at different projects. And I was interested in how you mentioned that you were optimizing for improving reasoning, and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about what exactly that involves. But actually a big part, maybe 50% plus, of the work is that even if you do have models that reason well, the models are still stochastic. The way you prompt them is still kind of random, it makes them do random things. And so how do we get to something that is actually robust and reliable, such that as a user I can trust it? We have all sorts of cool things there. Like I was mentioning earlier, when I talk to other people building agents, they have to do so much work to try to get to something that they can actually productize, and it takes a long time. Agents haven't been productized yet partly for this reason: the abstractions are very leaky. We can get like 80% of the way there, but, like self-driving cars, the remaining 20% is actually really difficult. We have internally, I think, some things like an interface, for example, that lets me really easily see what the agent execution is, fork it, try out different things, modify the prompt, modify the plan that it is making. This type of interface makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it just doing something as a black box. That's an example of a type of thing that's beyond just the model pre-training. But on the model pre-training side, reasoning is a thing that we optimize for, and a lot of that is about what data we put in. [00:17:27]Swyx: It's interesting just because I always think, like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running a foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it maybe even higher? So one way I'll put this is, people estimate that Llama 2 maybe took about three to four million dollars of compute, but probably 20 to 25 million dollars' worth of labeling data. And I'm like, okay, well, that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is close to as good as human-labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of constitutional AI approach from Anthropic, and basically models sampling, training on data from other models.
I feel like there's a little bit of contamination in there, or, to put it in a statistical form, you're resampling a distribution that you already have, that you already know doesn't match human distributions. How do you feel about that, basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to make a part of the distribution really spiky. So in a sense, that's actually what we want. Because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces, and that helps optimize reasoning. Code also is a big piece of improving reasoning. And generated code is not that much worse than regular human-written code. You might even say it can be better in a lot of ways. So yeah, we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? So you built Avalon, which is your own simulated world. And when you first started, the metagame was using games to simulate things, you know, Minecraft and then OpenAI's Gym and all these things. And I think in one of your other podcasts, you mentioned Minecraft is way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure out that you needed to just build your own thing? Was it kind of like your engineering team was like, hey, this is too slow? Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is, how do you get an agent that is able to do many different tasks? What we heard from other RL researchers was that the biggest thing holding the field back is a lack of benchmarks that let us explore things like planning and curiosity, and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? A lot of what we had seen was these very handcrafted rewards. And so Avalon has a single reward that's shared across all tasks. And it also allowed us to create a curriculum, so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is, with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non-RL specialists, what is a curriculum in your terminology? [00:21:46]Kanjun: So a curriculum in this particular case is basically that the Avalon environment lets us generate simpler environments and harder environments for a given task. What's interesting is that in the simpler environments, as you'd expect, the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. And that's actually an interesting general intuition to have about training these things: what kind of signal are they getting? And how can you help it get a lot more signal?
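As a rough illustration of that idea, here is a minimal sketch of a success-rate-driven curriculum. The environment, the skill update, and the target rate are toy stand-ins, not Avalon's actual mechanism:

```python
import random

# A rough sketch of automatic curriculum: dial environment difficulty up
# or down so the agent keeps succeeding often enough to get dense reward.

class Env:
    def __init__(self, difficulty: float):
        self.difficulty = difficulty

    def run_episode(self, agent_skill: float) -> bool:
        # Success is more likely when skill exceeds difficulty.
        return random.random() < 1 / (1 + 2 ** (self.difficulty - agent_skill))

def train_with_curriculum(episodes: int = 5000, target_success: float = 0.7):
    difficulty, skill = 0.0, 0.0
    for _ in range(episodes):
        success = Env(difficulty).run_episode(skill)
        if success:
            skill += 0.01  # stand-in for a gradient update driven by reward
        # Nudge difficulty so the success rate hovers near the target,
        # keeping the reward signal dense instead of vanishingly sparse.
        difficulty += 0.05 * ((1.0 if success else 0.0) - target_success)
    return difficulty, skill

print(train_with_curriculum())
```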
The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. These agents were able to learn all sorts of crazy things. They could learn to climb hand over hand in VR climbing, they could learn to open very complicated doors, like multiple switches and a lever to open the door, but they couldn't do any higher-level things. And they couldn't necessarily do those lower-level things consistently. And as a user, I do not want to interact with a pure end-to-end RL agent. As a user, I need much more control over what that agent is doing. And so that actually got us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford, who was, I think, working on SayCan at the time, which is basically an experiment where they have robots trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? You know, is the interface to the human in the loop really important, or? [00:23:43]Kanjun: Yeah, I personally think of it as much more important for us, the human users. So I think you probably could get end-to-end agents that work and are fairly general at some point in the future. But I think you don't want that. We actually want agents that we can perturb while they're trying to figure out what to do. Because, you know, even in a very simple example: internally we have a type-error-fixing agent and we have a test generation agent. The test generation agent goes off the rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug what happened. With RL end-to-end stuff, we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things, like fix the color of this button on the website. [00:24:57]Swyx: Which is now what sweep.dev is doing. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head-on into, that you've already run into. [00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it?
Like, what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models, and that's why it's so compelling. We can pile debugging tools on top of these current models, have them critique each other and critique themselves, spend more compute at inference time, use context hacks, retrieval-augmented generation, et cetera, et cetera. The pile of hacks actually does get us really far. And a way to think about it is that the underlying language model is kind of like a noisy channel. Actually, I don't want to use this analogy. It's actually a really bad analogy, but the default approach is, you're trying to get more signal out of this noisy channel. The issue with agents is, as a user, I want them to be mostly reliable. It's kind of like self-driving in that way. It's not as bad as self-driving, where you're hurtling at 70 miles an hour, that's like the hardest agent problem. But one thing we learned from Sorceress, and one thing we learned by using these things internally, is that we actually have a pretty high bar for these agents to work. It's actually really annoying if they only work 50% of the time, and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far, and we need to make the models better. We also need to make the interface to the user better, and also a lot of the critiquing. I hope what we can do is help people who are building agents actually be able to deploy them. That's the gap that we see a lot of today: for everyone who's trying to build agents, getting to the point where it's robust enough to be deployable takes an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Imbue is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case: we've built a lot of tools for ourselves internally around debugging, around abstractions or techniques applied after the model generation happens, after the language model generates the text, and interfaces for the user, and the underlying model itself, models talking to each other. Some set of those things, kind of like an operating system, will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. What we want to do is get to a point where we can start making an agent, deploy it, and it's reliable, very quickly. And there's a similar analog to software engineering. In the early days, in the sixties and seventies, to program a computer you had to go all the way down to the registers and write things, and eventually we had assembly. That was an improvement. But then we wrote programming languages with higher levels of abstraction, and that allowed a lot more people to do this, and much faster. And the software created is much less expensive. And I think it's basically a similar route here, where we're in the bare-metal phase of agent building. And we will eventually get to something with much nicer abstractions.
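One concrete example of that "pile of hacks" is a critique-and-retry wrapper around a stochastic model call. This is a hedged sketch; `call_model` is a hypothetical stand-in for whatever LLM client you use, not a specific API:

```python
# Wrap a stochastic model call in critique-and-retry until an output
# passes its own review, one of the reliability hacks described above.

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def generate_with_critique(task: str, max_attempts: int = 3) -> str:
    draft = call_model(f"Complete this task:\n{task}")
    for _ in range(max_attempts):
        verdict = call_model(
            f"Task: {task}\nDraft: {draft}\n"
            "Critique the draft. Reply PASS if acceptable, else list problems."
        )
        if verdict.strip().startswith("PASS"):
            return draft
        draft = call_model(
            f"Task: {task}\nDraft: {draft}\nProblems: {verdict}\n"
            "Rewrite the draft, fixing these problems."
        )
    return draft  # best effort; the reliability ceiling still applies
```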
[00:28:36]Alessio: We had this conversation with George Hotz, and we were like, there's not a lot of reasoning data out there. And can the models really understand? And his take was like, look, with enough compute, you're not that complicated as a human. The model can figure out eventually why certain decisions are made. What's been your experience? As you think about reasoning data, do you have to do a lot of manual work, or is there a way to prompt models to extract the reasoning from actions that they see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is, as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3,000 years ago. An example of a reasoning strategy is noticing you're confused. When I notice I'm confused, I should ask: huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is a reasoning strategy that was developed in the 1600s, you know, with the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ lots of heuristics all the time that help us be better at reasoning. And we didn't always have them. And because they're invented, we can generate data that's much more specific to them. So internally, yeah, we have a lot of thoughts on what reasoning is, and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box, or it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net-new scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun, and people are like, man, this model is crap, what are you talking about? The sun revolves around the earth. How do you see the future? If the models are actually good enough, but we don't believe them, how do we make the two live together? So you use Imbue as a scientist to do a lot of your research, and Imbue tells you, hey, I think this is a serious path you should go down. And you're like, no, that sounds impossible. How is that trust going to be built? And what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really, there are two answers to this. One element of it is, as a person, I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is, okay, how do you do that? And that's kind of what some of our debugging tools are for. They're not necessarily just for debugging, they're also for interfacing with and interacting with the model. So if I go back in this reasoning trace and change a bunch of things, what's going to happen? What does it conclude instead? That kind of helps me understand, what are its assumptions? And, you know, we think of these things as tools. And so it's really about, as a user, how do I use this tool effectively?
I need to be willing to be convinced as well. It's like, how do I use this tool effectively, and what can it help me with? And what can it tell me? [00:31:36]Swyx: There's a lot of mention of code in your process, and I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you use code just as a tool within Imbue for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities. I wonder if there's any research or findings that you have to share that talk about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is that code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable, and then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code: I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what the limitations of the agents are. The code is really helpful for the reasoning itself. But also, code is a way for models to act. By generating code, a model can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, where I see computers going is that computers will eventually become these much more malleable things, where today I, as a user, have to know how to write software code in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. What do you think about the different approaches people have, kind of like text-first, browser-first, like Multi-On? What do you think the best interface will be? Or what is your thinking today? [00:33:59]Kanjun: In a lot of ways, chat as an interface, I think, Linus Lee, you had him on, I really like how he put it: chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines, because that's what we understood these objects to be. Chat, like texting someone, is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, chat is a very primitive way of interacting with them. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things.
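A toy sketch of what inspectable, forkable agent state could look like follows; the class and its fields are illustrative assumptions, not Imbue's actual representation:

```python
from copy import deepcopy
from dataclasses import dataclass, field

# Toy representation of a forkable agent run: every step is recorded,
# and a run can be forked at any step with a modified plan.

@dataclass
class AgentRun:
    prompt: str
    plan: list[str]
    steps: list[str] = field(default_factory=list)  # full execution trace

    def record(self, step: str) -> None:
        self.steps.append(step)

    def fork(self, at_step: int, new_plan: list[str] | None = None) -> "AgentRun":
        # Keep the history up to `at_step`, then diverge with a new plan.
        return AgentRun(
            prompt=self.prompt,
            plan=deepcopy(new_plan if new_plan is not None else self.plan),
            steps=self.steps[:at_step],
        )

run = AgentRun("fix the type error in utils.py", ["locate error", "edit", "run tests"])
run.record("located error at line 42")
alt = run.fork(at_step=1, new_plan=["locate error", "add a cast", "run tests"])
```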
And internally we think about what the right representations for that are. Architecturally, what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like this stochastic generation of text, it's a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is that by building better abstractions and better tooling, we can actually make these things non-leaky. And then you can build whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned this is kind of like the Xerox PARC moment for AI. And we had a lot of stuff come out of PARC, like the what-you-see-is-what-you-get editors and MVC and all this stuff. But we didn't have the iPhone at PARC. We didn't have all these higher things. What do you think it's reasonable to expect in this era of AI, you know, call it five years or so? What are the things we'll build today, and what are the things that maybe we'll see in kind of a second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. What we're seeing right now is basically like a continuous wave. Let me zoom back a little bit. People like the Xerox PARC analogy I give, but I think there are many different analogies. The analog-to-digital computer transition is another example, another analogy for where we are today. The analog computer Vannevar Bush built in the 1930s, I think, was a system of pulleys, and it could only calculate one function. It could calculate, like, an integral. And that was so magical at the time, because you actually did need to calculate this integral a bunch, but it had a bunch of issues: in analog, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer: Turing's decidability, Shannon's insight that relay circuits can be mapped to Boolean operators, and a set of other theoretical breakthroughs, which essentially were abstractions. They were creating abstractions for these very lossy, very analog circuits, and digital had this nice property of being error-correcting. And so when I talk about less leaky abstractions, that's what I mean. That's what I'm pointing at a little bit. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about, how do we get to computers that, as a person, I can actually use well, where the interface actually helps unlock so much more power? So the sets of things we're working on, the sets of abstractions and the interfaces, hopefully help us unlock a lot more power in these systems. Hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out: an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know if it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now.
Part of why I think it's early is because the issue with agents is, it's not quite like the internet, where you could make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think that's a really interesting question. [00:38:09]Swyx: While we're talking on this agent-to-agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain-of-thoughting, but any perspectives on MetaGPT, I think is the name of the paper? I don't know if you care about it at the level of individual papers coming out, but I did read that recently, and TLDR, it beat GPT-4 on HumanEval by role-playing a software development agency: instead of having a single shot or a single role, you have multiple roles, and all of them criticize each other as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction: okay, can I just plop in this multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things, and see how they improve my agent? One issue with this kind of prompting is that it's still not very reliable. There's one lens which says, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt, and how do we deal with that messiness? And so maybe you need cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would contrast your approach with OpenAI's: OpenAI tends to just lean on, hey, we played StarCraft, or hey, we ran it on the SAT or the AP bio test, and here are the results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is, and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for, like, is the reasoning capability improving in this particular way? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the diamond pickaxe or whatever, and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineering world, the people who are new to the scene. But for people like yourselves, you built Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]Kanjun: I mean, Jim, I think, is awesome.
And I really like the Voyager paper, and I think it has a lot of really interesting ideas, like the agent being able to create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to learn the things we wanted. And it's not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, and it took about eight weeks. Maybe six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment, our own sandbox, to really try to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things, Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of the foundational, quote-unquote, RAM of our era. I think Andrej Karpathy has already made this comparison, so there's nothing new here. It's the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long-running tasks, if they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away? Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, what we'll see is we'll get relatively simplistic agents pretty soon, and they will get more and more complex. And there's a future wave in which they are able to do these really difficult, really long-running tasks. And one of the blockers to that future is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be that we need this amount of memory, which is, I don't remember exactly, like 32 kilobytes or something, to store programs. And that only really happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have really good long-running memory. And in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things. Like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my field, and a lot of that data is maybe hard to fine-tune on, or maybe hard to put into pre-training. A lot of that data, I don't have a lot of repeats of the data that I'm seeing. You know, if I'm a scientist, I've accumulated so many little data points, and ideally I'd want to store those somehow, or use those to fine-tune myself as a model somehow, or have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for user preferences and things like that. Like, what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome.
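For the user-preferences case, a bare-bones retrieval-augmented generation loop can be sketched like this; `embed` and `call_model` are hypothetical stand-ins for an embedding model and an LLM client:

```python
import numpy as np

# Minimal RAG sketch: retrieve the stored notes most similar to the
# question, then stuff them into the prompt as context.

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in an embedding model here")

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_with_memory(question: str, notes: list[str], k: int = 3) -> str:
    q = embed(question)
    top = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]
    context = "\n".join(f"- {n}" for n in top)
    return call_model(f"Relevant notes:\n{context}\n\nQuestion: {question}")
```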
[00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to Imbue is Adept. You know, a research lab with some amount of product situation on the horizon, but not just yet, right? Why should people work for Imbue over Adept? And we can cut this if it's too... Yeah. [00:44:40]Kanjun: The way I think about it is, I believe in our approach. The type of thing that we're doing is trying to build something that enables other people to build agents, something that really can be maybe an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of, why us? I think: strong focus on reasoning, which we believe is the biggest blocker; on inspectability, which we believe is really important for user experience and also for the power and capability of these systems; building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable; and then really seriously trying to use these things ourselves, every single day, and getting to something that we can actually ship to other people, something that becomes a platform. Like, it feels like it could be Mac or Windows. [00:45:49]Swyx: I love the dogfooding approach. That's extremely important. And you would be surprised how many agent companies I talk to that don't use their own agent. [00:45:59]Kanjun: Oh no, that's not good. That's a big surprise. Yeah, I think if we didn't use our own agents, then we would have all of these mistaken beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up, based on the answer you just gave: do you see yourself releasing models? What are the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? A lot of people, just as a byproduct of their work, just to say, hey, I'm still shipping, release a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here are the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things just to show people, oh, here's what we're doing.
I think, as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so we may release models, or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing-evals-and-benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. So we build a lot of our own evals, and every agent builder tells me, basically, evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me, because I would love to figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there, like: here are the high-level questions, the goals of humanity, that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work, that maybe you're learning more about. So one is: why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you rather go explore things or focus on your career? How do you think about that from an agent perspective? Should it just stick to the task and try to solve it within the guardrails as much as possible? Or should it look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents, actually, is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and say, okay, here, fork this and try a bunch of different things. Some of those agents can be goal-oriented and some of them can be more curiosity-driven. You can prompt them in slightly different ways. And something I'm really curious about is what would happen if, in the future, we were able to actually go down both paths. As a person, why I have this question on my website is that I really find that I can only take one mode at a time, and I don't understand why. Is it inherent in the kind of context that needs to be held? That's why I think, from an agent perspective, forking is really interesting. I can't fork myself to do both, but I maybe could fork an agent at a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think about: should I raise venture capital? How should I get money?
How do you feel your options to be curious versus goal-oriented have changed as you've raised more money and the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A, $20 million, in late 2021. And our entire philosophy at that time was, and still kind of is: how do we figure out the stepping stones, collect stepping stones, that eventually let us build agents, kind of these new computers that help us do bigger things? And there was a lot of curiosity in that, and there was a lot of goal orientation in that. The curiosity led us to build CARBS, for example, this hyperparameter optimizer. [00:50:28]Swyx: Great name, by the way. [00:50:29]Kanjun: Thank you. [00:50:30]Swyx: Is there a story behind that name? [00:50:31]Kanjun: Abe loves carbs. It's also cost-aware. So as soon as he came up with cost-aware, he was like, I need to figure out how to make this work. But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models, and those experiment results carry to larger ones. [00:50:56]Swyx: And you also published scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest, and the one from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity, and then there's some goal-oriented parts. Like Avalon, that was a six-to-eight-week sprint for all of us, and we got this thing out. And now different projects do more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was: how can we enable artificial agents to permanently learn new abstractions and processes? I think this might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave: as a scientist, I've permanently learned a lot of new things, and I've updated and created new abstractions and learned them pretty reliably. And you were talking about, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be: as models get bigger, they fine-tune faster, so they're more sample-efficient as they get bigger.

The Gradient Podcast
Terry Winograd: AI, HCI, Language, and Cognition

The Gradient Podcast

Play Episode Listen Later Aug 24, 2023 93:21


In episode 87 of The Gradient Podcast, Daniel Bashir speaks to Professor Terry Winograd. Professor Winograd is Professor Emeritus of Computer Science at Stanford University. His research focuses on human-computer interaction design and the design of technologies for development. He founded the Stanford Human-Computer Interaction Group, where he directed the teaching programs and HCI research. He is also a founding faculty member of the Stanford d.school and a founding member and past president of Computer Professionals for Social Responsibility.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (03:00) Professor Winograd's background
* (05:10) At the MIT AI Lab
* (05:45) The atmosphere in the MIT AI Lab, Minsky/Chomsky debates
* (06:20) Blue-sky research, government funding for academic research
* (10:10) Isolation and collaboration between research groups
* (11:45) Phases in the development of ideas and how cross-disciplinary work fits in
* (12:26) SHRDLU and the MIT AI Lab's intellectual roots
* (17:20) Early responses to SHRDLU: Minsky, Dreyfus, others
* (20:55) How Prof. Winograd's thinking about AI's abilities and limitations evolved
* (22:25) How this relates to current AI systems and discussions of intelligence
* (23:47) Repetitive debates in AI, semantics and grounding
* (27:00) The concept of investment, care, trust in human communication vs machine communication
* (28:53) Projecting human-ness onto AI systems and non-human things and what this means for society
* (31:30) Time after leaving MIT in 1973, time at Xerox PARC, how Winograd's thinking evolved during this time
* (38:28) What Does It Mean to Understand Language? Speech acts, commitments, and the grounding of language
* (42:40) Reification of representations in science and ML
* (46:15) LLMs, their training processes, and their behavior
* (49:40) How do we coexist with systems that we don't understand?
* (51:20) Progress narratives in AI and human agency
* (53:30) Transitioning to intelligence augmentation, founding the Stanford HCI group and d.school, advising Larry Page and Sergey Brin
* (1:01:25) Chatbots and how we consume information
* (1:06:52) Evolutions in journalism, progress in trust for modern AI systems
* (1:09:18) Shifts in the social contract, from institutions to personalities
* (1:12:05) AI and HCI in recent years
* (1:17:05) Philosophy of design and the d.school
* (1:21:20) Designing AI systems for people
* (1:25:10) Prof. Winograd's perspective on watermarking for detecting GPT outputs
* (1:25:55) The politics of being a technologist
* (1:30:10) Echoes of the past in AI regulation and competition, and learning from history
* (1:32:34) Outro

Links:
* Professor Winograd's Homepage
* Papers/topics discussed:
  * SHRDLU
  * Beyond Programming Languages
  * What Does It Mean to Understand Language?
  * The PageRank Citation Ranking
  * Stanford Digital Libraries project
  * Talk: My Politics as a Technologist

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Ideas Sleep Furiously
The Genius of Galton | Gavan Tredoux

Ideas Sleep Furiously

Play Episode Listen Later Jun 10, 2023 70:27


Gavan Tredoux is a Fellow of the Royal Anthropological Institute, and a mathematician and statistician by training, with extensive interests in the History of Science (galton.org, burtoniana.org). He was a Senior Scientist in Research and Development at Xerox PARC, and currently works in the Data Science field. Follow him on Twitter here. Without doubt, this is one of our most educational and fascinating podcasts so far. We hope you enjoy.

Bootstrapping Your Dreams Show
#325 Know How To Make Impossible, Possible by Founding Member & Global Consultant at Silicon Valley Alliances

Bootstrapping Your Dreams Show

Play Episode Listen Later May 3, 2023 36:33


Kimberly Wiefling has extensive experience in the R&D and manufacturing of complex systems of hardware, software, firmware, chemistry, and physics. She's held leadership positions ranging from NPI Program Manager to VP of Program Management & Organizational Effectiveness for a Xerox PARC spinoff, and has helped companies from startups to the Global 1000 successfully grow and, more importantly, SCALE. A physicist by education, she has found that many of these are "human" problems that don't require "rocket science". All that's required to achieve a significant positive impact in many cases is to make common sense into common PRACTICE.

FROM STARTUPS to GLOBAL 1000: Kimberly's a serial entrepreneur, the founder of Wiefling Consulting, and co-founder of Silicon Valley Alliances. She's been involved in launching ~a dozen startups and was the VP of Program Management & Organizational Effectiveness for one that was acquired by Google in 2001. She invested in and coached the founder of EmbeddedWorks, which bootstrapped to 7-figure annual revenues.

MAKING A POSITIVE DIFFERENCE: Kimberly is passionate about making a meaningful positive difference by working with organizations committed to solving the problems of Our World - profitably, so this good work continues!

Support the show

Follow me on Facebook ⬇️
https://www.facebook.com/manuj.aggarwal
❤️ ID - Manuj Aggarwal
■ LinkedIn: https://www.linkedin.com/in/manujaggarwal/
■ Facebook: https://www.facebook.com/realmanuj
■ Instagram: ...

Les Technos
Bonus 399: Amazon Bedrock, Xerox PARC, Hugging Face

Les Technos

Play Episode Listen Later May 1, 2023 15:24


In our bonus episode 399, with Sébastien S. and Benoit:
• PARC: Xerox gives PARC to SRI (source)
• Bedrock: Amazon gets smarter (source)
• Chat: Hugging Face launches its chat AI (source)
• Architecture: ZHA draws inspiration from AI (source)
• Security: No wrongful dismissal if security flaws aren't fixed quickly enough (source)

The History of Computing
Adobe: From Pueblos to Fonts and Graphics to Marketing

The History of Computing

Play Episode Listen Later Apr 16, 2023 22:02


The Mogollon culture was an indigenous culture in the Western United States and Mexico that ranged from New Mexico and Arizona to Sonora, Mexico and out to Texas. They flourished from around 200 CE until the Spanish showed up and claimed their lands. The cultures that pre-existed them date back thousands more years, although archaeology has yet to pinpoint exactly how those evolved. Like many early cultures, they farmed and foraged. As they farmed more, their homes became more permanent, and around 800 CE they began to create more durable homes that helped protect them from wild swings in the climate. We call those homes adobes today, and the people who lived in those pueblos and irrigated the land, often moving higher into the mountains, we call the Puebloans - or Pueblo Peoples. Adobe homes are similar to those found in ancient cultures in what we call Turkey today. It's an independent evolution. Adobe Creek was once called Arroyo de las Yeguas by the monks from Mission Santa Clara, and then renamed to San Antonio Creek by a soldier, Juan Prado Mesa, when the land around it was given to him by the governor of Alta California at the time, Juan Bautista Alvarado. That's the same Alvarado as the street, if you live in the area. The creek runs for over 14 miles north from Black Mountain and through Palo Alto, California. The ranchers built their adobes close to the creeks. American settlers led the Bear Flag Revolt in 1846 and took over the garrison of Sonoma, establishing the California Republic - which covered much of the lands of the Puebloans. There were only 33 of them at first, but after John Fremont (yes, he after whom that street is named as well) encouraged the Americans, they raised an army of over 100 men, and Fremont helped them march on Sutter's fort, now with the flag of the United States, thanks to Joseph Revere of the US Navy (yes, another street in San Francisco bears his name). James Polk had pushed to expand the United States. Manifest Destiny. Remember the Alamo. Etc. The fort at Monterey fell, and the army marched south. Admiral Sloat got involved. They named a street after him. General Castro surrendered - he got a district named after him. Commodore Stockton announced the US had taken all of California soon after that. Manifest destiny was nearly complete. He's now basically the patron saint of a city, even if few there know who he was. The forts along the El Camino Real that linked the 21 Spanish missions, a 600-mile road once walked by their proverbial father, Junípero Serra, following the Portolá expedition of 1769, fell. Stockton took each, moving into Los Angeles, then San Diego. Practically all of Alta California fell with few shots. This was nothing like the battles for the independence of Texas, like when Santa Anna reclaimed the Alamo Mission. Meanwhile, the waters of Adobe Creek continued to flow. The creek was renamed in the 1850s after Mesa built an adobe on the site. Adobe Creek it was. Over the next 100 years, the area evolved into a paradise with groves of trees and then groves of technology companies. The story of one begins a little beyond the borders of California. Utah was initially explored by Francisco Vázquez de Coronado in 1540 and settled by Europeans in search of furs, and others who colonized the desert, including those who established the Church of Jesus Christ of Latter-day Saints, or the Mormons, who settled there in 1847, just after the Bear Flag Revolt.
The United States officially settled for the territory in 1848, and Utah became a territory. After a number of map changes where the territory got smaller, it was finally made a state in 1896. The University of Utah had been founded all the way back in 1850, though - and re-established in the 1860s. 100 years later, the University of Utah was a hotbed of engineers who pioneered a number of graphical advancements in computing. John Warnock went to grad school there and then went on to co-found Adobe and help bring us PostScript. Historically, a PS, or postscript, was a message placed at the end of a letter, following the signature of the author. The PostScript language was a language to describe a page of text computationally. It was created at Adobe by Warnock, Doug Brotz, Charles Geschke, Bill Paxton (who worked on the Mother of All Demos with Doug Engelbart during the development of the oN-Line System, or NLS, in the late 1960s, and then at Xerox PARC), and Ed Taft. Warnock invented the Warnock algorithm while working on his PhD and went to work at Evans & Sutherland with Ivan Sutherland, who effectively created the field of computer graphics. Geschke got his PhD at Carnegie Mellon in the early 1970s and then went off to Xerox PARC. They worked with Paxton at PARC, and before long these PhDs and mathematicians had worked out the algorithms and then the languages to display images on computers while working on Interpress graphics at Xerox. Geschke left Xerox and started Adobe, Warnock joined him, and they took the ideas behind Interpress to market as PostScript, which became a foundation for the Apple LaserWriter to print graphics. Not only that, PostScript could be used to define typefaces programmatically, and later to display any old image. Those technologies became the foundation for the desktop publishing industry. Apple released the Mac in 1984, other vendors brought in PostScript to describe graphics in their proprietary fashion, and Adobe released PostScript Level 2 by 1991 and then PostScript 3 in 1997. Other vendors made their own or furthered standards in their own ways, and Adobe could have faded off into the history books of computing. But Adobe didn't create one product, they created an industry, and the company they created to support that young industry created more products in that mission. Steve Jobs tried to buy Adobe before that first Mac was released, for $5,000,000. But Warnock and Geschke had a vision for an industry in mind. They had a lot of ideas, but development was fairly capital intensive, as were go-to-market strategies. So they went public on the NASDAQ in 1986. They expanded their PostScript distribution and sold it to companies like Texas Instruments for their laser printer, and to other companies who made IBM-compatible computers. They got up to $16 million in sales that year. Warnock's wife was a graphic designer. This is where we see a diversity of ideas help us think about more than math. He saw how she worked and could see a world where Ivan Sutherland's Sketchpad could be much more, given how far CPUs had come since the TX-0 days at MIT. So Adobe built and released Illustrator in 1987. By 1988 they broke even on sales, and it raked in $19 million in revenue. Sales were strong in the universities, but PostScript was still the hot product, selling to printer companies, typesetters, and other places where Adobe signed license agreements. At this point, we see the math - Cartesian coordinates drawn by geometric algorithms - put pixels where they should be. But while this was far more efficient than just drawing dot after dot for larger images, drawing a dot at a pixel location was still the easier technology to understand.
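To make "describing a page computationally" concrete, here is a small, hedged illustration: a Python script that emits a minimal PostScript program of the kind a LaserWriter-era interpreter could execute. The file name is arbitrary, and the snippet is purely illustrative, not Adobe's code:

```python
# Instead of sending pixels, a program sends PostScript operators that
# the printer's interpreter executes. This writes a tiny page description.

postscript = """%!PS
/Helvetica findfont 24 scalefont setfont  % pick a typeface, scale it to 24 pt
72 720 moveto                             % 1 inch from the left, 10 from the bottom
(Hello from PostScript) show              % paint the text at the current point
showpage                                  % emit the finished page
"""

with open("hello.ps", "w") as f:
    f.write(postscript)
```

Because the page is described as geometry and operators rather than as a grid of dots, the same file prints crisply at any resolution the interpreter supports.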
They created Adobe Streamline in 1989 and Collector's Edition to create patterns. They listened to graphic designers and built what they heard humans wanted.

Photoshop
Nearly every graphic designer raves about Adobe Photoshop. That's because Photoshop is the best-selling graphics editing tool, one that has matured far beyond most other traditional solutions and now has thousands of features that allow users to manipulate images in practically any way they want. Adobe Illustrator was created in 1987 and quickly became the de facto standard in vector-based graphics. Photoshop began life in 1987 as well, when Thomas and John Knoll wanted to build a simpler tool to create graphics on a computer. Rather than vector graphics, they created a raster graphics editor. They made a deal with Barneyscan, a well-known scanner company, that managed to distribute over two hundred copies of Photoshop with their scanners, and Photoshop became a hit, as it was the first editing software people heard about. Vector images are typically generated with Cartesian coordinates based on geometric formulas and so scale up more easily. Raster images are composed of a grid of dots, or pixels, and can be more realistic. Great products are rewarded with competition. CorelDRAW was created in 1989, when Michel Bouillon and Pat Beirne built a tool to create vector illustrations. Sales got slim after other competitors entered the market, and the Knoll brothers got in touch with Adobe and licensed the product through them. The software was then launched as Adobe Photoshop 1 in 1990. They released Photoshop 2 in 1991. By now they had support for paths, and, given that Adobe also made Illustrator, EPS and CMYK rasterization, still features in Photoshop. They launched Adobe Photoshop 2.5 in 1993, the first version that could be installed on Windows. This version came with a toolbar for filters and 16-bit channel support. Photoshop 3 came in 1994, and Thomas Knoll created what was probably one of the most important features ever added, and one that's become a standard in graphical applications since: layers. Now a designer could create a few layers that each had their own elements, and hide layers or make layers more transparent. These could separate the subject from the background and led to entirely new capabilities, like an almost faux three-dimensional appearance of graphics.
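The compositing idea behind layers can be sketched in a few lines. This is a simplified alpha-over model for illustration, not Photoshop's actual blending engine:

```python
import numpy as np

# Each layer is an RGBA image; the visible picture is the bottom-up
# alpha composite of the stack.

def composite(layers: list[np.ndarray]) -> np.ndarray:
    """Alpha-over compositing, bottom layer first. Arrays are HxWx4 in [0, 1]."""
    out = np.zeros_like(layers[0])
    for layer in layers:
        a = layer[..., 3:4]  # this layer's opacity, per pixel
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1 - a)
    return out

background = np.ones((4, 4, 4)); background[..., :3] = 0.9       # light gray, opaque
subject = np.zeros((4, 4, 4)); subject[1:3, 1:3] = [1, 0, 0, 0.8]  # translucent red square
print(composite([background, subject])[2, 2])  # red shows through, blended with gray
```

Hiding a layer is just leaving it out of the stack, and lowering its opacity is scaling its alpha channel, which is why the feature unlocked so much once it existed.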
Photoshop 5 was also a big jump in complexity. Layers were easy enough to understand, but Photoshop was meant to be a subset of Illustrator features and had become far more than that. So in 2001 they released Photoshop Elements. By now they had a large portfolio of products, and Elements was meant to appeal to the original customer base - the ones who were beginners and maybe not professional designers. By now, some people spent 40 or more hours a week in tools like Photoshop and Illustrator.

Adobe Today

Adobe had released PostScript, Illustrator, and Photoshop. But they have one of the most substantial portfolios of products of any company. They also released Premiere in 1991 to get into video editing. They acquired Aldus Corporation to get into more publishing workflows with PageMaker, and used that acquisition to get into motion graphics with After Effects. They acquired dozens of companies and released their products as well. Adobe also released the PDF format, to describe full pages of information (or files that spread across multiple pages), in 1993, along with Adobe Acrobat to read and create those files. Acrobat became the de facto standard for page distribution, so people didn't have to download fonts to render pages properly. They dabbled in audio editing when they acquired Cool Edit Pro from Syntrillium Software, and so now sell Adobe Audition. Adobe's biggest acquisition was Macromedia in 2005. Here, they added a dozen new products to the portfolio, including Flash, Fireworks, the WYSIWYG web editor Dreamweaver, ColdFusion, Flex, and Breeze, which is now called Adobe Connect. By now, they'd also created what we call Creative Suite: packages of applications that could be used for given tasks. Creative Suite - and later Creative Cloud - also signaled a transition into a software-as-a-service, or SaaS, mindset. Now customers could pay a monthly fee for a user license rather than buy large software packages each time a new version was released. Adobe had always been a company that made products to create graphics. They expanded into online marketing and web analytics when they bought Omniture in 2009 for $1.8 billion; those products have since been folded into the naming convention used for the rest, as Adobe Marketing Cloud. Flash fell by the wayside, and so the next wave of acquisitions was for more mobile-oriented products. This began with Day Software and then Nitobi in 2011. And they furthered their Marketing Cloud support by acquiring one of the larger competitors, Marketo, in 2018, and then Workfront in 2020. Given how many people started working from home, they also extended their offerings into pure-cloud video tooling with an acquisition of Frame.io in 2021. And here we see a company started by a bunch of true computer scientists from academia in the early days of the personal computer that has become far more. They could have been rolled into Apple, but had a vision of a creative suite of products that could be used to make the world a prettier place. Creative Suite, then Creative Cloud, shows a move of the same tools into a more online delivery model. Other companies come along to do similar tasks, like the infinite digital whiteboard Miro - so they have to innovate to stay marketable. They have to continue to increase sales, so they expand into other markets, like the most adjacent, Marketing Cloud. At 22,500+ employees and with well over $12 billion in revenues, they have a lot of families dependent on maintaining that growth rate. And so the company becomes more than the culmination of their software. 
They become more than graphic design, web design, video editing, animation, and visual effects. Because in software, if revenues don't grow at a rate greater than 10 percent per year, the company simply isn't outgrowing the size of the market and likely won't be able to justify a stock price at the inflated price-to-earnings ratio that signals explosive growth. And yet once a company saturates sales in a given market, they have shareholders to justify their existence to. Adobe has survived many an economic downturn and boom time with smart, measured growth, and is likely to continue doing so for a long time to come.

The History of Computing
The Evolution of Fonts on Computers

The History of Computing

Play Episode Listen Later Apr 10, 2023 20:04


Gutenberg shipped the first working printing press around 1450, and typeface was born. Before then, most books were handwritten, often in blackletter calligraphy. And they were expensive. The next few decades saw Nicolas Jenson develop the Roman typeface, and Aldus Manutius and Francesco Griffo create the first italic typeface. This represented a period where people were experimenting with making type that would save space. The 1700s saw the start of a focus on readability. William Caslon created the Old Style typeface in 1734. John Baskerville developed Transitional typefaces in 1757. And Firmin Didot and Giambattista Bodoni created two typefaces that would become the Modern family of serifs. Then slab serif, which we now call Antique, came in 1815, ushering in an era of experimenting with using type for larger formats, suitable for advertisements in various printed materials. These were necessary as more presses were printing more books, and made possible by new levels of precision in metal casting. People started experimenting with various forms of typewriters in the mid-1860s, and by the 1920s we got Frederic Goudy, the first real full-time type designer. Before him, it was part of a job. After him, it was a job. And we still use some of the typefaces he crafted, like Copperplate Gothic. And we saw an explosion of new fonts, like Times New Roman in 1931. At the time, most typewriters used typefaces on the end of a metal shaft. Hit a key, and the shaft hammers onto an inked ribbon and leaves a letter on the page. Kerning, or the space between characters, and letter placement were often there to reduce the chance that those metal hammers jammed. And replacing a font would have meant replacing tons of precision parts. Then came the IBM Selectric typewriter in 1961. Here we saw precision parts that put all those letters on a ball. Hit a key, the ball rotates and presses the ink onto the paper. And the ball could be replaced. A single document could now have multiple fonts without a ton of work. Xerox exploded around the same time with the Xerox 914, one of the most successful products of all time. Now we could type amazing documents with multiple fonts in the same document quickly - and photocopy them. And some of the numbers on those fancy documents were being spat out by those fancy computers, with their tubes. But as computers became transistorized heading into the 60s, it was only a matter of time before we put fonts on computer screens. Here, we initially used bitmaps to render letters onto a screen. By bitmap we mean that an array of bits maps onto the pixels of the screen, each bit saying whether and where a dot should be displayed. We used to call these raster fonts, but the drawback was that to make characters bigger, we needed a whole new map of bits. To go to a bigger screen, we probably needed a whole new map of bits. As people thought about things like bold, underline, and italics - guess what - also a new file. But through the 50s, transistor counts weren't nearly high enough to do something different than bitmaps; they rendered very quickly, and, you know, displays weren't very high quality, so who could tell the difference anyway? Whirlwind was the first computer to project real-time graphics on the screen, and its characters were simple blocky letters. But as the resolution of screens and the speed of interactivity increased, so did what was possible when drawing glyphs on screens.  
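A minimal sketch of that bitmap idea, with a hand-drawn 8 x 8 'A' invented for illustration (it is not taken from any real font):

```python
# A bitmap glyph is just a fixed grid of bits, one bit per pixel.
GLYPH_A = [
    0b00011000,
    0b00100100,
    0b01000010,
    0b01111110,
    0b01000010,
    0b01000010,
    0b01000010,
    0b00000000,
]

def render(glyph):
    """Print the glyph, '#' for a lit pixel and '.' for an unlit one."""
    for row in glyph:
        print("".join("#" if row & (1 << (7 - col)) else "." for col in range(8)))

render(GLYPH_A)
```

Nothing in the data says what the shape is, only which dots to light - which is why a bigger size, a bolder weight, or an italic slant each meant a whole new map of bits.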
Rudolf Hell was a German engineer experimenting with using cathode ray tubes to project an image onto photosensitive paper, and thus print from a CRT. He designed a simple font called Digital Grotesk in 1968. It looked good on the CRT and on the paper, and that font, loosely based on Neuzeit Book, would go on to be used to digitize typesetting. We quickly realized bitmaps weren't efficient for drawing fonts to the screen, and by 1974 moved to outline, or vector, fonts. Here, a Bézier curve was drawn onto the screen using an algorithm that created the character, or glyph, as an outline and then filled in the space inside. These took up less memory and so drew on the screen faster. They could be defined in an operating system, and were used not only to draw characters but also by some game designers to draw entire screens of information, by defining a character as a block and so taking up less memory for graphics. These were scalable, and by 1979 another German, Peter Karow, used spline algorithms to write Ikarus, software that allowed a person to draw a shape on a screen and rasterize it. Now we could graphically create fonts that were scalable. In the meantime, the team at Xerox PARC had been experimenting with different ways to send pages of content to the first laser printers. Bob Sproull and Bill Newman created the Press format for the Star. But this wasn't incredibly flexible like what Karow would create. John Gaffney, who was working with Ivan Sutherland at Evans & Sutherland, had been working with John Warnock on an interpreter that could pull information from a database of graphics. When he went to Xerox, he teamed up with Martin Newell to create J&M, which harnessed the latest chips to process graphics and character type for printers. As it progressed, they renamed it Interpress. Chuck Geschke started the Imaging Sciences Laboratory at Xerox PARC and eventually left Xerox with Warnock to start a company called Adobe in Warnock's garage, named after a creek behind his house. Bill Paxton had worked on "The Mother of All Demos" with Doug Engelbart at SRI while getting his PhD at Stanford, and he then moved to Xerox PARC. There he worked on bitmap displays, laser printers, and GUIs - and so he joined Adobe as a co-founder in 1983 and worked on the font algorithms, helping ship a page description language along with Chuck Geschke, Doug Brotz, and Ed Taft. Steve Jobs tried to buy Adobe in 1982 for $5 million. But instead they sold him just shy of 20% of the company and gave Apple a five-year license for PostScript. This allowed them to focus on making the PostScript language more extensible and on creating the Type 1 fonts. These had two parts: one was a set of bitmaps, and the other was a font file that could be used to send the font to a device. We see this time and time again: the simpler an interface gets, and the more down-market the science gets, the faster we see innovative industries come out of the work done. 
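A minimal sketch of what those outline fonts encode: a glyph edge stored as Bézier control points and evaluated at render time. The control points here are invented, and de Casteljau's algorithm stands in for whatever a production rasterizer actually uses:

```python
# One glyph edge as a cubic Bézier, evaluated with de Casteljau's algorithm.
def lerp(a, b, t):
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on the curve at parameter t in [0, 1]."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    return lerp(lerp(a, b, t), lerp(b, c, t), t)

edge = [(0, 0), (30, 120), (90, 120), (120, 0)]  # hypothetical font units
for i in range(5):
    t = i / 4
    x, y = cubic_bezier(*edge, t)
    print(f"t={t:.2f} -> ({x:.1f}, {y:.1f})")
```

To draw the glyph at twice the size, you scale four control points instead of redrawing hundreds of bits - the economy that outline formats like Type 1 exploited.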
There were lots of fonts by now. The original 1984 Mac saw Susan Kare work with Jobs and others to ship a bunch of fonts named after cities, like Chicago and San Francisco. She would design the fonts on paper and then conjure up the hexadecimal notation for the graphics, manually typing it in for each letter of each font. Previously, custom fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go their own route. Kare painstakingly created new fonts and gave them the names of towns along train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities. And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born, with her personally developing Geneva, Chicago, and Cairo. And she did it in 9 x 7. I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and changing between the fonts, marveling at the typefaces. I'd certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italic, outline, and bold. Those were all her. And she inspired a whole generation of innovation. Here we see a clean line from Ivan Sutherland and the pioneering work done at MIT, to the University of Utah, to Stanford through the oNLine System (or NLS), to Xerox PARC, and then to Apple. But then came the rise of Windows and other graphical operating systems. As Apple's five-year license for PostScript came and went, they started developing their own font standard as a competitor to Adobe, which they called TrueType. Here we saw Times Roman, Courier, and symbols that could replace the PostScript fonts, along with updates to Geneva, Monaco, and others. Apple may not have gotten along with Microsoft, but they licensed TrueType to them nonetheless to make sure it was more widely adopted. And in exchange, they got a license for TrueImage, a page description language that was compatible with PostScript. Given how high-resolution screens had gotten, it was time for the birth of anti-aliasing. Here we could clean up the blocky "jaggies," as the gamers call them. Vertical and horizontal lines looked fine even in the 8-bit era, but diagonals and curves distorted at higher resolutions, and so spatial anti-aliasing, and then post-processing anti-aliasing, were born. By the 90s, Adobe was looking for the answer to TrueImage. So 1993 brought us PDF, now an international standard as ISO 32000-1:2008. Acrobat Reader and other tools were good to Adobe for many years, along with Illustrator, then Photoshop, and then the other products in the Adobe portfolio. By this time, even though Steve Jobs was gone, Apple was hard at work on new font technology that resulted in Apple Advanced Typography, or AAT. AAT gave us ligature control, better kerning, and the ability to write characters on different axes. But negotiations between Apple and Microsoft to license AAT broke down. They were bitter competitors, and Windows 95 wasn't even out yet. So Microsoft started work on OpenType, their own standardized font format, in 1994, and Adobe joined the project to ship the next generation in 1997. That would evolve into an open standard by the mid-2000s - and once an open standard, often the de facto standard, as opposed to those that need to be licensed. By then the web had become a thing. Early browsers, and the wars between them to increment features, meant developers had to build and test on potentially four or five different computers and often be frustrated by the results. So the W3C began standardizing how a lot of elements worked, in Extensible Markup Language, or XML: images, layouts, colors, even fonts. SVGs are XML-based vector images; in other words, the browser interprets a language that describes how to draw the image. 
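Whether the outlines come from TrueType or SVG, they still have to land on a grid of pixels, which is where the anti-aliasing mentioned above comes in. A minimal sketch of the spatial approach, using simple supersampling: shade each pixel by the fraction of it a shape covers, so a hard diagonal edge becomes a short ramp of grays instead of a staircase. The shape and sample count are invented for illustration:

```python
# Estimate per-pixel coverage by testing a grid of sub-pixel samples.
def inside(x, y):
    return y < x  # hypothetical glyph edge: everything below the diagonal

def coverage(px, py, n=4):
    """Fraction of n*n sub-pixel samples inside the shape = gray level."""
    hits = sum(
        inside(px + (i + 0.5) / n, py + (j + 0.5) / n)
        for i in range(n)
        for j in range(n)
    )
    return hits / (n * n)

for py in range(4):
    print(" ".join(f"{coverage(px, py):.2f}" for px in range(4)))
```

Pixels squarely inside or outside the shape come out 1.00 or 0.00; pixels straddling the edge get intermediate grays, which is what softens the jaggies.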
Fonts got their own web format as well. The Web Open Font Format, or WOFF 1, was published in 2009 with contributions from Dutch educator Erik van Blokland, Jonathan Kew, and Tal Leming. This built on the CSS font styling rules that had shipped in Internet Explorer 4, and it would slowly be added to every browser shipped, including Firefox since 3.6, Chrome since 6.0, Internet Explorer since 9, and Apple's Safari since 5.1. Then WOFF 2 added Brotli compression to get sizes down and render faster. WOFF has been part of the W3C open web standard since 2011. Out of Apple's TrueType came TrueType GX, which added variable fonts. Here, a single font file could contain a number, or range, of variants of the initial font, so a whole family of fonts could live in a single file (see the sketch below). OpenType added variable fonts in 2016, with Apple, Microsoft, and Google all announcing support. And of course the company that had been there since the beginning, Adobe, jumped on board as well. Fewer font files, faster page loads. So here we've looked at the progression of fonts from the printing press, becoming more efficient to conserve paper, through the advent of the electric typewriter, to the early bitmap fonts for screens, to the vectorization led by Adobe into the Mac and then Windows. We also see rethinking the font entirely so multiple scripts, character sets, and axes can be represented and rendered efficiently. I am now converting all my user names into pig Latin for maximum security. Luckily, those are character sets that are pretty widely supported. And OpenType-SVG will allow me to add spiffy color to my pig Latin glyphs. It makes us wonder what's next for fonts. Maybe being able to design our own or, more to the point, customize those developed by others to make them our own. We didn't touch on emoji yet. But we'll just have to save the evolution of character sets and emoji for another day. In the meantime, let's think on the fact that fonts are such a big deal because Steve Jobs took a calligraphy class from a Trappist monk named Robert Palladino while enrolled at Reed College. Today we can painstakingly choose just the right font with just the right meaning because Palladino left the monastic life to marry and have a son. He taught Jobs about serif and sans serif, kerning, and the art of typography. That style and attention to detail was one aspect of the original Mac that taught the world that computers could have style and grace as well. It's not hard to imagine if entire computers still supported only one font, or even one font per document. Palladino never owned or used a computer, though. His influence can be felt through the influence his pupil Jobs had. And it's actually amazing how many people who had such dramatic impacts on computing never really used one, because so many smaller evolutions came after them. What evolutions do we see on the horizon today? And how many who put a snippet of code on a service like GitHub may never know the impact they have on so many?
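For the variable fonts mentioned above, a minimal sketch of the underlying idea: interpolate corresponding outline points between a light and a bold master along a weight axis. The stem coordinates and axis range are invented, and real formats store per-axis deltas rather than whole masters, but the arithmetic is in this spirit:

```python
# Derive an intermediate weight by blending two master outlines.
def instance(light, bold, weight, lo=100, hi=900):
    """Blend corresponding outline points for a weight between the masters."""
    t = (weight - lo) / (hi - lo)
    return [(lx + (bx - lx) * t, ly + (by - ly) * t)
            for (lx, ly), (bx, by) in zip(light, bold)]

light_stem = [(40, 0), (60, 0), (60, 700), (40, 700)]    # thin vertical stem
bold_stem  = [(20, 0), (100, 0), (100, 700), (20, 700)]  # heavy vertical stem

print(instance(light_stem, bold_stem, weight=500))       # a medium in between
```

One file, any weight in between - fewer font files and faster page loads, as noted above.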

Software Developer's Journey
#242 Dean Tribble between innovation and reinventing the wheel

Software Developer's Journey

Play Episode Listen Later Feb 28, 2023 50:13


Dean placed the start of his journey in 6th grade, reading science fiction and dreaming of building robots. From there on, in the late 70s, the virus never left him. The rest of the interview was a history lesson in software engineering. Dean spoke of the first company he created in high school, funded by Atari founder Nolan Bushnell. He talked about his time working for Xerox PARC and then on Project Xanadu, i.e., hypertext. Beyond this history of Silicon Valley, we spoke about "old" scientific papers and how they apply today; about workflows, language design, smart contracts, blockchains, web3, etc., and how they all have the same elementary building blocks: sync/async and blocking/non-blocking. What a ride!
Here are the links from the show:
https://www.twitter.com/DeanTribble
https://www.twitter.com/agoric
https://www.linkedin.com/in/deantribble/
https://agoric.com/blog/
https://github.com/agoric
https://github.com/endojs
Credits: Cover Legends by HoliznaCC0 is licensed CC0 1.0 Universal License.
Your host is Timothée (Tim) Bourguignon, more about him at timbourguignon.fr.
Gift the podcast a rating on one of the significant platforms: https://devjourney.info/subscribe
Support the show

Danielle Newnham Podcast
Alvy Ray Smith: Founder of Pixar (REPLAY)

Danielle Newnham Podcast

Play Episode Listen Later Feb 2, 2023 60:50


Dr Alvy Ray Smith is the co-founder of Pixar, a computer scientist, and a pioneer in the field of computer graphics. After starting his career in academia, Alvy had an epiphany following a serious skiing accident. He decided to move to California to combine his two passions - art and computers - in a place where he felt something good was about to happen. Alvy was always a pioneer. From creating his first computer graphic in 1965, to becoming an original member of the Computer Graphics Lab at the New York Institute of Technology, to witnessing the birth of the personal computer at Xerox PARC, he went on to become the first director of computer graphics at George Lucas's Lucasfilm. It was there that Alvy gathered some of the smartest people he knew to develop computer graphics software, including early renderer technology. He and colleague Ed Catmull then spun out to co-found the famous Pixar, soon followed by the hiring of Lucasfilm colleague John Lasseter, with Steve Jobs as an investor. It was at Pixar that Toy Story would be made - the very first entirely computer-animated feature film. In 2006, Pixar was sold to Disney for $7.4 billion. Alvy also co-founded Altamira Software and has created a number of computer art pieces, including the famous Sunstone with Ed Emshwiller, which featured in the Museum of Modern Art in New York. Alvy was also the first Graphics Fellow at Microsoft. In this interview, Alvy recounts his career from the early days at Xerox PARC to how Pixar got started. We discuss the Pixar journey in detail, as well as his latest book - A Biography of the Pixel (here) - including how innovation is born from three strands: an idea, chaos, and a tyrant. And how Steve Jobs was both the saviour and the tyrant in the incredible Pixar story. Alvy has combined his two passions - art and computer science - to spend his career showing the world what computers can do. A true pioneer, this is one of my favourite conversations. I hope you enjoy it too.
-----
Let us know what you think of this episode and please rate, review and share - it means the world to me and helps others to find it too.
Danielle on Twitter @daniellenewnham and Instagram @daniellenewnham
Alvy Ray Smith on Twitter @alvyray / website
Buy Alvy Ray Smith's book A Biography of the Pixel here.
-----
This episode was hosted by me - Danielle Newnham, a recovering founder, author and writer who has been interviewing tech founders and innovators for ten years - and produced by Jolin Cheng. Image of Alvy Ray by Christopher Michel.

Augmented - the industry 4.0 podcast
Episode 104: A Scandinavian Perspective on Industrial Operator Independence with Johan Stahre

Augmented - the industry 4.0 podcast

Play Episode Listen Later Nov 30, 2022 44:01


Augmented reveals the stories behind the new era of industrial operations, where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is "A Scandinavian Perspective on Industrial Operator Independence." Our guest is Johan Stahre (https://www.linkedin.com/in/jstahre/), Professor and Chair of Production Systems at Chalmers University in Sweden. In this conversation, we talk about how the field of human-centered automation has evolved, the contemporary notion of operator 4.0, Scandinavian worker independence, shop floor innovation at Volvo, factories of the future, modern production systems, robots, and cobots in manufacturing. If you like this show, subscribe at augmentedpodcast.co (https://www.augmentedpodcast.co/). If you like this episode, you might also like Episode 84 on The Evolution of Lean with Professor Torbjørn Netland from ETH Zürich (https://www.augmentedpodcast.co/84). Augmented is a podcast for industry leaders, process engineers, and shop floor operators, hosted by futurist Trond Arne Undheim (https://trondundheim.com/) and presented by Tulip (https://tulip.co/). Follow the podcast on Twitter (https://twitter.com/AugmentedPod) or LinkedIn (https://www.linkedin.com/company/75424477/). Trond's Takeaway: Human-centered automation is the only kind of automation that we should be thinking about, and this is becoming more and more clear. Operators are fiercely independent, and so should they be. This is the only way they can spot problems on the shop floor, by combining human skills with automation in new ways augmenting workers. It seems the workforce does not so much need engagement as they need enablement. Fix that, and a lot can happen. Transcript: TROND: Welcome to another episode of the Augmented Podcast. Augmented brings industrial conversations that matter, serving up the most relevant conversations on industrial tech. Our vision is a world where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is A Scandinavian Perspective on Industrial Operator Independence. Our guest is Johan Stahre, Professor and Chair of Production Systems at Chalmers University in Sweden. In this conversation, we talk about how the field of human-centered automation has evolved, the contemporary notion of operator 4.0, Scandinavian worker independence, shop floor innovation at Volvo, factories of the future, modern production systems, robots, and cobots in manufacturing. Augmented is a podcast for industrial leaders, process engineers, and shop floor operators hosted by futurist Trond Arne Undheim and presented by Tulip. Johan, welcome. How are you? JOHAN: I'm fine, thank you, Trond. It's really nice to see you. TROND: Yeah, likewise. JOHAN: Fellow Nordic person. TROND: Fellow Nordic person. And I apologize for this very American greeting, you know, how are you? As you know, I'm from the Nordic region. I actually mean it, [laughs] you know, it was a question. So I do wonder. [laughs] JOHAN: I'm actually fine. It's just ending the vacation, so I'm a little bit sad about that because everyone...but it's a very nice time now because the rest of the world seems to be on vacation, so you can get a lot of work done. TROND: I concur; that is a wonderful time. Johan, I wanted to just briefly talk about your exciting background. You are an engineer, a mechanical engineer from Sweden. And you had your initial degree from Linköping University. Then you went on to do your Ph.D. 
a while back in manufacturing automation, and this was at Chalmers, the University in Sweden. And that's where you have done your career in manufacturing research. You are, I think, the first Scandinavian researcher certainly stationed currently in Sweden that we've had on the podcast. So I'm kind of curious, what is manufacturing like in Scandinavia? And what is it that fascinated you about this topic so that you have moved so deeply into it? JOHAN: Manufacturing in Sweden is the core; it's the backbone of our country in a sense. We have statistically too many large manufacturing companies in Sweden as compared to, I mean, we're only 10 million people, but we have like 10, 12 pretty large companies in the manufacturing area in automotive but also in electronics like Ericsson, you have Volvo, we have SKF. We have a lot of big companies. Sweden has an industrial structure that we have several small companies and a couple of large companies, not so many in the middle section there. This happened, actually, in the 1800s somewhere. There was a big growth of big companies, and there was a lot of effort from the government to support this, and that has been continued. So the Swedish government has supported the growth of industry in Sweden, and therefore we have a very strong industry and also quite good digital growth and maturity. TROND: So the Scandinavian background to me when I was there, I remember that one of the things that at least Scandinavian researchers think is distinct about Scandinavia is worker independence. And it's something that I kind of wanted to just tease out a little bit in the beginning of this podcast. Am I wrong in this, or is there something distinct about the relationship between, I guess, workers and managers in Scandinavia, particularly? One speaks about the Scandinavian model. Can you outline a little bit what that means in manufacturing if it still exists? It's an open question. JOHAN: From my perspective, Sweden usually ranks very high in innovation, also when it comes to international rankings. And I think some of that has to do with the openness and the freedom of thinking in a sense and not so hierarchical, more consensus-oriented, ability to test and check and experiment at work without getting repercussions from top management. And it is much easier. In fact, if you are at one department in a manufacturing company or in university as such and you want to collaborate with another colleague across the aisle, if you have a too-hierarchical system, you need to go three levels up in order to be able to do that. But here, I think it's easier to just walk across the aisle to have this collaboration and establish a cooperative environment. I think that that's part of the reason. Also, we're not so many; I mean, I think historically, we needed to do a lot of things ourselves in Sweden. We were a country up north with not so many people, and we have harsh environments, and I think it's the same as Norway. I mean, you need to be self-sustainable in that sense, and that creates, I think, an environment of collaboration. TROND: We'll go more deeply into your research on manufacturing and to what extent a question I asked here matters to that. But do you have a sense just at the outset here that this type of worker and operators sort of independence, relative independence, perhaps compared to other regions, is it changing at all? Or is this kind of a feature that is a staple of Scandinavian culture and will be hard to change both for good and for bad? 
JOHAN: I think that as everything...digitalization has sort of erased a lot of the cultural differences across the world in that sense. Because when I was a student, there was not this expressed digital environment, of course. The information environment was less complex. But I think now all the young people, as well as my mother, does her banking...she's 90, but she does her banking on her iPad; I mean, it's very well-spread. And I think that we are all moving towards a similar culture, and the technology is spreading so quick. So you cannot really have cultural differences in that sense. But I think that's still the way that we're using this. And I think that the collaborative sense I think that that is still there. The reason why Sweden is comparatively innovative still is that we still maintain our culture and use the technology to augment that capability. TROND: So, Johan, we'll talk about a bunch of your experiences because you obviously are based in Sweden. And because of Sweden's industrial situation, you have some examples, you know, Volvo, a world-famous company obviously, and also famous for its management practices, and its factory practices, we'll get into that. But you've also worked, and you're advising entities such as the World Economic Forum, and you are active on the European stage with the European Institute of Technology. Your activity clearly goes way, way beyond these borders. But why don't we maybe start with some of these Scandinavian experiences and research projects that you've done maybe with Volvo? What is it with Volvo that captured people's attention early on? And what sort of experience and research have you done with Volvo? JOHAN: I think that Volvo is very innovative, and Volvo today is two types of companies; one is the car company that has now gone fully electric. It was introduced at the stock market, most recently owned by a Chinese company, and before that, it was owned by Ford, and before that, it was also public. But you also have the other part, which is the Volvo Group, which is looking at trucks, and boats, and things like that. And they both share a high level of innovation, ambition, innovation, and power, I think, using the experiences already from the '60s, where you had a lot of freedom as an employee. And also very good collaboration with the union in investments and in all the changes in the company I think that has been very beneficial. And it's made them...what is now Volvo Cars was very, very early, for example, with digital twins. They were experimenting with digital twins already in the 1990s. And we work together with Volvo but also with SKF, which is a roller-bearing company here to look at how we can support frontline workers and augment their capabilities because they're very skilled and they're very experienced. But sometimes you need to have sensor input, and you need to have structures, and rules, and procedures, and instructions. So we worked quite early with them already, maybe in 2009, 2010, to see how can we transform their work situation, provide them with work instructions through wearable devices. It was very popular at that time. MIT was experimenting with cyborgs. And the people that were...I think it was Thad Starner; he was trying to put on a lot of computer equipment. Then he went through the security at the airport and had some problems there. But that's not the case for the operators. But it was a little bit too early, I think. We tried to experiment with some of the maintenance people at Volvo cars. 
And they were very interested in the technology, but the use for it was a little bit obscure. And this was at the time when mobile connectivity was 9,600 bits per second through a mobile phone or a modem, so Wi-Fi more or less did not exist. And the equipment: the batteries weighed two kilos, and the computer weighed one kilo. And then you had a headset that looked like you came from deployment in a war zone. So it was a little bit...it looked a little bit too spacy for them to be actually applicable. And then some 10 years later, we actually did a similar experiment with SKF, the roller bearing company, where we deployed the first iPod touch, I think they were called. That was right before the iPhone. I think it was an experiment by Steve Jobs to see how can we create what then became the iPhone screen. And we put that on the arms of the operators and tried to see how can we give them an overview of the process situation. So they were constantly aware, and they were quite happy about this. And then, we wanted to finish the experiment. The operators actually said, "Well, we don't want to give the equipment back." And then we said, "Well, we need to have it back. Of course, you can use the software." So they brought their own phones, and they downloaded the software. And they're still using it, actually, not on their own phones anymore. But they use this kind of software that we developed at that time together with them. So that was quite interesting. TROND: That's fascinating. Extrapolating from some of these early experiences up until now, I wanted to just ask you this from a research perspective, but also, I guess, from a management perspective. So you work on production systems. What is really the goal here, or what has the objective been early on? You talked about these early MIT experiments. And I know control systems is a very old area of research. And from what I understand, in the early days, the use cases weren't just factories; they were also on spacecraft and things. But to your point, especially earlier, we were working with very, very different technology interfaces. But now, obviously, we are starting to roll out 5G, which gives a whole other type of richness. But does it really matter how rich the technology interface is? Or does it matter more what the objective is with these various types of augmentations that have been attempted really throughout the decades? Can you just give us a little sense of what researchers and yourself what you were trying to augment and how that depends or doesn't depend on the quality of technology? JOHAN: First, we need to realize that the manufacturing industry has always been a very, very early adopter. The first computers were used for war simulations and for making propellers for submarines to see how you can program the milling machines. This was in the 1950s. And the industrial robots in the '60s and the '70s were also very early applications of digitalization. Before anything else had computers, the manufacturing industry was using it, and that's still the case. That might surprise some people. When they walk out into a shop floor, they see no computers around because all the computers are built into the machines already. What is still missing is the link, perhaps to the people. So they are still using the screens. And they are the ones...people are the key components of handling complex and unforeseeable situations. 
So you need to provide them, I think...to be really productive, you need to provide the frontline staff with the equipment for them to avoid and to foresee and to handle unforeseen situations because that's what differs between the man and machine or a human and the machine. People are much more apt to solve a complex situation that was not programmed before. That's the augmentation part here; how can we augment the human capabilities? And people talk about augmented reality; I mean, I don't think it's the reality that needs to be augmented; it's the human to be handling the reality that needs to be augmented. TROND: Johan, this is so fascinating because, first of all, it's quite easy to dismiss manufacturing a little bit these days because, to the untrained eye, all the excitement is in the consumer space because that's where the new devices get released, and that's, obviously, where all the attention is these days unless you obviously are in manufacturing. But can you bring us back to those early days of computing when a lot of the use cases for computing were first explored with manufacturing? So you talked about MIT, and back at MIT and at Stanford, all the way back to the '60s, they were exploring this new and fascinating field of even artificial intelligence, but before that, just regular control systems, electronic interfaces. What fork in the road would you say happened there? Because clearly, the fascination has been with digitalizing everything and software kind of one for 30 years, but in manufacturing, it's more complicated. You say people, so it's people, and then it's kind of these production systems that you research. That's not the same as the use case of an individual with their phone, and they're sort of talking to people. There are many, many more variables in play here. What is the real difference? JOHAN: Last year actually the European Commission put forth industry 5.0, which should be the follower after industry 4.0. And they based that on three main challenges. One is sustainability, one is resilience, and the various kinds of resilience towards the shock of the war but also by climate, et cetera. And the third one is actually human-centeredness to see how can we really fully deploy human capabilities in a society and also in industry, of course. I think what you're referring to is the two guys at Stanford in the '60s; one was John McCarthy. He was the inventor of the artificial intelligence concept. His aim then was to replace human work. That was the ambition with the artificial intelligence because human work is not as productive as computing work, but it still has some drawbacks. But in the same place not so far away, in another department at Stanford, was a guy called Douglas Engelbart. And he was actually the father of...he called it intelligence augmentation. So it was AI and IA at that time. But his ambition was to augment human work to see how can you have this. And he was the one that invented hypertext and the mouse. And he put up the first hypermedia set in Silicon Valley. So this was a guy that inspired companies like Apple, and Xerox PARC, those kinds of institutions that had a huge bearing. There was a book by a research colleague at Oxford. He was comparing that over time, from the early industrial days and then forward, technology that replaces people always has more complications when introduced and scaled than technology that augments people. 
If you look at the acceptance and the adoption of the iPhone, that took months, or weeks, or whatever, seconds for some people, for me, for example. If you look at what happened in the industrial revolutions in the 1800s and the 1700s, you had a lot of upheaval, and already in the 1960s...I'm starting to sound like a university professor. But in '66, in the U.S., there was a Senate hearing about is automation taking the jobs from people or not? And the conclusion was that it is not, it is actually creating companies that then employ more people because of the productivity gains and the innovation gains. And you allow people to use the automation as augmentation, not only cognitive augmentation. We think a lot about augmentation as something that you do with your eyes and your brain. But robots are also augmenting people. It lifts heavy objects like cars or big containers, whatever. That's the kind of augmentation that maybe you don't consider when you look at it from an artificial or an augmented reality perspective. TROND: Well, so many things to pick up here. But the variety of meanings of augmentation are kind of astounding, aren't they? And you've written about this operator 4.0 several times. There's obviously cognitive augmentation, and then there's physical augmentation. Are there other types of augmentation that you can speak of? JOHAN: I really can't think of any. TROND: But those are the main ones. So it's either kind of your mentality or sort of your knowledge. So the work instruction parts go to the skills-based, I guess, augmentation, which perhaps is an additional one. Or I'm just thinking if manufacturing wants to make progress in these things, it would perhaps make sense to really verify what workers at any moment actually themselves express that they need. And I guess that's what I was fishing for a little bit here in this history of all of this, whether the technology developers at all moments really have a clear idea of what it is that the workers are saying themselves they're missing or that they obviously are missing. Because automation and augmentation, I mean, do you find them diametrically opposed, or are they merely complementary when it works well? JOHAN: I mean, automation traditionally has been the way to scale, and, I mean, in the beginning, you want to see what the machine is doing, right? And then you really don't want to see it. You just want it to work. So it's really helping you to scale up your work. And in that sense, automation, like collaborative robots, for example, which people talk about as something that is replacing jobs - but if you look at it, it is a very small portion of statistics. In Singapore, which is the highest user of robots installed, there were 950 maybe robots per 10,000 employees. And the average in the Americas is 100 robots per 10,000 employees, and that's not really a lot. And so there is plenty of space for robots to be the tools for people. So if you don't treat them as something that will replace you but something that will actually augment you, I think it would be much easier. What could happen, though, and I think that is maybe part of your question, is that, well, these tools are becoming so complex that you cannot use them unless you increase your skill. How do you do that? Because no company would like to end up in a situation where the tools that you have bought and invested a lot of money in are too complex for your employees to use. That's a lost investment. 
It's like you're building a big factory out in a very remote place, and you don't have enough electric power to run it. You don't want to end up in that situation. Like you expressed, I think that maybe what's missing and what's trending right now is that the upskilling of the workforce is becoming extremely important. TROND: And how do you do that, Johan? Because there's obviously...there's now an increased attention on upskilling. But that doesn't mean that everyone has the solution for it. And employers are always asking for other people to pay for it, for example, governments, or the initiative of the worker, perhaps. It seems like Europe has taken this challenge head-on. Germany, at least, is recognized as a leader in workforce training. The U.S. is a latecomer to the game from that perspective. But it typically shows up in a big way. So something is going to happen here in the U.S. when it comes to workforce training. What is the approach? I mean, there seems to be two approaches to me; one is to simplify the technology, so you need less training. And the other would be obviously an enormous reskilling effort that either is organized, perhaps ideally in the workplace itself, so it's not removed from the tasks. Or some enormous schooling effort that is highly efficient and perhaps online. What do you think are the winning approaches to re-skilling that entire manufacturing workforce continuously? Because it's not like you have to reskill them once; you have to reskill them every time. JOHAN: Well, I can only guess. I think that you need to do all of these, all of the above. One complicating factor is the demographics of, especially Japan; of course, we have known that for a long time: they have an aging population. But Europe is now becoming the new Japan in that sense. We have a very big problem in terms of aging populations, especially countries like Italy and perhaps Germany but also in northern countries. And we don't have perhaps...there's a lot of discussion on immigration right now. But actually, the workforce would need a lot of immigration to be able to respond to the needs of our industry in the forthcoming situation. I think that China is maybe 4 or 5 years behind Europe, and the U.S. is maybe 10-12 years behind Europe as well. So that will happen...the only non-affected regions right now are India and Africa. And that means that the European, and Chinese, and U.S. industries will have to compete with a rather young population in Africa and India. And so that will become over time, but it is a long time, so that means that it's not always on the political agenda. Things that take a long time are usually not the things that you speak about when you have election times that we have in Sweden right now. It's mostly what's on the table. So I think that how to do that is really complex. We had some collaboration within the World Economic Forum. It is a fantastic organization because it spans the whole globe. So that means that the information comes from different parts of the world, and you can see different aspects of this. And a country that has done a lot about this is Singapore, very good experiments, very nice projects, initiatives regarding upskilling. And Europe is now launching an innovation program where they want to go deeper into deep tech to try to...the commissioner for research and education in June launched a big initiative around innovation and how that can be supported by deep technology. So we'll see what comes out of that. 
It'll be very, very interesting to see. MID-ROLL AD: In the new book from Wiley, Augmented Lean: A Human-Centric Framework for Managing Frontline Operations, serial startup founder Dr. Natan Linder and futurist podcaster Dr. Trond Arne Undheim deliver an urgent and incisive exploration of when, how, and why to augment your workforce with technology, and how to do it in a way that scales, maintains innovation, and allows the organization to thrive. The key thing is to prioritize humans over machines. Here's what Klaus Schwab, Executive Chairman of the World Economic Forum, says about the book: "Augmented Lean is an important puzzle piece in the fourth industrial revolution." Find out more on www.augmentedlean.com, and pick up the book in a bookstore near you. TROND: Speaking about the World Economic Forum for a minute, Johan, you have been part of this group project called the Augmented Workforce Initiative. You told me when we spoke earlier that, in your opinion, this initiative couldn't have existed even just five years ago. Can you explain what you mean by that? Because augmentation, the way that you've been speaking about it now, is a perspective that was nascent, even in the early days of computing and manufacturing control systems. Yet, it seems to have disappeared a little bit, at least from the top end of the political and research agenda. Yet here we are and you said this initiative couldn't have existed five years ago. Can you explain what you meant by that? JOHAN: That is a very, very nice initiative by the World Economic Forum, and it's run by the forum and Cambridge University, who has a very, very good group on this and some very nice people. And I'm honored to be part of that group together with my colleague from Mexico, David Romero. You may know him as well. And I think that what they're looking at is the increased understanding. And that was actually one of the sessions at this World Economic Forum, you know, the Davos days that were run this year. And it was actually part of those days as a theme about how to engage, and how to support, and to augment the workforce, which has never happened before on that level. So it's really, really high on the agenda. The Forum has been running previous projects also on the future of work and how the demographic situation is affecting or how the skill situation is affecting the companies. They have come up with suggestions that more or less half the workforce needs to be upskilled within the next couple of years. And that's a huge undertaking. TROND: The novelty here is that the world's elite managers, I guess, who are represented at the World Economic Forum are increasingly aware of the complexity of workforce issues generally, and then specifically of upskilling, and maybe even upskilling in this very specific meaning of augmenting a worker which, I guess to my mind, is a little bit different from just generally speaking about robotic automation and hammering these efficiency points. But obviously, it's a much more challenging debate because it's one thing to find a budget for an automation effort and introduce a lot of computers or introduce a lot of whatever technology, usually hardware, but what we're talking about here is a lot more challenging because you need to tailor it to these workers. And there are many workers, obviously, so it's a complicated phenomenon. How is that going? What would you say are some of the findings of the Augmented Workforce Initiative? 
JOHAN: I think that companies like Tulip, companies like Black & Decker, and others have a lot of good use cases actually already, which may or may not before have been labeled augmentation. It might have been labeled as operator support, or decision-making support, or things like that, or upskilling. But I think that the findings are that there is a lot out there, but it's not emphasized as something that is really important for the company's survival in that sense. TROND: It wasn't so glorified before. A lot of the decision support systems were viewed as lower-level systems that were just kind of more like HR systems or just tinkering with necessary stuff that people had to know kind of a thing. And so you're saying it's been elevated now, yeah, as having a much more essential impact on the quality of work. JOHAN: It has a leveraging impact for the whole company, I would say, but that's also part of this industry 4.0 approach. And you have the hierarchical integration of companies where the CEO should be aware of what's going on on the shop floor and vice versa, as well as the horizontal integration where you have the companies up and down the supply chain and value chain knowing what's going on early. And that is really something that maybe stopped at mid-management level before, but now it needs to be distributed out to the places where the complexity is higher, and that's the frontline workers. Maybe...now I'm guessing, but I think that also the understanding that the investments done by this company in complex manufacturing equipment could be at risk if you don't have the right skills to use them is now penetrating, I think, a lot of the companies. In Europe, in 2019 or something like that, there were almost 30 million people employed in the manufacturing industry. And if you look at the number of...if you say that half of these need to be upskilled somehow over a period of three years...and I actually made a mock calculation that the re-skilling need in person-months in Europe, if we were to fulfill this, is 50 million person-months, 50 million person-months, just the time for the people to participate in these trainings. So that's a huge undertaking. And I think that that scares companies as well as governments because just imagine taking 50 million person-months out of productivity or the production equation. But the alternative might be worse. If you lose your capability to use your equipment, that might even be worse. TROND: Wow, these are daunting things. I guess that brings me to the last section here and some thoughts from you on the future outlook. When it comes to technology and these tools for human augmentation, what are the timelines for, well, either making the improvements or, as you said, not losing competitiveness because of this skills crisis? What are we looking at here? Is there some imminent challenge and opportunity? Or is this going to play out over 25 years? JOHAN: I think that in 25 years, the demographic situations will have changed again, so I assume that they will look different. But right now, we have a problem with an aging population. And we have a lot of people going into retirement. A lot of knowledge will disappear unless we can store it somehow. A lot of people will not go into industry. I mean, when I talk to colleagues, they say, "Well, we need to make the manufacturing industry more sexy. It should be cleaner, or it should be nicer because young people don't go to industry." 
But if I go to the healthcare section, they will say the same thing, "Oh, we need to make it much better because people are not applying for these educations." TROND: [laughs] Where are people applying, the tech companies? JOHAN: No, that's the problem. They don't exist. They were never born. TROND: [laughs] Right. JOHAN: So the demographic bomb is that they are actually not there. So you cannot rely on employing young people because they are not existing in Europe and soon not in the U.S. to the extent that they were before. So therefore, you need to focus on the older people. So you need to re-upskill not only the middle-aged people but the people in their 50s and even in their 60s. That adds to the complexity. In the next 5 to 10 years, there will be a lot of discussions on how to fill the missing places in industry to remain competitive. I also think that you can see the augmentation here as a fantastic tool together with the upskilling because upskilling the new skills together with the augmented tools like collaborative robots, like cognitive support, like whatever you can put in an iPhone, or whatever phone, or tool, or watch, or whatever, you can add the capability to make decisions. And that's the augmentation you will see. And you will see a lot of digital twins try to foresee problems. You will see a lot of transversal technologies going from different high-tech industry into manufacturing industry to support especially the frontline people and to enable their innovation capabilities. TROND: Johan, you said earlier that the complexity is higher at the level of frontline workers. Did you mean that, basically, the complexity of frontline work of itself at an individual level is also underestimated? Or were you simply saying that because there are so many frontline workers and the various situations of various types of frontline workers is so different that it's obviously an underappreciated management challenge? Or were you truly saying that frontline work in and of itself is either complicated or becoming more complex? JOHAN: If a task was not automated, it is inherently complex. So you couldn't automate it, right? TROND: Right. JOHAN: Because if you can teach a robot or whatever to do tasks, then it's not difficult, and you can foresee the results. There was a lady called Lisanne Bainbridge. She put out The Paradox of Automation that the more you automate, the more dependent you become on the few people that are still there to handle the situations that are so complex that you could not foresee them. So everything that is programmed is programmed by a programmer, and the programmer tries to foresee every foreseeable situation, and to that extent, the robots and the automation works. But if these situations go out of hand, if they're too complex, and something happens, then there is no robot that can fix that. Unfortunately, AI is not there yet. TROND: Well, you said, "Unfortunately, AI is not there yet," but I would also conjecture that, fortunately, AI is not there yet because you're pointing to something missing, I think. And a lot of the AI debate is starting to come back now. And it was there in the '60s because people realized that for lots of different reasons, to have a human oversight over robotic processes is actually a good thing. 
And you talked to me earlier about the experiments with imagining a trip to Mars and having to execute robotic actions on Mars in a control system environment where you actually had to foresee the action and plan; it was always a supervised type of situation. So the supervisory control concept has been there from the beginning of computing. If you were to think of a future where AI actually does get more advanced, and a lot of people feel like that's imminent, maybe you and I don't, but in any case, let's imagine that it does become more advanced and becomes sort of a challenge, how do we maintain human control over those kinds of decisions? I mean, there are researchers that have imagined, you know, famously in Superintelligence, Bostrom imagines this paperclip factory that goes amok and starts to optimize for producing paperclips, and everyone is suddenly producing, you know, and the machine then just reallocates resources to this enormously ridiculous task of producing only paper clips. It's a very memorable example. But a lot of people feel that AI could soon or at some point reach that level. How do we, as a failsafe, avoid that that becomes an issue? Or do you see it as such a far-fetched topic in manufacturing that it would be decades, if not centuries, away? JOHAN: I think that AI has been seasonal if you allow the expression. There's talk about these AI winters every now and then, and they tend to come every 10 or 15 years, and that matches two Ph.D. lifetimes, Ph.D. development. I mean, people tend to forget the problems, and then they tend to use these Gartner curves. If you look at the Gartner curve, you have the expectation part. I'm not being arrogant towards the AI research. I think that AI is fantastic, but it should be seen, from my perspective, as what it is, as an advanced form of automation that can be used as an augmentation tool. I think it was Kasparov that started to collaborate with a chess computer maker or developer, and they won every tournament because the combination of the human and the chess computer was astounding. And now I think there are even competitions where chess computers plus chess experts compete together. There was, I think, in the 1800s, there was a traveling exhibition where they had the Mechanical Turk, I think it was called. It was a chess player that was competing then against the people in the audience. And actually, inside this box, there was a small human that was making all the chess moves. And they were beating all the chess champions. So there was a man inside this. I think that there is still a man inside a lot of the automation. TROND: A man and a woman. I wanted to just lastly end on a more positive note because you told me earlier that you are more optimistic now than ten years ago on behalf of your industry that you've researched for so many years. Why is that? JOHAN: I think that the technology, I mean, I'm a techno-optimist. And I think that we have also the full scale, the full attention from the ICT industry on various industrial processes right now. It was a lot of service-oriented. And I think that that is playing out now in the platform wars, the different services, but these different services are actually making a lot of good in the manufacturing and the tougher industries. And so, there is a bigger focus now on creating CO2-less steel. 
And there's an exploration of different industries that cut across each other; you look at the electrification of vehicles, which cuts across several sectors: the automotive industry, the electronics industry. And I think that the problems in industry are becoming so complex. So the ICT attention is on industry now more than perhaps on consumers, as it were, and I think that that's promising. I see companies like Ericsson promoting 5G. I see companies like Amazon Web Services looking at services that are useful for industry. And that's also augmenting people's capability in that sense, so that's why I'm so positive. I see all the sensors coming. I see all the computing power coming into the hands of the frontline operators. And I see also the use for the upskilling and the skilling technologies that are emerging. How do you do that? It's what they do in The Matrix, when the leading lady downloads the instructions for the helicopter or motorcycle or whatever it is. But how do you do that in real life? How do you prepare for something that's coming in the next few minutes? That is something that people are now looking at using technologies, augmenting technologies, digital twins, and things like that in a completely different way than they were five years ago. TROND: Wow. So these are exciting moments for learning in manufacturing, with perhaps wide-ranging consequences if we succeed. Johan, I thank you so much for these reflections. You've spent a career investigating production systems, and manufacturing, and workers. And these are very rich debates. And it seems like they're not over, Johan. So, hopefully, we'll have you back when something happens. And we'll have you comment on some developments. Thank you very much. JOHAN: Thank you, Trond. Thank you for a very interesting discussion. You always learn a lot by being asked a lot of questions, so thank you so much for this learning experience. Thank you. TROND: You're very gracious. Thank you, Johan. You have just listened to another episode of the Augmented Podcast with host Trond Arne Undheim. The topic was a Scandinavian Perspective on Industrial Operator Independence. Our guest was Johan Stahre, Professor and Chair of Production Systems at Chalmers University of Technology in Sweden. In this conversation, we talked about how the field of human-centered automation has evolved. My takeaway is that human-centered automation is the only kind of automation that we should be thinking about, and this is becoming more and more clear. Operators are fiercely independent, and so should they be. This is the only way they can spot problems on the shop floor: by combining human skills with automation in new ways, augmenting workers. It seems the workforce does not so much need engagement as they need enablement. Fix that, and a lot can happen. Thanks for listening. If you liked the show, subscribe at augmentedpodcast.co or in your preferred podcast player, and rate us with five stars. If you liked this episode, you might also like Episode 84 on The Evolution of Lean with Professor Torbjørn Netland from ETH Zürich. Hopefully, you'll find something awesome in these or in other episodes, and if so, do let us know by messaging us. We would love to share your thoughts with other listeners. The Augmented Podcast is created in association with Tulip, the frontline operation platform that connects people, machines, devices, and systems in a production or logistics process in a physical location. 
Tulip is democratizing technology and empowering those closest to operations to solve problems. Tulip is also hiring, and you can find Tulip at tulip.co. Please share this show with colleagues who care about where industry, and especially industrial tech, is heading. To find us on social media is easy; we are Augmented Pod on LinkedIn and Twitter and Augmented Podcast on Facebook and YouTube. Augmented — industrial conversations that matter. See you next time. Special Guest: Johan Stahre.

Medsider Radio: Learn from Medical Device and Medtech Thought Leaders
Building a Consumer-Focused Medtech Business: Interview with Tivic Health Co-founder and CEO Jennifer Ernst

Medsider Radio: Learn from Medical Device and Medtech Thought Leaders

Play Episode Listen Later Nov 9, 2022 57:22


In this episode of Medsider Radio, we sat down with Jennifer Ernst, CEO of Tivic Health.

Jennifer came to the medtech space after more than 20 years in the computing and electronics industries, serving in high-profile roles at Xerox PARC and Thin Film Electronics. In 2016, Jennifer founded direct-to-consumer medtech company Tivic Health and helped lead the development and commercialization of the company's flagship device ClearUP.

In this interview, Jennifer shares how consumers can play a key role in boosting support and evidence for over-the-counter medical devices, and why she thinks direct-to-consumer business models will transform the medtech space.

Before we jump into the conversation, I wanted to mention a few things:

If you're into learning from proven medtech and healthtech leaders, and want to know when new content and interviews go live, head over to Medsider.com and sign up for our free newsletter. You'll get access to gated articles, and lots of other interesting healthcare content.

Second, if you want even more inside info from proven experts, think about a Medsider premium membership. We talk to experienced healthcare leaders about the nuts and bolts of running a business and bringing products to market. This is your place for valuable knowledge on specific topics like seed funding, prototyping, insurance reimbursement, and positioning a medtech startup for an exit. In addition to the entire back catalog of Medsider interviews over the past decade, premium members get a copy of every volume of Medsider Mentors at no additional cost. If you're interested, go to medsider.com/subscribe to learn more.

Lastly, here's the link to the full interview with Jennifer if you'd rather read it instead.

TNT Radio
Richard Moore on The No Fly Zone with Greg Maybury - 17 September 2022

TNT Radio

Play Episode Listen Later Sep 17, 2022 54:17


On today's show we discuss developments in the Russia/Ukraine conflict and European energy issues. GUEST OVERVIEW: For the past ten years Richard Moore has been focusing his research on how it might be possible for humanity to escape from elite domination. He has also covered the amazing breakthroughs that have been happening in science, usually by groups of independent researchers who are being ignored by mainstream scientists. Richard Moore focused on math and computer science at Stanford, and upon graduation jumped whole-heartedly into the emerging Silicon Valley scene doing software R&D. He worked at many of the leading-edge companies of their day, including Tymshare, Xerox PARC, Apple Computer, and Oracle. After thirty years in the software industry, Richard decided that there had to be more to life than commuting and trading days for dollars and moved to Ireland.

The Gradient Podcast
Christopher Manning: Linguistics and the Development of NLP

The Gradient Podcast

Play Episode Listen Later Sep 8, 2022 71:35


Have suggestions for future podcast guests (or other feedback)? Let us know here!

In episode 41 of The Gradient Podcast, Daniel Bashir speaks to Christopher Manning. Chris is the Director of the Stanford AI Lab and an Associate Director of the Stanford Human-Centered Artificial Intelligence Institute. He is an ACM Fellow, an AAAI Fellow, and past President of ACL. His work currently focuses on applying deep learning to natural language processing; it has included tree recursive neural networks, GloVe, neural machine translation, and computational linguistic approaches to parsing, among other topics.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
(00:00) Intro
(02:40) Chris's path to AI through computational linguistics
(06:10) Human language acquisition vs. ML systems
(09:20) Grounding language in the physical world, multimodality and DALL-E 2 vs. Imagen
(26:15) Chris's Linguistics PhD, splitting time between Stanford and Xerox PARC, corpus-based empirical NLP
(34:45) Rationalist and Empiricist schools in linguistics, Chris's work in 1990s
(45:30) GloVe and Attention-based Neural Machine Translation, global and local context in language
(50:30) Different Neural Architectures for Language, Chris's work in the 2010s
(58:00) Large-scale Pretraining, learning to predict the next word helps you learn about the world
(1:00:00) mBERT's Internal Representations vs. Universal Dependencies Taxonomy
(1:01:30) The Need for Inductive Priors for Language Systems
(1:05:55) Courage in Chris's Research Career
(1:10:50) Outro (yes Daniel does have a new outro with ~ music ~)

Links:
Chris's webpage

Papers (1990s-2000s):
Distributional Phrase Structure Induction
Fast exact inference with a factored model for Natural Language Parsing
Accurate Unlexicalized Parsing
Corpus-based induction of syntactic structure
Foundations of Statistical Natural Language Processing

Papers (2010s):
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
GloVe
Effective Approaches to Attention-based Neural Machine Translation
Stanford's Graph-based Neural dependency parser

Papers (2020s):
Electra: Pre-training text encoders as discriminators rather than generators
Finding Universal Grammatical Relations in Multilingual BERT
Emergent linguistic structure in artificial neural networks trained by self-supervision

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Thinking Inside the Box
Kimberly Wiefling: How a Physicist Sees Organizational Culture

Thinking Inside the Box

Play Episode Listen Later Jul 12, 2022 39:15 Transcription Available


In today's episode, I chat with Kimberly Wiefling, Founding Member & Global Consultant at Silicon Valley Alliances. She's also the Founder & President of Wiefling Consulting, a firm launched on Feb. 1, 2001 to support healthy and productive organizational cultures, providing practical business leadership and project management guidance ever since.

Kim is a Trusted Advisor, Valued Partner and Respected Colleague who, by her own admission, shocks companies into fixing the things that will Damage or Destroy them. With extensive experience in the R&D and manufacturing of complex systems of hardware, software, firmware, chemistry and physics, Kim has held leadership positions ranging from NPI Program Manager to VP of Program Management & Organizational Effectiveness for a Xerox PARC spinoff, and has helped companies from startups to the Global 1000 successfully grow and, more importantly, SCALE. A physicist by education, she has found that many of these are "human" problems that don't require "rocket science". All that's required to achieve significant positive impact in many cases is to make common sense into common PRACTICE.

In the discussion, Kim & I cover a lot of ground: from her professional background as a physicist, to how she transforms work environments for clients around the world. We close by discussing the strategic imperative of mental health & wellbeing. It was such a pleasure connecting with Kim. And I hope you enjoy it.

Kimberly Wiefling
Kimberly Wiefling is the founder of Wiefling Consulting, a Silicon Valley-based global consultancy. She has worked with people from over 50 different countries. Her book, Scrappy Project Management, has gotten her invited to speak to audiences globally. She works globally with valued colleagues at Silicon Valley Alliances. Her keynotes and workSHOCKs enable people to make positive changes to overcome the predictable & avoidable leadership, team & organizational culture issues that damage or destroy organizations.
LinkedIn | Website | Books

Thinking Inside the Box
Constraints drive innovation. We tackle the most complex issues related to work & culture. And if you enjoy the work we're doing here, consider giving us a 5-star rating, leaving a comment & subscribing. It ensures you get updated whenever we release new content & really helps amplify our message.
LinkedIn | Instagram | Twitter | Website | Apple Podcasts | Google Podcasts | Spotify | Stitcher | Pocket Cast

Matt Burns
Matt Burns is an award-winning executive, social entrepreneur and speaker. He believes in the power of community, simplicity & technology.
LinkedIn | Twitter

Mind Body Health & Politics
Rhythm and Sound; How Music can Heal - Gary Muszynski

Mind Body Health & Politics

Play Episode Listen Later Jun 14, 2022 82:24


This week I am pleased to welcome renowned musician Gary Muszynski to the program.

Gary Muszynski is a percussionist, composer, bandleader, and leadership coach who creates original music that combines jazz, world, and classical music. He plays a wide variety of world percussion, including handpan, berimbau, pandeiro, surdo, udu, mbira, conga, bongo, and cajon. He has performed at venues such as SF Jazz (with Bobby McFerrin), the Freight and Salvage in Berkeley, CA, the Country Music Hall of Fame in Nashville, and at TEDxBerkeley.

Gary received an artist's grant to study Brazilian folkloric and popular music at the Carlos Gomez Conservatory in Belem, Para (Brazil), at the mouth of the Amazon, through the Partners of the Americas in 1989. It was at that time that he also met Martinho da Vila, one of Brazil's most important samba singers and composers, and began to study and parade with the Vila Isabel School of Samba in Rio de Janeiro, and then with Olodum in Salvador, Bahia in 2005.

Gary was one of the first percussionists to spread samba in the US, founding a samba school in the Midwest in 1987. He founded One World Music at that time, a non-profit performing arts and education organization, receiving funding from the Missouri Arts Council and the St. Louis Regional Arts Commission.

Currently residing in the SF Bay Area, Gary released his newest album, Roots and Wings, featuring Sting's pianist and arranger Frank Martin; Mark Summer, the former cellist and co-founder of the Turtle Island Quartet; Cuban jazz-piano legend Omar Sosa; and Deepak Ram, North Indian bansuri flute master, as well as many other musical luminaries. Roots & Wings won the top award at the Global Music Awards in 2021 and was then voted one of the best CDs of 2021.

In addition to his career as a recording and performing artist, Gary has been bringing musical experiences and thinking into organizations to further leadership, collaboration, and innovation for the past 30 years. He has reached more than 150,000 leaders and managers through organizational trainings and interactive conference keynotes on five continents and has worked with Apple, Disney, Google, and Xerox PARC, among many other organizations. You can see his work through Orchestrating Excellence here.

Engines of Our Ingenuity
Engines of Our Ingenuity 2786: Ingenuity Leashed

Engines of Our Ingenuity

Play Episode Listen Later Jun 2, 2022 3:49


Episode: 2786 Creativity Leashed: How a new way of looking at computers got away. Today, learning from an Alto.

The History of Computing
Colossal Cave Adventure

The History of Computing

Play Episode Listen Later Jun 2, 2022 11:28


Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn't work but “look” does. “Take water” works, as does “Drink water,” but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played, and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong. The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois Champaign-Urbana. As the computer monitor spread, so spread games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor, the first nodes of the packet-switching ARPANET, the ancestor of the Internet. They were long hours, but when he wasn't working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers. The two got divorced in 1975, and like many suddenly single fathers, he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could appear to understand text provided to them. It was most notably used in tests to have a computer provide therapy sessions. And writing software for the kids, or gaming, can be therapeutic as well. As can replaying happier times. Crowther explored Mammoth Cave National Park in Kentucky in the early 1970s. The locations in the game follow along his notes about the caves; the player explores the area using natural language while the computer looks for commands in what was entered. It took about 700 lines to do the original Fortran code for the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code. Source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10. 
He went to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, or the Digital Equipment Computer Users Society. A lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods. The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used, although we now have vast scenery rendered and can point and click where we want to go, so we don't need to type commands as often. The interpreter looked for commands like “move”, “interact” with other characters, “get” items for the inventory, etc. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update that game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977, and it's still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80, and followed that up in 1981 with a version for Microsoft DOS, or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga. Bob Supnik rose to Vice President at Digital Equipment, not because he ported the game, but it didn't hurt. And throughout the 1980s, the game spread to other devices as well. Peter Gerrard implemented the version for the Tandy 1000. The Original Adventure was a version that came out of Aventuras AD in Spain. They gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even though it was eventually eclipsed by the likes of Zork. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House. And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game, Adventure. Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He was inspired into a life of programming by a programming professor he had in college named Ken Thompson, who was on sabbatical from Bell Labs. That's where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. The Adventure game went on to sell over a million copies, and the genre of fantasy action-adventure games moved from text to video.
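For a feel of how these early interpreters worked: Crowther's original was roughly 700 lines of FORTRAN, and the sketch below is only a loose, invented approximation in Python of the two-word verb-noun loop described above, with a made-up two-room map rather than Crowther's cave data.

```python
# A toy two-word parser in the spirit of Colossal Cave Adventure.
# The room map, verbs, and messages are illustrative, not Crowther's code.

rooms = {
    "road": {"desc": "You are standing at the end of a road before a small brick building.",
             "north": "building"},
    "building": {"desc": "You are inside a small brick building, a wellhouse for a spring.",
                 "south": "road"},
}

inventory = []
location = "road"

print(rooms[location]["desc"])
while True:
    words = input("> ").strip().lower().split()
    if not words:
        continue
    verb = words[0]
    noun = words[1] if len(words) > 1 else ""
    if verb in ("n", "north", "s", "south"):          # movement verbs
        direction = {"n": "north", "s": "south"}.get(verb, verb)
        if direction in rooms[location]:
            location = rooms[location][direction]
            print(rooms[location]["desc"])
        else:
            print("You can't go that way.")
    elif verb == "look":                              # "look" works...
        print(rooms[location]["desc"])
    elif verb in ("take", "drink"):                   # "take water", "drink water"
        if verb == "take" and noun:
            inventory.append(noun)
        print(f"You {verb} the {noun or 'nothing'}.")
    elif verb == "quit":
        break
    else:                                             # ..."search" doesn't
        print(f"I don't know how to '{verb}'.")
```

Everything unrecognized falls through to the same shrug of a message, which is exactly the experience of hunting for the verbs the game happens to know.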

FOSS and Crafts
46: Mark S. Miller on Distributed Objects, Part 1

FOSS and Crafts

Play Episode Listen Later Jun 1, 2022


Calling all programming language nerds! Distinguished computer scientist Mark S. Miller (presently at Agoric) joins us to tell us all about distributed object programming languages and their history! We talk about actors, a bit of Xanadu, and little known but incredibly influential programming languages like Flat Concurrent Prolog, Joule, and E! Actually there's so much to talk about that this episode is just part one! There's more to come!

Links:
The actor model (the core of which is sometimes distinguished from modified variants by being called "the classic actor model"). Long history; Tony Garnock-Jones' History of Actors is maybe the cleanest writeup.
The Agoric Open Systems papers by Mark Miller and Eric Drexler are a good background into the underlying motivations that got Mark into distributed objects.
markm-talks and markm-more-talks, which are mostly about object capability security topics.
APConf keynote, Architectures of Robust Openness, by Mark S. Miller (YouTube copy).
Mark diagramming a (certificate based) object capabilities flow at Rebooting Web of Trust 2017 (when Mark and Christine first met!).
The history of Mark and company performing civil disobedience to make cryptography available to everyone is discussed in When Encryption Was a Crime: The 1990s Battle for Free Speech in Software, part of a four part series.
RSA
Xanadu, Ted Nelson, and Computer Lib/Dream Machines
Xerox PARC, which is where the Vulcan group happened (which is hard to find information on, sadly).
Mark mentions some of his colleagues who worked with him in the Vulcan group, including Dean Tribble (who worked on Joule, see more below) and Danny Bobrow, who is famous for his groundbreaking program STUDENT (Natural Language Input for a Computer Problem Solving System is an incredible read, detailing a program (written in Lisp!) which could read algebra "word problems" written in plain English and solve them... in 1964!).
Flat Concurrent Prolog... it's tough to find things about! Presumably here's the paper Mark mentioned that Dean led on Flat Concurrent Prolog from the Vulcan group, which led to Joule's channels. A bit more on (go figure) erights.org!
The Joule manual is still a very interesting read, if you can find the time. Talks about channels in depth.
Here's the Communicating Sequential Processes book by Tony Hoare, quite a nerdy read!
On capabilities and actors... we'll get to this more in the next episode, but for now we'll leave the Ode to the Granovetter Diagram paper here (it's a truly amazing document!)

ANTIC The Atari 8-bit Podcast
ANTIC Interview 434 - Michael Park: Swan and Fujiboink Demos, MIDI Maze

ANTIC The Atari 8-bit Podcast

Play Episode Listen Later May 14, 2022 19:29


Michael Park: Swan and Fujiboink Demos, MIDI Maze

Michael Park created two well-known demos that are familiar to many Atari enthusiasts: the Swan Demo and FujiBoink. In the Swan demo, a bird flies gracefully across the screen, in front of a spinning fuji logo. In FujiBoink, the Atari fuji spins and bounces over a red and white checkerboard, reminiscent of the Amiga Boing Ball demo.

Michael also helped create MIDI Maze, an early first-person shooter that used the Atari ST's MIDI ports to network up to 16 computers. He also worked on the 8-bit version of MIDI Maze, which was never officially released but became available nonetheless. Michael also created Shiny Bubbles, another demo for the Atari ST.

Michael was a friend of the owner of Xanth Computer Systems, an Atari dealer in Seattle, Washington. A 2013 article titled "Computer Dealer Demos: Selling Home Computers with Bouncing Balls and Animated Logos," published in the IEEE Annals of the History of Computing, stated:

"During the 1985 Winter CES, Atari presented the 130XE... This computer was promoted with a demo that included three animations—Atari Robot, Atari Swan, and Fuji Boink—made by a small software company named Xanth FX. The company's representative claimed in ANALOG Computing magazine, 'We are a large ST retailer. Our F/X division churns out demos for the betterment of Atari.' According to the testimonies of Atari users in Seattle, it was actually a 'small computer store in downtown Seattle' and a small software company that employed a few people, among them programmer and graphic designer Michael A. Park."

"Xanth Park" (a play on Xerox PARC) and the "F/X division" were deliberate tricks to make the little company, with its one or two great coders, seem like a big company.

Michael told me that neither he nor "Xanth Park" created the walking robot demo, another popular demo of the era. "I think we did combine robot/spaceship with the bouncing ball so they'd play sequentially, at Atari's request," he told me. He extracted the rotating fuji code from the Robot demo for re-use in his Swan demo.

After the interview, Michael sent an email: "Every now and then I hear from people who have enjoyed the Atari software that I was involved in way back when, and every time, I am reminded of the fun and excitement of those days. To those who have kept the Atari spirit alive all this time, I salute you!"

This interview took place on April 6, 2022.

Fujiboink! Behind the Bit Planes in START magazine
Computer Dealer Demos: Selling Home Computers With Bouncing Balls And Animated Logos by Patryk Wasiak
Demozoo's list of Michael's demos
Atariage discussion

The Voicebot Podcast
Chris Maeda CTO of Botco on Bots and eCommerce - Voicebot Podcast Ep 249

The Voicebot Podcast

Play Episode Listen Later Mar 31, 2022 65:04


Chris Maeda has an undergraduate degree in computer science from MIT and a PhD from Carnegie Mellon. He was hacking LISP machines in the late 1980s in his first AI job and then moved on to the famed Xerox PARC. After that, he wound up doing a lot of customer relationship management software startups, including co-founding Rubric Software, which merged with Broadbase in 2000 and was later absorbed into Kana, where he was EVP and CTO. He also became CEO of a company that eventually acquired the marketing software solutions from Kana, and added the title of CEO of MailMonitor, an email marketing software company. In 2017, Maeda co-founded Botco, which is focused on AI-powered natural language marketing and eCommerce automation for enterprises.

The History of Computing
The Earliest Days of Microsoft Windows NT

The History of Computing

Play Episode Listen Later Mar 24, 2022 17:55


The first operating systems as we might think of them today (or at least anything beyond a basic task manager) shipped in the form of Multics in 1969. Some of the people who worked on that then helped create Unix at Bell Labs in 1971. Throughout the 1970s and 1980s, Unix flowed to education, research, and corporate environments through minicomputers, and many in those environments thought a flavor of BSD, or Berkeley Software Distribution, might become the operating system of choice on microcomputers. But the microcomputer movement had a whole other plan, if only in spite of the elder minicomputers. Apple DOS was created in 1978, in a time when most companies who made computers had to make their own DOS as well, if only so software developers could build disks capable of booting the machines. Microsoft created their Disk Operating System, or MS-DOS, in 1981. They proceeded to Windows 1, which sat on top of MS-DOS, in 1985; it was built in Intel 8086 assembler and called operating system services via interrupts. That led to poor programming practices, with programs locked to specific memory addresses and written assuming a single-user operating system. Then came Windows 2 in 1987 and Windows 3 in 1990, and Microsoft released one of the most anticipated operating systems of all time in 1995 with Windows 95. 95 turned into 98, and then Millennium in 2000. But in the meantime, Microsoft began work on another generation of operating systems based on a fusion of ideas between work they were doing with IBM, work architects had done at Digital Equipment Corporation (DEC), and rethinking all of it with modern foundations of APIs and layers of security sitting atop a kernel. Microsoft worked on OS/2 with IBM from 1985 to 1989. This was to be the IBM-blessed successor to the personal computer. But IBM was losing control of the PC market with the rise of cloned IBM architectures. IBM was also big and corporate, and the small, fledgling Microsoft was able to move quicker. Really small companies that find success often don't mesh well with really big companies that have layers of bureaucracy. The people Microsoft originally worked with were nimble and moved quickly. The ones presiding over the massive sales and go-to-market efforts and the explosion in engineering team size were back to the old IBM. OS/2 had APIs for most everything the computer could do. This meant that programmers weren't just calling assembly any time they wanted and invading whatever memory addresses they wanted. They also wanted preemptive multitasking and threading. And a file system, since by then computers had internal hard drives. The Microsoft and IBM relationship fell apart, and Microsoft decided to go their own way. Microsoft realized that DOS was old and that building on top of DOS was going to some day be a big, big problem. Windows 3 was closer, as was 95, so they continued on with that plan. But they started something similar to what we'd call a fork of OS/2 today. So Gates went out to recruit the best in the industry. He hired Dave Cutler from Digital Equipment to take on the architecture of the new operating system. Cutler had worked on the VMS operating system and helped lead efforts at DEC for a next-generation operating system they called MICA. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. 
Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler and team's NT would replace all other operating systems in the family with the release of Windows 2000. Cutler wanted to escape the confines of what was by then the second largest computing company in the world. Cutler had worked on VMS and RSX-11 before he got to Microsoft. There were constant turf battles and arguments about microkernels and system architecture, and meetings weren't always conducive to actually shipping code. So Cutler went somewhere he could. At least, so long as they kept IBM at bay. Cutler brought some of the team from Digital with him, and they got to work on that next generation of operating systems in 1988. They sat down to decide what they wanted to build, using the NT OS/2 operating system they had as a starting point. Microsoft had sold Xenix, and the team knew about most every operating system on the market at the time. They wanted a multi-user environment like a Unix. They wanted programming APIs, especially for networking, but different than what BSD had. In fact, many of the paths and structures of networking commands in Windows still harken back to emulating those structures. The system would be slow on the 8086 processor, but ever since the days of Xerox PARC, everyone knew Moore's Law was real and that the processors would double in speed every other year. Especially since Moore was still at Intel and could make his law remain true with the 286 and 386 chips in the pipeline. They also wanted the operating system to be portable, since IBM had selected the Intel CPU but there were plenty of other CPU architectures out there as well. The original name for NT was to be OS/2 3.0. But the IBM and Microsoft relationship fell apart, and the two companies took their operating systems in different directions. OS/2 went the direction of Warp, and IBM never recovered. NT went in a direction where some ideas came over from Windows 95 or 3.1, but mostly the team just added layers of APIs and focused on making NT a fully 32-bit version of Windows that could be ported to other platforms including MIPS, PowerPC, and the DEC Alpha that Cutler had exposure to from his days at Digital. The name became Windows NT, and NT began with version 3, as it was in fact the third installment of OS/2. The team began with Cutler and a few others, grew to eight, and by the time it finally shipped as NT 3.1 in 1993 there were a few hundred people working on the project. Where Windows 95 became the mass-marketed operating system, NT took lessons learned from the Unix, IBM mainframe, and VMS worlds and packed them into an operating system that could run on corporate desktop computers, as microcomputers were called by then. The project cost $150 million, about the same as the first iPhone. It was a rough start. But that core team and those who followed did what Apple couldn't in a time when a missing modern operating system nearly put Apple out of business. Cutler inspired, good managers drove teams forward, some bad managers left, other bad managers stayed, and in an almost agile development environment they managed to break through the conflicts and ship an operating system that didn't actually seem like it was built by a committee. Bill Gates knew the market and was patient enough to let NT 3 mature. 
They took parts of OS/2, like LAN Manager. They took parts of Unix, like ping. But those were at the application level. The microkernel was the most important part. And that was a small core team, like it always is. The first version they shipped to the public was Windows NT 3.1. The sales people found it easiest to say that NT was the business-oriented operating system. Over time, the Windows NT series was slowly enlarged to become the company's general-purpose OS product line for all PCs, and thus Microsoft abandoned the Windows 9x family, which might or might not have a lot to do with the poor reviews Millennium Edition had. Other aspects of the application layer the original team didn't do much with included the GUI, which was much more similar to Windows 3.x. But based on great APIs they were able to move faster than most, especially in that era when Unix was in weird legal territory, changing hands from Bell to Novell, and BSD was also in dubious legal territory. The Linux kernel had been written in 1991 but wasn't yet a desktop-class operating system. So the remaining choices most businesses considered were really Mac, which had serious operating system issues at the time and seemed to lack a vision since Steve Jobs left the company, or Windows. Windows NT 3.5 was introduced in 1994, followed by 3.51 a year later. During those releases they shored up access control lists for files, functions, and services. Services being similar in nearly every way to a process in Unix. It sported a TCP/IP network stack, but also NetBIOS for locating computers to establish a share, and a file sharing stack in LAN Manager based on the Server Message Block, or SMB, protocol that Barry Feigenbaum wrote at IBM in 1983 to turn a DOS computer into a file server. Over the years, Microsoft and 3COM added additional functionality, and Microsoft filled out the SMB stack, with LDAP out of the University of Michigan as a backend and Kerberos (out of MIT) to provide single sign-on services. 3.51 also brought a lot of user-mode components from Windows 95. That included the Windows 95 common control library, which included the rich edit control, and a number of tools for developers. NT could run DOS software; now they were getting it to run Windows 95 software without sacrificing the security of the operating system where possible. It kinda' looked like a slightly more boring version of 95. And some of the features were a little harder to use, like configuring a SCSI driver to get a tape drive to work. But they got the ability to run Office 95, and it was the last version that ran the old Program Manager graphical interface. Cutler had been joined by Moshe Dunie, who led the management side of NT 3.1 through NT 4 and became the VP of the Windows Operating System Division, so he also had responsibility for Windows 98 and 2000. For perspective, that operating system group grew to include 3,000 badged Microsoft employees and about half that number of contractors. Mark Lucovsky and Lou Perazzoli joined from Digital. Jim Allchin came in from Banyan Vines. Windows NT 4.0 was released in 1996, with a GUI very similar to Windows 95. NT 4 became the workhorse of the field that emerged for large deployments of computers we now refer to as enterprise computing. It didn't have all the animation-type bells and whistles of 95 but did perform about as well as any operating system could. It had the NT Explorer to browse files and a Start menu, for which many of us just clicked Run and typed cmd. 
It had a Windows Desktop Update and a task scheduler. They released a number of features that would take years for other vendors to catch up with. DCOM, or the Distributed Component Object Model, and Object Linking & Embedding (or OLE) were core aspects any developer had to learn. The Telephony API (or TAPI) allowed access to the modem. The Microsoft Transaction Server allowed developers to build network applications on their own sockets. The Crypto API allowed developers to encrypt information in their applications. The Microsoft Message Queuing service allowed queued data transfer between services. They also built in DirectX support and already had OpenGL support. The Task Manager in NT 4 was like an awesome graphical version of the top command on Unix. And it came with Internet Explorer 2 built in. NT 4 would be followed by a series of service packs for 4 years before the next generation of operating system was ready. That was Windows NT 5, more colloquially called Windows 2000. In those years NT became known as NT Workstation, the server became known as NT Server, and they built out Terminal Server Edition in collaboration with Citrix. And across 6 service packs, NT became the standard in enterprise computing. IBM released OS/2 Warp version 4.52 in 2001, but never had even a fraction of the sales Microsoft did. By contrast, NT 5.1 became Windows XP and NT 6 became Vista, while OS/2 was cancelled in 2005.
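To make the contrast concrete: DOS-era programs typically requested operating system services by firing software interrupts, while NT-era programs call documented Win32 entry points that the kernel mediates and access-checks. Here is a minimal sketch of that API-layer style using Python's ctypes; it assumes a Windows machine, and the message strings are invented:

```python
# A minimal sketch of calling NT's documented API layer instead of
# poking hardware or firing interrupts, as DOS-era software did.
# Requires Windows; ctypes ships with CPython.
import ctypes

user32 = ctypes.windll.user32      # user-mode wrapper over Win32 USER services
kernel32 = ctypes.windll.kernel32  # core OS services: files, memory, processes

# GetTickCount is a classic Win32 call: milliseconds since boot,
# fetched through the API layer rather than from a hardware timer.
print("Milliseconds since boot:", kernel32.GetTickCount())

# MessageBoxW asks the OS to draw UI on our behalf; on NT, the OS,
# not the application, owns the screen.
user32.MessageBoxW(None, "Hello from the Win32 API layer", "NT demo", 0)
```

The point of the sketch is the shape of the calls: named, versioned functions with defined parameters, rather than register setups and INT instructions bound to one machine's memory layout.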

Women to Watch™
Jennifer Ernst, Tivic Health

Women to Watch™

Play Episode Listen Later Feb 7, 2022 42:52


Jennifer Ernst, CEO & Co-Founder of Tivic Health, shared the story behind her title with us on Sunday, February 6, 2022.

Jennifer is a co-founder and has served as the Chief Executive Officer and as a Director of Tivic Health Systems, Inc., a commercial-stage bioelectronic medicine company focused on treating diseases and conditions by modulating the electrical signals carried along various nerve pathways. They focus on non-invasive products that offer consumers a choice in the treatment of inflammation and related conditions.

Jennifer has also served as the CEO and Chief Strategy Officer of the U.S. subsidiary of Thin Film Electronics ASA. She worked for Xerox PARC for over 20 years, where she held multiple go-to-market roles, including as the Director of Business Development. Additionally, she served as a director of FlexTech Alliance, the U.S. national consortium for flexible and printed electronics, for three years, including one year as the Chair.

Jennifer earned her Master of Business Administration degree from Santa Clara University, and has been featured in Inc., The HealthCare Technology Report, and Thrive Global as a female leader who is shaking things up in the industry.

SUE SAYS
"Jennifer grew up as a "late in life" baby, the youngest of four, who spent a lot of time listening to the grownups talking. Her inquisitive nature early on in the area of math and science makes perfect sense today as she leads a company that is on the cutting edge of bioelectric medicine. Being a woman in STEM was never something Jennifer questioned or found a challenge as she continually moved up in her career and landed as an inevitable entrepreneur."

Support this podcast at — https://redcircle.com/women-to-watch-r/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Beta Business
Innovating at Scale with Dr. Bob Metcalfe

Beta Business

Play Episode Listen Later Feb 2, 2022 35:42


Bob Metcalfe invented Ethernet in 1973 while working at Xerox PARC in Palo Alto, California. Over the decades that followed, Metcalfe became an internet pioneer by commercializing Ethernet as the local area network (LAN) standard of the internet. In 2011, Metcalfe and his wife, Robyn, moved to Austin, where they both became professors at The University of Texas at Austin. Metcalfe served as Professor of Innovation in the Cockrell School of Engineering for ten years before retiring from UT Austin in December of 2021. Along the way, Dr. Metcalfe has helped countless campus and local Austin organizations make Austin a better Silicon Valley.

Beta Business is hosted by Nick Spiller, produced by Arturo Rolón, and owned by Beta Business LLC. Follow us @betayourbusiness on IG.

Song Licenses:
Track: Sad LO-FI, Piano Beat [LOFI Music] by MokkaMusic / Early Morning https://youtu.be/5UXbFdfFQ-E
Music provided by "MokkaMusic" channel and https://inaudio.org

Idea Machines
The Nature of Technology with Brian Arthur [Idea Machines #41]

Idea Machines

Play Episode Listen Later Oct 3, 2021 114:11


Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, combinatorial evolution more broadly, and dig into some fascinating technological case studies that informed his book The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics, but I wanted to talk to him because of the fascinating work he's done building out theories of technology. As we discuss, there's been a lot of theorizing around science — with the works of Popper, Kuhn and others. But there's been less rigorous work on how technology works despite its effects on our lives. Brian currently works at PARC (formerly Xerox PARC, the birthplace of personal computing) and has also worked at the Santa Fe Institute and was a professor at Stanford University before that.

Links:
W. Brian Arthur's Wikipedia Page
The Nature of Technology on Amazon
W. Brian Arthur's homepage at the Santa Fe Institute

Transcript:
Brian Arthur [00:00:00] In this conversation, Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, combinatorial evolution more broadly, and we dig into some fascinating technological case studies that informed his book, The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics. Uh, but I wanted to talk to him [00:01:00] because of the fascinating work he's done building out theories of technology. Uh, as we discussed in the podcast, there's been a lot of theorizing around science, you know, with the works of Popper and Kuhn and others. But there has been much less rigorous work on how technology works despite its effect on our lives. As some background, Brian currently works at PARC, formerly Xerox PARC, the birthplace of the personal computer, and has also worked at the Santa Fe Institute and was a professor at Stanford University before that. Uh, so without further ado, here's my conversation with Brian Arthur. Now I'm far less interested in technology, so if anybody asks me about technology, immediately, sure. But the background to this is that mostly I'm known for a new framework in economic theory, which is called complexity economics. I'm not the [00:02:00] only developer of that, but certainly one of the fathers, well, grandfather, one of the fathers, definitely. I was thinking one of the co-conspirators. I think every new scientific theory like starts off as a little bit of a conspiracy. Yes, yes, absolutely. Yeah. This is no exception anyways. So that's what I've been doing. I think I've produced enough papers and books on that. So I've been in South Africa lately, for many months since last year; I got back about a month ago. As these things work in life, I think there's arcs, you know, you're getting interested in something, you work it out or whatever it would be. Businesses you [00:03:00] start, children, there's a kind of arc in everything. And you work all that out. And very often that reaches some completion. So most of the things I've been doing have reached a completion. I thought maybe it's because I'm getting ancient, but I don't think so. I think it was that I just kept working at these things. And for some reason, technology is coming back up. To think about it, in 2009, when this book came out, I stopped thinking about technology. People normally think, oh yeah, you wrote this book. You must be incredibly interested. Yeah. 
But it doesn't mean I want to spend the rest of my life just thinking about it. It's like writing Harry Potter, you know, it doesn't mean you want to do that forever. Wait, like writing the book is like the whole [00:04:00] point of writing the book. So you can stop thinking about it. Right? Like you get it out of your head into the book. Yeah, you're done. So, okay. So this is very much Silicon Valley, and I left academia in 1996. I left Stanford. I think it was, I'm not really an academic, I'm a researcher. Sad that those two things have diverged a little bit. So Stanford treated me extraordinarily well. I've no objections, but anyway, I think I'd been to the Santa Fe Institute and it was hard to come back to standard academia after that. So why should people care about, sort of, not just the output of the technology creation process, but the theory behind technology? Why, why does that matter? Well, [00:05:00] I think that what I find in general, whether it's in Europe or China or America, is that people use a tremendous amount of technology. If you ask the average person what technology is, they tell you it's their smartphone, or it's the gadgetry in their cars or something. Most people are content to make heavy use of technology, and I count everything from frying pans to cars, but we make, directly or indirectly, enormously heavy use of technology. And we don't think about where it comes from. And so there's a few kind of tendencies and biases, you know. We have incredibly good retinal displays these days on our computers. [00:06:00] We can do marvelous things with our smartphone. We switch on GPS in our cars, and very shortly we won't have to drive at all, presumably in a few years. And so all of this technology is doing marvelous things, but for some strange reason, we take it for granted in the sense that we're not that curious as to how it works. People trained in engineering are curious, as I am. I can actually tell you that throughout my entire life, I've been interested in how things work, how technology works, even if it's just something like radios. I remember when I was 10, I, like many other kids, constructed a radio from a few instructions. I was very curious how all that worked. But people in general are not curious. So I [00:07:00] invite them quite often to do the following thought experiment, sometimes when giving talks. All right. Technology. Well, is it important? Yeah, sort of. Does it matter? Probably. Well, it would matter. And a lot of people manage to be mildly hostile to technology, but they're some of the heaviest users. They're blogging on Facebook and railing about technology and then getting into their tech-laden cars and things like that. So the thought experiment I like to pose to people is: imagine you wake up one morning, and for some really weird or malign reason, all your technology has gone super weird. So you wake up in your PJ's and you stagger off to the bathroom, but the toilet, [00:08:00] you try to wash your hands or brush your teeth, there is no sink in the bathroom. There's no running water. You scratch your head and just sort of shrug, and you go off to make coffee, but there's no coffee maker, et cetera. In exasperation, you leave your house and go to get in your car to go to work. But there's no car. In fact, there's no gas stations. In fact, there's no cars on the roads. In fact, there's no roads and there's no buildings downtown, and you're just standing there in naked fields. 
And wondering, where does this all go? And really what's happened in this weird sci-fi setup is that, let's say, all technologies that were cooked up after, say, 1300 (so what would that be, the last 700 years or so?) have disappeared. And you're [00:09:00] just left there. People then said to me, well, I mean, wouldn't there have been technologies then? Sure. If you're a really good architect, you might know how to build cathedrals. You might know how to do some stone bridges. You might know how to produce linen, though you're not walking around with any proper warm clothes, and so on. But my whole point is that if you took away everything invented in the last few hundred years, our modern world would disappear. And you could say, well, we'd still have science. But without technology, you wouldn't have any instruments to measure anything. There'd be no telescopes. Well, we'd still have our conceptual ideas. Well, we would still vote Republican or not, as the case may be. Yeah, and I'd still have my family. Yeah. But how long are your kids gonna [00:10:00] live? Because no modern medicine. Yeah, et cetera. So my point is that not only does technology influence us, it creates our entire world. And yet we take this thing that creates our entire world totally for granted. I'd say by and large, there are plenty of people who are fascinated like you or me, but we tend to take it for granted. And so there isn't much curiosity about technology. And when I started to look into this seriously, I found that there's no ology of technology. There's theories about where science comes from, and there's theories about music, musicology, and endless theories about architecture and, and even theology. But there isn't a very [00:11:00] well-developed set of ideas or theories on what technology is and where it comes from. Now, you might ask, was that true? You know, I could mention 20 books on it in the Stanford library, but when I went to look for them, I couldn't find very much compared with other fields, archaeology or petrology, you name it. So I went to talk to a wonderful engineer at Stanford. I'm sure he's no longer alive, because this was about 15 years ago and he was 95 or so. I couldn't remember his name at first, it's an Italian name, just a second. [00:12:00] Yeah, Walter Vincenti. So I went to see one of the really top-notch aerospace engineers of the 20th century and had lunch with him. And I said, have engineers themselves worked out a theory of the foundations of their subject? And he sort of looked slightly embarrassed. He says, no. I said, why not? And he paused. He was very honest. He just paused. And he says, engineers like problems they can solve. So compared with other fields, there isn't as much thinking about what technology is or how it evolves over time, where it comes from, how invention works. We've had a theory of how new species come into existence since 1859 and Darwin. [00:13:00] We don't have much of a theory at all, at least this was 10, 15 years ago, about how new technologies come into being. I started to think about this. And I reflected a lot, because I was writing this book and people said, what are you writing about? I said, technology. That was always followed by "why?" You know, I mean, I could say I was maybe writing the history of baseball. 
Nobody would've said why. But "why," you know, what could be interesting about that? And I reflected further, and I argue in my book, The Nature of Technology, that technology's not just the backdrop but the whole foundation of our lives. We depend on it. 200 years ago, the average length of life might've been 55 in this country, or 45. [00:14:00] Now it's 80 something. And maybe that's a bad year, like the last year. So, and that's technology, medical technology. We've really good diagnostics, great instruments, very good methods, surgical procedures. Those are all technology. And by and large, they assure you fairly well that if you're born, let's say this decade, in normal circumstances, you're reasonably likely to live to see your grandchildren, and you might live to see them get married. So life is a lot longer. So I began to wonder who did research on technology, and strangely enough, maybe not that strangely, it turns out to be, if not engineers, a lot of sociologists and economists. [00:15:00] And then I began to observe something further. One was that a lot of people wondering about how things change and evolve had really interesting thoughts about science, what science is and how that evolves. People like Thomas Kuhn; many people speculated in that direction, whether they're correct or not. And that's very insightful. But with technology itself, I discovered that the people writing about it were historians, sociologists, and economists, and nearly always they talked about it in general. We have the age of the steam engine, or, when railroads came along, they allowed the expansion of the entire United States economy, they connected east coast and west coast and [00:16:00] so on. So they're treating the technology sort of like an exogenous effect sitting there. I discovered there's some brilliant books by economic historians and sociologists. Ed Constant is one. He wrote about the turbojet. Super good studies about Silicon Valley, how the internet started, and so on. So I don't want to make too sweeping a statement here, but by and large, I came to realize that nobody looked inside technologies. So this is as if you were set in the 1750s, and in biology, certain biologists, they would have been called, what, social scientists? Natural philosophers. That's right. Thank you. They would have been called natural philosophers, and they would have been interested, if they were interested [00:17:00] in different species, say giraffes and zebras and armadillos or something, it was as if they were trying to understand these from just looking outside. And it wasn't until a few decades later, the 1790s, the time of Georges Cuvier, that people started to do that. And they found striking similarities. So something might be a Bengal tiger and something might be some form of cheetah, and you could see very similar structures and postulate, as Darwin's grandfather did, that there might be some relation as to how they evolved, some evolutionary tree. By the time Darwin was writing, he wasn't that interested in evolution. He was interested in how new species are formed. So I began to realize that in [00:18:00] technology, people were just by and large looking at the technology from the outside, and it didn't tell you much. I was at a seminar, I remember, in Stanford, where it was on technology every week. And somebody decided that they would talk about modems. 
Those are the items that used to connect your PC to the internet — now unheard of, actually; they're built into your machine, I'm sure. And we talked for an hour and a half about modems, with an expert from Silicon Valley who'd been behind inventing them, and never was the question asked: how does it work?

Really? Did everybody assume that everybody else knew how it worked?

No.

They just didn't care?

No, not quite. It was [00:19:00] more that you didn't open the box. You assumed there was a modem. Who is adopting modems? How fast were modems? What was the efficiency of modems? How would they change the economy? What was in the box itself was, by and large, never asked about. Now there are exceptions — some economists really do get inside. One of my friends, the late Nate Rosenberg, a superb economist of technological history here at Stanford, wrote a book called Inside the Black Box. But even in that book he didn't really open up many technologies.

So then I began to realize that people really didn't understand much about biology or zoology or evolution, for that matter, until they began to open up organisms [00:20:00] and see similarities between species of toads and start to wonder how these different species had come about — by getting inside. So to set up my book, I decided that the key thing I was going to do — I didn't mention it much in the book itself — was to get inside technologies. If I wanted to talk about jet engines, I wasn't just going to talk about thrust and manufacturers and the people who brought them into being. I was going to talk about heat pumps, anti-surge systems for compressors, different types of combustion systems and materials, whole trains of compressors, [00:21:00] assemblies of compressors, the details of the turbines that drove the compressors. And I found that in technology after technology, once you opened it up, you discovered many of the same components.

Let me hold that thought for a moment. I thought it was amazing that when you look at species from the outside — say kangaroos and giraffes — they don't look at all similar. But they all have the same basic construction: in their case, they're mammals, they have skeletons, they're vertebrates, et cetera. And so with technologies, I decided quite early on with the book that I would understand maybe 25 or so technologies pretty well, and of those [00:22:00] I'd understand at least a dozen very well indeed — meaning spending maybe years trying to understand certain technologies. And then what I was going to do was see how they had come into being and what could be said about them, working from primary sources. I remember calling up the chief engineer on the Boeing 747 and asking him questions personally.

The cool thing about technology, unlike evolution, is that we can actually go and talk to the people who made it — if they're still alive.

Yes. So I decided it would be important to get inside technologies. When I did that, I began to realize that I was seeing the same components [00:23:00] again and again. In some industrial system — say for pumping fresh air into coal mines — you'd see compressors taking in air and piping it down. And again and again you'd see piston engines or steam engines, or sometimes turbines, powering something on the outside.
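One way to picture this move of getting inside technologies — a purely illustrative sketch, not anything from the conversation — is to treat a technology as a recursive assembly of components, each of which is itself a technology. The component names below are assumptions chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Technology:
    """A technology as a purpose plus sub-components (the recursive view)."""
    name: str
    purpose: str
    parts: list["Technology"] = field(default_factory=list)

    def walk(self, depth: int = 0) -> None:
        """Print the component hierarchy -- 'opening the box' level by level."""
        print("  " * depth + f"{self.name}: {self.purpose}")
        for p in self.parts:
            p.walk(depth + 1)

# Illustrative only: a coarse decomposition of a jet engine.
jet_engine = Technology("jet engine", "produce thrust", [
    Technology("compressor train", "raise air pressure", [
        Technology("anti-surge system", "keep compressor flow stable"),
    ]),
    Technology("combustion system", "burn fuel, add energy"),
    Technology("turbine", "drive the compressors"),
])
jet_engine.walk()
```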
On the outside they may look very different; on the inside you are seeing the same things again and again. And I reflected that in biology — say among mammals — we have roughly the same number of genes. Very roughly, we have a Lego kit of genes, maybe 23,000 [00:24:00] in the case of humans, slightly different for other creatures. These genes are put together to express proteins, and to express different bone structures, skeletal structures, and organs in different ways. But they all originate from roughly the same set of pieces: put together differently, expressed differently, actuated differently, they result in different animals.

And I started to see the same thing with technology. Take, in the 1880s, some kind of threshing machine or harvester that worked on steam. Inside there'd be a boiler, there'd be a crank, there'd be a steam engine. If you looked into a railway locomotive, you'd see much the [00:25:00] same thing: boilers and cranks and a steam engine, a place to keep fuel and a way to feed it with coal or whatever it was operating on. So once I started to look inside technologies, it wasn't a wholly different set of things each time, and they ceased to be a mystery.

So the whole theme of what I was looking at — let me see if I can get this into one sentence — is this: technologies are means to human purposes, normally created from existing components at hand. If I want to put up a structure in Kuala Lumpur — a high-rise building — I've got all the pieces I need: pre-stressed concrete, whatever posts are needed to create [00:26:00] foundations, the kinds of bolts and fasteners that hold together a concrete high-rise, cranes and equipment, et cetera, assemblies made of steel to reinforce the whole thing and make sure the structure stands properly. It's not so much that these are all standardized; rather, every technology, I thought, is made of pieces and parts, and they tend to come from the same toolbox, used in different ways. They may be used in Kuala Lumpur in slightly different ways than in Seattle, but the whole idea is the same.

So technology ceased to be a mystery. It was a matter of combining, of putting together, things from a Lego set. In [00:27:00] England, where I grew up, we'd call them Meccano sets. What are they called here? Erector sets?

Well — there are metal ones; I think the metal ones are Erector sets. There are also wooden ones, Tinkertoys. Anyway, I like Legos.

Okay. And you could get different sorts of Lego sets: if you were working with high pressure and high temperature, there'd be different types of pieces; if you were working in construction, there'd be a different set of Lego blocks for that. I don't want to say this is all trivial — it's not a matter of just throwing these things together; there's a very, very high art behind it — but it is not things being born in somebody's attic. In fact, [00:28:00] you're sitting here in what used to be Xerox PARC, and xerography was invented — not by a Mr. Xerox — by someone who knew a lot about processes: a lot about paper, a lot about chemical processes, a lot about developing images,
and about shining light on paper — using that chemically at first and, in modern xerography, electrostatically. So what was born was really this: reflecting light off a known component — marks on paper, as in a copier machine — and focusing it with a lot of lenses, [00:29:00] all well known, onto something fairly new, called a xerographic drum, which was electrostatically charged. You arranged it so that the light affected the electrostatic charges on the drum; as the drum revolved, the differentially charged places picked up particles of printing ink, like dust, which were then imprinted on paper and fused. All of those pieces were known. The man's name was Carlson, I think, by the way. And it's not a matter of somebody working alone in an attic — that particular guy actually was more like that — but usually it's a small team of [00:30:00] people who see a principle for doing something. Say, okay, we want to copy something. You could take a cathode-ray tube and maybe project the image onto that, and then there might be electron-sensitive or heat-sensitive paper, and you could make copies that way. But certainly here at Xerox PARC the idea was: let's use an electrostatic method, combined with toner and a lot of optics, to write on a xerographic drum, and then fuse that under high heat so the particles stick to paper. All of those things were known and given.

So I guess — sorry, there are so many different directions I want to go. One: [00:31:00] on the idea of modularity in technology. It feels like there are almost two kinds of modularity. One is the modularity where you take a slice in time and break the technology down into its different components. And then there's almost a modularity through time, where you have to combine different ideas — but those ideas are not necessarily contained in the final technology; they're precursor technologies. For example, you have the moving assembly line, which was a technology originally for butchering meat. And you had car manufacturing, [00:32:00] and you had the moving assembly line, and Henry Ford came along and fused them together. That feels like a different kind of modularity from the modularity of looking at the components of a technology. Do you think they're actually the same thing? How do you think about those two types of modularity?

I'm not quite sure what the difference is.

So the Ford factory doesn't contain a slaughterhouse, right? It contains some components from the slaughterhouse, I guess. Let's see — [00:33:00] I was thinking this through. It feels like, when you think of the intellectual lineages of technology, a technology does not always contain the thing that inspires it. So there's this kind of evolution over time of the intellectual lineage of a technology that is not necessarily the same as the
direct evolution of the final components of that technology. Does that make sense? Or am I seeing a difference where there is no difference — which could be completely possible?

Well, I'm not sure — maybe the latter. Let me see if I can explain the way I see it; please stop me if it [00:34:00] doesn't fit with what you're talking about. I've been fascinated by the whole subject of invention — where do radically new technologies come from, not just tweaks on a technology? We might have a Pratt & Whitney jet engine in 1996 and then, ten years later, a different version of it with somewhat different components. That's fine — that's innovation, but it's not really invention. Invention is something quite radical. In the 1930s you had air-piston engines — which are a bit like standard car engines — driving propeller systems, and that gets replaced by a jet engine system working on a different principle.

So the question really is — I [00:35:00] began to realize that what makes an invention is that it works on a different principle. When clocks came along — the really primitive ones, in the 1200s or a bit later — they were water clocks, relying on the idea that a drip of water is fairly regular if you set it up that way. Around the time of Galileo — in fact Galileo himself — people realized that a pendulum has a particular regular beat, and that if you could harness that regularity, it might turn into something that can measure time: a clock. That's a different principle. The principle is to use the fact that something on the end of a string, or on the end of a piece of wire, gives you a regular [00:36:00] frequency, a regular beat. So I came to realize that an invention is something carrying out a purpose using a different principle.

Before the Second World War in Britain, in the mid-1930s, people got worried about aircraft coming from the continent. They thought there could well be German bombers coming over to bomb England, and the standard method then for detecting bombers over the horizon was to get people with incredibly good hearing — quite often blind people — and attach to their ears an enormous ear-trumpet affair that ran from the ear to some big concrete collecting amplifier, an acoustic mirror maybe 50 or a hundred [00:37:00] feet across, to listen to what was going on in the sky. A few years later, in the mid-thirties, they began to look for something better, and they made use of a fact well known in physics by then: if you bounced a very high-frequency electromagnetic beam off, say, a piece of metal, the metal would distort the beam — it would kind of echo, and you'd detect the distortion. If it was just a wooden door three miles away, it wouldn't have that effect; but if it was metal, it would. So that's a different principle. You're not listening; you're sending out a beam and trying to detect the echo. And from that you get radar. How do you create such a beam? How do [00:38:00] you switch it off very fast so you can listen for the echo electronically? How do you direct the beam, et cetera? How do you construct the whole thing? How do you get a very high-energy beam — because it needed to be very high energy?
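To put rough numbers on the echo principle — a minimal sketch, assuming the metal target three miles away from the example above:

```python
# Echo ranging: a pulse travels out and back at the speed of light,
# so range = c * round_trip_time / 2.
C = 299_792_458.0           # speed of light, m/s
range_m = 3.0 * 1609.344    # assumed target distance: 3 miles, in metres

round_trip_s = 2 * range_m / C
print(f"Echo returns after {round_trip_s * 1e6:.1f} microseconds")
# ~32 microseconds -- which is why the transmitter must switch off
# extremely fast, or the echo is drowned out by the outgoing pulse.
```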
All of these were problems that had to be solved. So what I began to see was the same pattern behind invention. It usually begins with an outstanding problem: how do we detect enemy bombers that might come from the east, from the continent? How do we produce a lot of cars more efficiently? And then comes finding some principle for doing it — meaning the idea of using some phenomenon. In the case of the ear trumpets it was an acoustic phenomenon: sounds could be greatly amplified for somebody's ear if you directed them into a big [00:39:00] concrete collector. A different way: put out high-frequency radio beams and listen for an echo. Once you have the principle, it turns out there are sub-problems that go with it. In the case of radar: things are traveling at the speed of light, so how do you switch the beam off fast enough that the echo isn't drowned out by the original signal? Then you're into another layer — solving another problem within an invention.

I could talk about other ways to look at it, but my way of looking at an invention is that there is nearly always a strong social need. What do we do about COVID? The time is [00:40:00] February, March 2020. Okay, we can do a vaccine. The vaccine might work on a different principle — maybe messenger RNA rather than the standard sort of vaccine. So you find a different principle, but even getting that to work brings its own sub-problems. And then, with a bit of luck and hard work, usually over several months or years, you solve the sub-problems, you manage to put all of that into material terms — not just conceptual ones — and make it into some physical thing that works. And you have an invention.

To double-click on that: couldn't you argue that the solutions to those sub-problems are also, in themselves, inventions — and so it's just inventions all the way down? [00:41:00]

Great point — I hadn't thought of that. Possibly: if the sub-solutions need to use a new principle themselves, then you'd have to invent how they might work. But very often they're standing by. Let me give you an example — I don't want to be too technical here.

Please, go ahead.

Okay, here we go. It's 1972, here at Xerox PARC where I'm sitting, and there's an engineer — Gary Starkweather is his name — a brilliant engineer, trained in lasers and optics, master's and PhD, really smart guy. And he's trying to [00:42:00] figure out how to print. If you have an image in a computer, say a photograph, how do you print it? At that time — in fact, I can remember that time — there were things called line printers, like huge typewriter systems. There was one central computer; you put in your job; the output was figured out on the computer and sent to a central line printer, like a big industrial typewriter, which clanked away on paper, and somebody tore off the paper and handed it to you through a window. Starkweather wondered: how could you print text, and more than that, images, without using a typewriter? It's very hard to do images with typewriters, and very slow. So he [00:43:00] cooked up a principle — he went through several, but the one he finished up using was the idea that you could take the information from the computer, say a photograph, and use computer processors to send it to a laser.
The laser's beam would be incredibly highly focused, and he realized that if he could use the laser beam to — the jargon is — paint the image onto a xerographic drum, so that it electrically charged the drum, then particles would stick to the charged places, and the rest would be xerography, like a copier machine. He was working at Xerox PARC; [00:44:00] this was not a huge leap of the imagination. But there were two immense sub-problems worth mentioning — two huge problems.

You're trying to get these black dots to write on a xerographic drum — to paint them onto the drum. I hope this isn't obscure.

No, this is great. And I'll include some pictures. This is great.

All right. Suppose I'm painting a photograph from the computer, through a processor, sent to a laser. The laser has to be able to switch on and off fast if it's going to write onto the drum, and if you work out how fast it would have to operate commercially, as Starkweather did, you come to the conclusion that he'd have to be able to switch his [00:45:00] laser on and off — black or white — 50 million times a second.

Okay, so 50 megahertz.

But nobody had thought of modulating, of doing that sort of switching, at that speed. So he had to solve that — a major problem. He solved it with circuitry: he got a sort of piezoelectric device — don't ask — an electro-optic modulator that could switch on and off, and he could send signals to the modulator, and the modulator switched the laser on and off, making black or white as needed. So that was number one. And that, in your terms, required an invention — he had to think of a new principle to solve the problem. So: how do you take computer images [00:46:00] and print them onto paper? That required a new principle. Switching a laser on and off 50 million times a second required a new principle. So those are two inventions.
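A back-of-the-envelope check on that 50-million-times-a-second figure — the page size, resolution, and print speed below are assumptions for illustration, not numbers from the conversation:

```python
# How fast must the laser blink to paint a page dot by dot?
dpi = 600                          # assumed dots per inch
page_w_in, page_h_in = 8.5, 11.0   # assumed US Letter page
pages_per_min = 60                 # assumed commercial print speed

dots_per_page = (dpi * page_w_in) * (dpi * page_h_in)
dots_per_sec = dots_per_page * pages_per_min / 60
print(f"{dots_per_sec / 1e6:.0f} million on/off decisions per second")
# ~34 million/s -- the same order as Starkweather's 50 MHz requirement.
```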
There's a third one, another sub-problem. The device he had to do this with, by the way, was as big as one of these rooms in 1972. If I have the numbers right, a decent laser would cost you about $50,000 — and you could have bought a house here for that then. It would be the size not of a house, but of a pretty big lab apparatus — not something inside a tiny machine, but an enormous piece of equipment. So how do you take [00:47:00] a laser on the end of some huge apparatus, which you're switching on and off 50 million times a second, and scan its beam back and forth? There's huge inertia; it's an enormous thing. And believe it or not, he solved that — not with smoke, but with mirrors. Instead of moving the laser, he arranged a series of mirror facets on a revolving piece of apparatus. All he had to do was point the beam at the mirror and switch it on and off very quickly, and the mirror would sweep the beam, like a lighthouse beam, right across the page. Then the next [00:48:00] facet of the revolving mirror would come along and do the next line. So how do you do that? Well, that part was easier. But then he discovered that the different facets on the mirror would have to line up to some extraordinarily high precision — higher than they could be manufactured to. That's another sub-problem.

To solve the alignment problem, he used optics. Here's one facet of the mirror; here's the beam. It sweeps the beam across the page, switching it on and off as need be. Then the next facet comes round and sweeps the same beam, and you want the lines to line up extraordinarily precisely. You couldn't do it with [00:49:00] manufacturing technology, but you could do it with optics: if there's a slight discrepancy, you correct it. He did it with optics — he really knew what he was doing with optics in the lab. Using different lenses, condensing lenses, he solved that problem. It took two or three years, and it's interesting to look at the lab notebooks he kept.

But for me — let me see if I can summarize this — there is no such thing as Gary Starkweather scratching his head, saying wouldn't it be lovely to print images off the computer without a big typewriter, then sitting in his attic by himself for three months and coming up with the solution. Not at all. What he did was envisage a [00:50:00] different principle: write the image, using a highly focused laser beam, onto the xerographic drum — the rest is just a copier machine. But to do that, you have to switch the laser beam on and off: a problem. So at a lower level he had to invent a way to do that. And he had to invent a principle for scanning the beam across the drum, maybe 50 or a hundred times a second, without moving the entire apparatus — and the principle he came up with was mirrors. And then I could go down another level: you have to align your mirrors.

So what I discovered — let me put this in a nutshell — is that [00:51:00] invention isn't a matter of doing something supremely creative in your mind. It may finish up looking that way, and it might be very creative, but all invention is basically problem-solving. To take something more mundane: imagine I live here in Palo Alto, I work in the financial district in San Francisco, and my car's in the shop getting repaired. How am I going to get to work tomorrow? The level of principle is to say: okay, I can see an overall concept. If I can get to the Caltrain station, I'll go in on the train. But hang on — how do I get to the station? That's a sub-problem. [00:52:00] Maybe my daughter or my wife can drive me. At the other end I can get an Uber, or a colleague could pick me up, but then I'd have to get up an hour earlier. Or maybe I'll just stay home and work from home, which is more the solution we'd use these days. But how will that work? Et cetera. Invention is not much different from that — in fact, that's the heart of invention. If we worked out that problem of getting to work when your car is gone, nobody would stand up and say it was brilliant, yet you've gone through exactly the same process as the guy who invented the polymerase chain reaction — again, I can't recall his name; I'm getting older. [00:53:00] But anyway — what's really important in invention, and I think this goes to your mission,
if I understand it rightly, is that the people who produce inventions are people who are enormously familiar with what I would call functionalities. How do you align beams using optical systems? How do you switch lasers on and off fast? The people who are fluent at invention are always people who know huge amounts about those functionalities. I'm trained as an electrical engineer. What are you?

I'm trained as a mechanical engineer — robotics.

Oh, brilliant. So what's really important [00:54:00] in engineering — at least what they teach you, apart from all that mathematics — is to know certain functionalities. You can use capacitors and inductors to create electronic oscillations, regular waves. You can smooth out a varying voltage by using induction in the circuit. You can store energy in capacitors. You can steer a beam using magnets. There are hundreds of such things. You can amplify signals; you can use feedback to stabilize things. So there are many functionalities, and learning engineering is a bit like becoming fluent in this set of functionalities — not learning anything that's supposedly [00:55:00] creative.

What might that be?

Say, learning to do plumbing — learning to work as a plumber, a good, true engineer. It is a matter of becoming fluent: you want to connect pipes, loosen pipes, unclog things, reroute a piping or pumping system, add a pump. There are many different operations, and you're dealing with flows of liquids, usually, and piping systems and pumping systems and filtration systems. After maybe three or four years of a real apprenticeship, not only can you do it, you can do it unthinkingly. You know the exact gauges, you know the pieces and parts, you know where to get the parts, you know how to set them up. You look at [00:56:00] some problem and say, oh, okay, the real problem here is that the piping diameter is wrong; I'm going to replace it with something a bit larger, and here's how.

Being good at invention is no different. People like Starkweather — who, I think, is still alive — know all about mirrors and optical systems; above all, he knew an awful lot about lasers, and a lot about electronics. He was fluent in all of those. If we're not fluent ourselves, we stand back and say, wow, how did he do that? But it's a bit like a poem written in French. I don't speak French — how did he [00:57:00] do that? If I spoke French, I might see how it was done.
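One concrete instance of the "capacitors and inductors create oscillations" functionality mentioned above — a minimal sketch of the resonant frequency of an ideal LC circuit, with assumed component values:

```python
import math

# Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C)).
L = 10e-6    # assumed inductance: 10 microhenries
C = 100e-12  # assumed capacitance: 100 picofarads

f = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f / 1e6:.2f} MHz")
# ~5 MHz -- choose L and C and you've 'created' a regular electronic wave.
```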
Okay — so this actually touches on an extension of your framework that I wanted to run by you. What you were just describing, I would describe as the affordances and constraints of different pieces of technology: people who invent things are intimately familiar with the affordances and constraints of different technologies, different systems. And the question I have — which I think is an open question — is whether there's a way of describing or encoding these affordances and constraints [00:58:00] that makes creating inventions easier. Very often you see someone who knows a lot about the affordances in one area, one discipline, and they come over to some other discipline and say, wait a minute — there's an analogy here. You have this constraint over here, this sub-problem, and I know, from the affordances of the things I'm really familiar with, how to solve it. So through this framework of modularity and constraints and affordances, is it possible to make the process easier, or less serendipitous?

Yeah — in a couple of ways. One is that I [00:59:00] think you quite often see a pattern where some principle is borrowed from a neighboring discipline. You were saying that Henry Ford took the idea of the conveyor line from the meat industry and, by analogy, used the same principle for manufacturing cars. But to get that to work in the car industry, the limitations are different. Cars are a lot heavier: a whole side of beef is probably 300 pounds or so, but a car could be a ton and a half. So you have to think of different arrangements. In the meat industry there are two ways to do conveyor lines: you [01:00:00] can have a standard belt — a rubber thing or whatever — moving along at a certain speed, or you can have the carcass suspended from an overhead chain system, the carcass cut in half and hanging, so you can work on it pretty much vertically. It was that second system that tended to get used for cars. So the things themselves don't translate; principles translate from one area to another, and that's a very important mechanism.

So if you wanted to enhance innovation, I think the thing would be to set up some institution, or some way of looking at things, that asks: there are well-known principles for doing this in industry X — how would I do something equivalent in a different industry? For [01:01:00] example, blockchain is basically a way of validating transactions made privately between two parties without using an intermediary like a bank. You could say, here's how this works for Bitcoin trading, and somebody could come along and say, okay, I want to validate art sales using a similar principle — I don't want to have to go to some central authority and record things there, so maybe I can use blockchain for fine-art sales. In fact, that's happening. So you see an enormous amount of analogous transfer of principles from [01:02:00] one field to another. We tend to talk about inventions being adopted — at least we do in economics — so you could say the art-trading system adopts blockchain. But it's not quite that; it's something more subtle. A new principle, or a new fairly general technology, comes out — say blockchain — and then different industries, different sets of activities, encounter it. They don't adopt it; they encounter it.
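The principle of validating transactions without an intermediary rests on chaining records together with cryptographic hashes, so that tampering is detectable by anyone holding the chain. A toy sketch of that idea — not a real blockchain or any production API:

```python
import hashlib
import json

def add_record(chain: list[dict], payload: dict) -> None:
    """Append a record whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; a single edited record breaks the chain."""
    prev_hash = "genesis"
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev_hash},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

chain: list[dict] = []
add_record(chain, {"sale": "artwork #12", "price": 5000})
add_record(chain, {"sale": "artwork #13", "price": 7500})
print(verify(chain))                  # True
chain[0]["payload"]["price"] = 1      # tamper with an old record
print(verify(chain))                  # False -- the edit is evident to everyone
```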
Say the medical-insurance business encounters it: I can record transactions this way, without an intermediary — in particular, without going through banking systems — and then inform the [01:03:00] insurance companies. They're encountering this new principle and wondering how they can use it. But when they do, they're not just taking it off the shelf; they're incorporating it into what they already do.

Here's an example. GPS comes along — quite a while ago, the 1970s, I'm sure. In principle it uses atomic clocks on satellites. Basically, it's a way of recording time exactly, using multiple satellites whose positions are known exactly at the same time, allowing even for tiny effects of relativity; you can triangulate and figure out precisely where something is. Now, that just exists. By the [01:04:00] time different industries — say ocean freight shipping — encounter it, they're not just putting a little GPS unit in front of the binnacle; it's actually built in, and it becomes part of a whole navigational system.

So what happens in cases like that is that some invention, some new possibility, becomes a component in what's already done. Just as in banking around the 1970s: you could suddenly process customer names, client names, and monetary amounts fast with electronic computers — in those days they were [01:05:00] called data-processing units; we don't think of it that way now. That changed the banking industry significantly. By 1973 there was a futures market in Chicago, where you were dealing with, say, pork-belly futures and the like, because computation had come along. So the pattern is: an industry exists using conventional ideas; a new set of technologies becomes available; but the industry doesn't quite adopt it — it encounters it and combines it with many of its own operations. Banking had been recording clients in ledgers and with machinery; it had been facilitating transactions, [01:06:00] maybe on paper. Computation can now do that automatically. So some hybrid thing is born out of banking and computation, and that goes into the Lego set.
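To make the triangulation idea behind GPS concrete — a toy two-dimensional sketch with made-up satellite positions and a made-up receiver position; real GPS works in three dimensions and also solves for the receiver's clock error:

```python
import math

# Each 'satellite' broadcasts its position and send time; the receiver
# converts measured travel times into ranges (c * dt) and intersects circles.
sats = [(0.0, 20_000_000.0),
        (15_000_000.0, 18_000_000.0),
        (-12_000_000.0, 17_000_000.0)]
truth = (1_500_000.0, 2_000_000.0)            # unknown receiver position
ranges = [math.dist(s, truth) for s in sats]  # what timing would give us

# Subtracting the first circle's equation from the others yields linear equations.
(x0, y0), r0 = sats[0], ranges[0]
a, b = [], []
for (xi, yi), ri in zip(sats[1:], ranges[1:]):
    a.append((2 * (xi - x0), 2 * (yi - y0)))
    b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)

# Solve the 2x2 linear system with Cramer's rule.
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
x = (b[0] * a[1][1] - a[0][1] * b[1]) / det
y = (a[0][0] * b[1] - b[0] * a[1][0]) / det
print(f"Estimated position: ({x:.0f}, {y:.0f})")  # recovers (1500000, 2000000)
```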
Actually, sort of related to that, something I was wondering: do you think of social technology as technology? Do you think it follows the same patterns?

What do you mean, social technology?

I think a very obvious example would be mortgages. Mortgages had to be invented, and they allow people to do things they couldn't do before, but they're not technology in the sense of something built. You can create a mortgage with you and me and a piece of [01:07:00] paper; it's something that exists between us. Or democracy. So I feel like at one end there are things like new legal structures or new financial instruments that feel very much like technology, and at the other end there are vaguer things, like new social norms.

Great question — and it's something I did have to think about. Things like labor unions, nation states —

Yes, exactly.

— democracy itself, and in fact communism: all kinds of things get created that don't look like technologies. They don't have the same feel as physical technologies; they're not humming away in some room, they're not under the hood of your [01:08:00] car. And there are things like insurance for widows, and pension systems — many social technologies — even things like Facebook, platforms for exchanging information. Sometimes, very occasionally, things like that are created by people sitting down and scratching their heads. That must have happened to some degree in the 1930s, when Roosevelt said there should be a social security system. But that wasn't invented from scratch either.

To get at the nitty-gritty: what tends to happen is that some arrangement arises. Somebody — maybe a feudal lord — says: okay, you're my trusted gamekeeper; you can have a [01:09:00] rather nice house on my estate. You haven't got the money to purchase and build it; I will lend you the money, and you can repay me as time goes by. In fact, so many of these things have French names — "mortgage," which, as far as my school French goes, is literally something dying, a dead pledge. A lot of those things came about in the Middle Ages. There are other things, like what happens when somebody dies — probate. Again, these go back centuries and centuries. I believe the way they come about is not by deliberate invention. They come about by something being the natural thing to do; [01:10:00] then that natural thing is used again and again, it gets a name, and somebody comes along and says, let's institutionalize this.

I remember reading somewhere about the Middle Ages: there was some guild of traders — I think this was in London — who didn't feel they were being treated fairly. So they decided to withhold their services. I don't know what they were supplying; it could have been carriage transport along the streets or something — some of these people were called valets, again very French. So they withheld their services. That wouldn't have been the first time; [01:11:00] it goes back to Egypt, people withholding their services. But it got into circulation as a meme, as a repeated thing, and then somebody says: okay, we're going to form an organization, and our guild is going to take this on board as a usable strategy, and we'll even give it a name. It came to be called going on strike.

So social invention takes place just by something being the sensible thing to do. The lord gives you the money to build your own house; you pay that person back over many years, [01:12:00] and the loan is put to its death — mortgaged. So I think what happens with these social inventions is that the sensible thing to do gets a name, gets instituted, and then something's built around it.
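The arithmetic behind that repay-as-time-goes-by arrangement is the standard fixed-payment amortization formula; the loan figures below are assumptions for illustration:

```python
# Fixed monthly payment that 'kills' (amortizes) a loan over n months:
#   payment = P * r / (1 - (1 + r) ** -n)
principal = 300_000    # assumed loan amount
annual_rate = 0.06     # assumed 6% yearly interest
n_months = 30 * 12     # assumed 30-year term

r = annual_rate / 12
payment = principal * r / (1 - (1 + r) ** -n_months)
print(f"Monthly payment: ${payment:,.2f}")   # ~$1,798.65
# Each payment covers the month's interest plus a bit of principal,
# until the pledge is 'dead' -- the mort-gage paid off.
```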
Well, one could also say that many inventions are the sensible thing to do — someone realizes, oh, I can use this material instead of that material, some small tweak that then enables a new set of capabilities.

In that case I wouldn't really call it an invention. The vast majority of innovations — 99-point-something percent — are tweaks: we'll replace this material, [01:13:00] and so on.

Well, why doesn't that count as an invention? If it's a different material, why doesn't that also count as a new principle — as bringing a new principle to the thing?

The way I define a principle: a principle is the idea of using some phenomenon. You could say there's a sliding scale, if you insist. Up until about 1926 or 1930, aircraft were made of wooden members covered with doped canvas, the dope giving you waterproofing and so on. Then a different way of doing it came along, when they discovered that with better engines you could have heavier aircraft, so you could make the skeleton out of [01:14:00] metal, and the cladding might be metal as well — modern metallic aircraft. There's no new principle there, but there is a new material. You could argue the new material is a different principle, but then you're just talking linguistics.

So you would not consider the transition from cloth aircraft to metal aircraft to be an invention?

No.

Huh.

I mean, sure, it might be a big deal, but I don't see it as a major invention. Going from air-piston engines to jet engines — that's a different principle entirely. So I have a fairly high bar for different principles: are you using a different phenomenon? That's my criterion. If you have a very primitive clock, [01:15:00] in the 1620s or 1640s, that uses a string and a bob on the end of the string, and you replace the string with a wire or a rigid piece of metal, you're not really using a new phenomenon — but you are using different materials. And much of the story of technology isn't inventions; it's these small but very telling improvements in materials. In fact, jet engines weren't very useful until you got combustion systems that took in aircraft fuel, atomized it, and set the whole thing alight — the early systems melted down. With better materials, you could make it work. So there's a difference between a primitive technology and [01:16:00] one built out of better components.

I would put it something like this. Take what the car looked like in 1905: is it a different thing than using horses? Yes, because it's automotive — there's an engine, built in. So for my money it's using a different principle.

What if you took the horse and put it inside the carriage — if you built the carriage around the horse, would that be automotive? What if I had a horse on a treadmill, and the treadmill was driving the wheels of the vehicle with the horse on it?

Then I think it would be less of an invention. I don't know. I find it very useful to say [01:17:00] that radar uses a different principle from people listening. You could say, well, people listening are listening for vibrations, and so is radar — just electromagnetic vibrations. What's different, for my money, is not so much the word "principle."
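The pendulum regularity mentioned earlier comes down to one phenomenon: for small swings, the period depends only on the pendulum's length, T = 2π√(L/g). A minimal sketch with an assumed one-metre pendulum:

```python
import math

# Small-angle pendulum period: T = 2 * pi * sqrt(L / g).
g = 9.81   # gravitational acceleration, m/s^2
L = 1.0    # assumed pendulum length, metres

T = 2 * math.pi * math.sqrt(L / g)
print(f"Period: {T:.2f} s")   # ~2.01 s: a steady beat to count time with
# The beat barely depends on the bob's material or the swing's size,
# which is exactly the regularity a clock needs to harness.
```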
All technologies are built around phenomena that they're harvesting or harnessing to make use of, and if you use a different set of phenomena in a different way, I would call it an invention. So if you go from a water wheel, which uses water and gravity to turn something, to a steam engine — you could argue, well, aren't [01:18:00] both using phenomena? In the first case you're using the weight of water, gravity, and the fact that you can turn something; in the second you're using the different principle of heating something and having it expand. I would say those are different principles. And if you ask whether something is a different principle, I go back to: what phenomena are you using? I mean, if you wanted to be part of a philosophy department, you could probably question every damned thing.

Yeah — I'm actually not trying to challenge it from a semantic standpoint. I think it's about really understanding what's going on. There's a real question of whether [01:19:00] it's a fractal thing, or whether there are multiple different processes going on.

Well, maybe I'm just too simple, but when I started to look at invention, the state of the art was pathetic. It wasn't very good, because all the accounts of invention I was reading had a step where something massively creative happens, and that wasn't very satisfactory. Then there was another set of ideas that were Darwinian: if you have something new, like the railway locomotive, it must have come out of variations somehow happening spontaneously, which might have been sufficiently different to qualify as radically new inventions. That doesn't do it for me either, because in 1930 you could have varied [01:20:00] radio circuits until you were blue in the face — you'd never get radar.

So what a technology fundamentally is, is the use of some set of phenomena to carry out some purpose. There are multiple phenomena, but I would say — maybe speaking slightly too loosely — that the principal phenomenon you're using, the key phenomenon, constitutes the concept or principle behind that technology. If you have a sailing ship, you could argue, well, it displaces water, it's built not to take in water, it's got a cargo space — but actually, for sailing ships, the key principle is to use the motive power of wind in clever ways to propel the [01:21:00] ship. If you're using steam and take the sails down, you're using, in my opinion, a different principle, a different phenomenon: not the motive power of wind, but the energy in coal or oil, used in clever ways to move the ship. So I see those as two different principles. You could say, well, we also changed the steering system — does that make it an invention? It makes maybe that part of it an invention. But overall, the story I'm giving is that inventions come along when you see a different principle — a different set of phenomena — that you want to use for some given purpose, and you manage to solve the problems to put that into reality.

Yeah, I completely agree [01:22:00] with that.
The thing that I'm interested in — again, going back to that modular view — is that many layers down, the tinkering, the innovations, are based on changing the phenomena being harnessed, but much farther down the hierarchy of modularity. In sailing ships, you introduce lateen sails: you've invented a new sail system, but you haven't invented a new kind of ship. So you've changed the phenomenon, but —

Yeah, I think the distinction you're making is totally on target. When you introduce lateen sails, you have invented a new [01:23:00] sail system, but you haven't invented a new principle of the sailing ship. It's still a sailing ship. So I think you're getting into details that are worth getting into. At the time I was writing this — and I'm not trying to be defensive here, I hope —

I'm not trying to be offensive in any way!

— bear with me; I haven't thought about this for ten years or more. The whole field simply said: innovation happens. Nobody's quite sure what innovation is, but we have a vague idea that it's new stuff that works better. In the book, I make a distinction around radically new ways to do something. It's radically new to propel a ship by a [01:24:00] steam engine — even if you're using paddles — versus by wind flow. However, not everything is radically new. Look at any technology, computers or cars: the actual carburetor system in the 1960s worked like a perfume sprayer, spraying gasoline, atomizing it, and setting it alight. Now we might have some sort of turbo-injection system — working on maybe not a very different principle, but much more efficiently. So you might have a technology whose insides are changing enormously while the [01:25:00] overall idea of the technology hasn't changed much. Radar would be a perfect example. So would the computer: the computer kept changing its inner circuitry and materials, and those inner circuits got an awful lot faster, and so on. You could take the circuitry and say: sometime around 1960 the circuits ceased to be vacuum tubes and became transistors mounted on boards; then, sometime in that decade, they became integrated circuits. Was the integrated circuit an invention? Yes — at the circuit level. At the computer level, it's a better component.

I hope so. I guess as a closing question: is there work that you [01:26:00] hope people will do based on what you've written? Is there a line of work you want people to pursue — to take the framework you've laid out and run with it? Because I feel like there's so much more to do. Do you have a sense of what that program would look like — what questions are still unanswered in your mind that you think are really interesting?

I think that's a wonderful question. Off the record, I'm really glad you're here, because
it's like visiting where you grew up.

I am — I'm the ghost of books past.

Oh, I don't know. It's funny — I was interviewed a month or two ago on [01:27:00] this subject. I can send you a link if you want.

Please — I listen to tons of podcasts.

Anyway, I went back and read the book —

And you were like, wow, I'm really smart?

Well, it had that effect. And then I thought, God, you know, it could have been a lot better written. The year it came out, Free Press in New York — actually Simon & Schuster — put it up for a Pulitzer Prize. That really surprised me, because I didn't set out to write something well written; I just kept trying to clarify the thing. But to come back to your question, my reflection is this. The purpose of my book was to actually look inside technologies: [01:28:00] to open them up, look at the inside components, how those work, and how ultimately the parts of a technology are always using some phenomenon — you know, we can ignite gasoline in a cylinder in a car, and it expands rapidly and produces force. There are all kinds of phenomena. These were the things I wanted to say.

And the book has had a funny effect. It has a very large number of followers, meaning people have read it, found in it a way of thinking about technology, and are grateful that somebody came along and gave them a way to look at it. But — let me say this carefully — I've done other things in research [01:29:00] that have had far more widespread notice. The study of technology, as I was saying earlier, is a bit of a backwater in academic studies. It's eclipsed — dazzled — by science. If something wonderful happens — we put people on the moon, we come up with artificial intelligence — it's vaguely supposed to have been done by scientists. It's not; it's done by engineers, who are very often highly conversant with both science and mathematics. But as a matter of prestige, a [01:30:00] lot of what should have been theories of technology — where it comes from — has gone into theories of science. And I would simply point out: no technology, no science. You can't do much science without telescopes, crystallography, X-ray systems, microscopes. You need all of these technologies to give you modern science. Without those instruments, we'd still have technology, we'd still have science — but at the level of the Greeks, which would

Luminary
Mitch Waldrop on The Dream Machine: Part 3 – JCR Licklider, Xerox PARC, and TCP/IP

Luminary

Play Episode Listen Later Aug 30, 2021 62:10


Our second season launches with a three-part series featuring the preeminent Mitch Waldrop. We discuss the history, ideas, and origins […]