What if the next big leap in technology isn't just another gadget, but a transformative bridge to a smarter, more efficient future? In this eye-opening episode of Thrive LouD with Lou Diamond, meet Edward Crump, the remarkable inventor of Amazon Alexa and a significant force behind innovation at companies like Netflix and Nike. Now serving as the CTO of Champion Venture Partners, Crump shares the secrets behind his incredible journey through technology and entrepreneurship.

Key Highlights:
Discover how a lifelong technologist like Ed Crump started his career by selling his first program at the age of 12 and eventually played a pivotal role in creating Amazon Alexa.
Learn about the pioneering projects he led at major companies like Netflix and Nike, and how he transitioned to the venture capital world with Champion Venture Partners.
Gain insights into Crump's vision for the future, where personal AIs and smart environments revolutionize our daily lives while respecting data privacy.
Understand the importance of interdisciplinary collaboration, as demonstrated by Champion Venture Partners, in driving innovative solutions in sports science, health, and wellness.

Join Lou Diamond and Ed Crump on this enlightening exploration into the future of technology and innovation, and discover how you can be a part of this thrilling journey.

TIMESTAMPED OVERVIEW
00:00 Entrepreneurial Journey: From Startups to Amazon
06:36 Tech Giants' Race: Amazon Outpaces Apple
09:52 Leveraging Innovation in Digital Experiences
10:40 Future of Personal AI and Data
14:17 Data Privacy Concerns in AI
19:57 New Venture Collaboration Insights
21:20 "Unique Access to Sports Ecosystem"
25:48 "Passion for Management Collaboration"
28:40 "Digital Twins: A Matrix Parallel"
30:52 Sugar-Free Mental Clarity
35:44 "Ed Crump: Coolest Guest Spotlight"

Connect with Ed
Check out the Champion Venture Partners website.
In this episode of Generation AI, hosts JC Bonilla and Ardis Kadiu explore the rapidly evolving landscape of personal AI agents. They analyze four types of agents - general task agents, researcher agents, workflow agents, and code generation agents - before diving into the recent sensation around Manus AI. The hosts discuss how AI agents are shifting from simple task execution to complex autonomous workflows, potentially transforming how work gets done across industries. The conversation highlights a critical turning point where AI can now operate with minimal human supervision, challenging previous assumptions about the "human in the loop" paradigm.

Introduction to AI Agents and Recent Developments (00:00:00)
JC and Ardis introduce the episode's focus on AI agents
Brief mention of Manus AI creating waves in the tech world
Setting the stage for understanding personal AI agents in 2025
Discussion of how these agents will shape workflows and productivity

The Evolution of AI Agents (00:01:48)
The shift from frontier models to foundational agent AI
How 2023 saw the emergence of autonomous personal agents
Requirements for agents: reasoning, adaptation, and action capabilities
Time compression in AI development with DeepSeek and Grok 3

Categories of Personal AI Agents (00:05:52)
Four main types of agents identified by the hosts
General task agents for personal productivity and scheduling
Researcher agents for deep knowledge gathering and synthesis
Workflow agents for orchestrating multi-step processes
Code generation agents for software development

Deep Research Agents in Action (00:09:41)
Ardis shares how research agents have changed his workflow
Examples of using AI for market research and data gathering
Tools like Perplexity offering free research capabilities
Claude and Gemini providing deep research features
The value of referenced, data-backed insights for decision making

Workflow Agents: Connecting the Pieces (00:15:10)
Explanation of how workflow agents orchestrate multiple processes
Zapier and Make.com for workflow automation
Relevance for personalization
Lindy.ai for Gmail integration and automation
The ability to mix and match between different LLMs for optimal results

Code Generation Agents (00:18:00)
The acceleration of AI code writing capabilities
Reference to Anthropic CEO's claim about AI writing 90% of code soon
Cursor and Anthropic Code Agent tools
Lovable - no-code solution for building applications
Bolt as another code generation platform
Introduction to "vibe coding" - collaborating with AI on code without technical expertise

Manus AI: A Game-Changing Personal Agent (00:24:45)
Overview of what makes Manus different from previous agents
The massive waitlist and $10,000 access codes showing huge demand
How Manus provides autonomous task execution with minimal human input
The open-source alternative "Anus" created using Manus itself
The ability to work continuously in the background and notify when complete

The Future of Work with Autonomous Agents (00:32:16)
Implications for knowledge work and job roles
The shift from "human in the loop" to "human as supervisor" models
Ardis acknowledging the difficulty in claiming AI won't replace certain roles
Discussion about which jobs might remain relevant in the AI future
How autonomous agents are changing organizational design

Closing Thoughts on the AI Landscape (00:39:34)
Reflection on how major AI releases create excitement and commercialization races
Anticipation of more autonomous agent solutions following Manus
The rapid pace of change in the AI space
How these tools will continue to democratize access to advanced capabilities
Expectations for what might emerge in the coming months

- - - -

Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx

About The Enrollify Podcast Network:
Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.

Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way. Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register
The age of the personal AI supercomputer has begun. You heard that right. I don't think I'm exaggerating, because this edition of Metacheles is going to age damn well. This episode is about Nvidia's developer conference GTC 2025 – and why new AI desktop systems like the ASUS Ascent GX10 can be your personal supercomputer. Local models, no more cloud dependency, data privacy, and incredible performance: this is what the future could look like – right on your own desk.
Support via Paypal
Would you want a personal AI that acts as your twin mind? I've always dreamed of never forgetting anything, and of instantly and effortlessly remembering anything I need, right away. Now, an AI-driven app called TwinMind might help me do something similar.

In this episode of TechFirst we chat with Daniel George, the CEO of TwinMind. This innovative AI app aims to become your second brain, capturing and processing your life events in real time. We chat about George's inspiration behind TwinMind, its features, future vision, and the LLM tech making it possible. We also chat about privacy and security concerns.

00:00 Introduction to AI and TwinMind
00:51 How TwinMind Works
01:37 Real-World Applications and User Experience
03:37 Privacy and Security Concerns
11:06 Technology Behind TwinMind
15:17 Future of AI and TwinMind's Vision
21:08 Conclusion and Final Thoughts
Summary
In this episode of EGGS: The Podcast, we're joined by Suman Kanuganti, co-founder and CEO of Personal AI. The hosts discuss Suman's journey from engineering to entrepreneurship, the philosophy behind Personal AI, and how it aims to augment human memory and cognition. Suman explains the ethical implications of AI, the importance of data ownership in the Web3 era, and how Personal AI can revolutionize content creation and community engagement. The conversation also touches on the onboarding process for users and the future prospects of personal AI technology.

Takeaways
Suman Kanuganti is passionate about solving human problems through technology.
Personal AI aims to augment human memory and cognition.
The technology is designed to respect user privacy and data ownership.
Ethics in AI revolves around ownership and attribution of personal data.
Personal AI is about extending human performance rather than replacing it.
The future of AI includes creating personal legacies and memories.
Web3 technology enhances data ownership and privacy for users.
Content creators can leverage Personal AI for community engagement and monetization.
Onboarding to Personal AI involves user interaction and data integration.
The future of AI is about individual intelligence rather than collective intelligence.

Chapters
00:00 Introduction to Suman Kanuganti and Personal AI
03:31 The Journey of an Entrepreneur
06:47 Philosophy Behind Personal AI
09:13 How Personal AI Works
12:37 Collective vs Individual Intelligence
14:00 Ethics of AI and Data Ownership
20:21 The Future of AI in Education
26:20 AGI vs API: Extending Human Intelligence
27:48 The Future of Personal AI and Individual Histories
30:15 Understanding Web3 and Data Ownership
33:09 Legacy and Digital Memory: A New Approach
39:40 Human-Centric AI Conversations
42:06 Onboarding and Cost of Personal AI
49:45 Community Engagement and Monetization with AI

Credits:
Hosted by Michael Smith and Ryan Roghaar
Produced by Ryan Roghaar
Theme music: "Perfect Day" by OPM

The Carton: https://medium.com/the-carton-by-eggs
Feature with Zack Chmeis of Straight Method up now! https://medium.com/the-carton-by-eggs/zack-chmeis-35dae817ac28
The Eggs Podcast Spotify playlist: bit.ly/eggstunes

The Plugs:
The Show: eggscast.com
@eggshow on twitter and instagram
On iTunes: itun.es/i6dX3pC
On Stitcher: bit.ly/eggs_on_stitcher
Also available on Google Play Music!

Mike "DJ Ontic":
Shows and info: djontic.com
@djontic on twitter

Ryan Roghaar:
rogha.ar
Bundle tickets for AIE Summit NYC have now sold out. You can now sign up for the livestream — where we will be making a big announcement soon. NYC-based readers and Summit attendees should check out the meetups happening around the Summit.

2024 was a very challenging year for AI Hardware. After the buzz of CES last January, 2024 was marked by the meteoric rise and even harder fall of AI Wearables companies like Rabbit and Humane, with an assist from a pre-wallpaper-app MKBHD. Even Friend.com, the first to launch in the AI pendant category, and which spurred Rewind AI to rebrand to Limitless and follow in their footsteps, ended up delaying their wearable ship date and launching an experimental website chatbot version. We have been cautiously excited about this category, keeping tabs on most of the top entrants, including Omi and Compass. However, to date the biggest winner still standing from the AI Wearable wars is Bee AI, founded by today's guests Maria and Ethan.

Bee is an always-on hardware device with beamforming microphones, 7-day battery life and a mute button, that can be worn as a wristwatch or a clip-on pin, backed by an incredible transcription, diarization and very-long-context memory processing pipeline that helps you to remember your day, your todos, and even perform actions by operating a virtual cloud phone. This is one of the most advanced, production-ready personal AI agents we've ever seen, so we were excited to be their first podcast appearance. We met Bee when we ran the world's first Personal AI meetup in April last year.

As a user of Bee (and not an investor! just a friend!) it's genuinely been a joy to use, and we were glad to take advantage of the opportunity to ask hard questions about the privacy and legal/ethical side of things as much as the AI and Hardware engineering side of Bee. We hope you enjoy the episode and tune in next Friday for Bee's first conference talk: Building Perfect Memory.

Show Notes
* Bee Website
* Ethan Sutin, Maria de Lourdes Zollo
* Bee @ Personal AI Meetup
* Buy Bee with Listener Discount Code!

Timestamps
* 00:00:00 Introductions and overview of Bee Computer
* 00:01:58 Personal context and use cases for Bee
* 00:03:02 Origin story of Bee and the founders' background
* 00:06:56 Evolution from app to hardware device
* 00:09:54 Short-term value proposition for users
* 00:12:17 Demo of Bee's functionality
* 00:17:54 Hardware form factor considerations
* 00:22:22 Privacy concerns and legal considerations
* 00:30:57 User adoption and reactions to wearing Bee
* 00:35:56 CES experience and hardware manufacturing challenges
* 00:41:40 Software pipeline and inference costs
* 00:53:38 Technical challenges in real-time processing
* 00:57:46 Memory and personal context modeling
* 01:02:45 Social aspects and agent-to-agent interactions
* 01:04:34 Location sharing and personal data exchange
* 01:05:11 Personality analysis capabilities
* 01:06:29 Hiring and future of always-on AI

Transcript
Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of SmallAI.swyx [00:00:12]: Hey, and today we are very honored to have in the studio Maria and Ethan from Bee.Maria [00:00:16]: Hi, thank you for having us.swyx [00:00:20]: And you are, I think, the first hardware founders we've had on the podcast. I've been looking to have had a hardware founder, like a wearable hardware, like a wearable hardware founder for a while. I think we're going to have two or three of them this year.
And you're the ones that I wear every day. So thank you for making Bee. Thank you for all the feedback and the usage. Yeah, you know, I've been a big fan. You are the speaker gift for the Engineering World's Fair. And let's start from the beginning. What is Bee Computer?Ethan [00:00:52]: Bee Computer is a personal AI system. So you can think of it as AI living alongside you in first person. So it can kind of capture your in real life. So with that understanding can help you in significant ways. You know, the obvious one is memory, but that's that's really just the base kind of use case. So recalling and reflective. I know, Swyx, that you you like the idea of journaling, but you don't but still have some some kind of reflective summary of what you experienced in real life. But it's also about just having like the whole context of a human being and understanding, you know, giving the machine the ability to understand, like, what's going on in your life. Your attitudes, your desires, specifics about your preferences, so that not only can it help you with recall, but then anything that you need it to do, it already knows, like, if you think about like somebody who you've worked with or lived with for a long time, they just know kind of without having to ask you what you would want, it's clear that like, that is the future that personal AI, like, it's just going to be very, you know, the AI is just so much more valuable with personal context.Maria [00:01:58]: I will say that one of the things that we are really passionate is really understanding this. Personal context, because we'll make the AI more useful. Think about like a best friend that know you so well. That's one of the things that we are seeing from the user. They're using from a companion standpoint or professional use cases. There are many ways to use B, but companionship and professional are the ones that we are seeing now more.swyx [00:02:22]: Yeah. It feels so dry to talk about use cases. Yeah. Yeah.Maria [00:02:26]: It's like really like investor question. Like, what kind of use case?Ethan [00:02:28]: We're just like, we've been so broken and trained. But I mean, on the base case, it's just like, don't you want your AI to know everything you've said and like everywhere you've been, like, wouldn't you want that?Maria [00:02:40]: Yeah. And don't stay there and repeat every time, like, oh, this is what I like. You already know that. And you do things for me based on that. That's I think is really cool.swyx [00:02:50]: Great. Do you want to jump into a demo? Do you have any other questions?Alessio [00:02:54]: I want to maybe just cover the origin story. Just how did you two meet? What was the was this the first idea you started working on? Was there something else before?Maria [00:03:02]: I can start. So Ethan and I, we know each other from six years now. He had a company called Squad. And before that was called Olabot and was a personal AI. Yeah, I should. So maybe you should start this one. But yeah, that's how I know Ethan. Like he was pivoting from personal AI to Squad. And there was a co-watching with friends product. I had experience working with TikTok and video content. So I had the pivoting and we launched Squad and was really successful. And at the end. The founders decided to sell that to Twitter, now X. So both of us, we joined X. We launched Twitter Spaces. We launched many other products. 
And yeah, till then, we basically continue to work together to the start of B.Ethan [00:03:46]: The interesting thing is like this isn't the first attempt at personal AI. In 2016, when I started my first company, it started out as a personal AI company. This is before Transformers, no BERT even like just RNNs. You couldn't really do any convincing dialogue at all. I met Esther, who was my previous co-founder. We both really interested in the idea of like having a machine kind of model or understand a dynamic human. We wanted to make personal AI. This was like more geared towards because we had obviously much limited tools, more geared towards like younger people. So I don't know if you remember in 2016, there was like a brief chatbot boom. It was way premature, but it was when Zuckerberg went up on F8 and yeah, M and like. Yeah. The messenger platform, people like, oh, bots are going to replace apps. It was like for about six months. And then everybody realized, man, these things are terrible and like they're not replacing apps. But it was at that time that we got excited and we're like, we tried to make this like, oh, teach the AI about you. So it was just an app that you kind of chatted with and it would ask you questions and then like give you some feedback.Maria [00:04:53]: But Hugging Face first version was launched at the same time. Yeah, we started it.Ethan [00:04:56]: We started out the same office as Hugging Face because Betaworks was our investor. So they had to think. They had a thing called Bot Camp. Betaworks is like a really cool VC because they invest in out there things. They're like way ahead of everybody else. And like back then it was they had something called Bot Camp. They took six companies and it was us and Hugging Face. And then I think the other four, I'm pretty sure, are dead. But and Hugging Face was the one that really got, you know, I mean, 30% success rate is pretty good. Yeah. But yeah, when we it was, it was like it was just the two founders. Yeah, they were kind of like an AI company in the beginning. It was a chat app for teenagers. A lot of people don't know that Hugging Face was like, hey, friend, how was school? Let's trade selfies. But then, you know, they built the Transformers library, I believe, to help them make their chat app better. And then they open sourced and it was like it blew up. And like they're like, oh, maybe this is the opportunity. And now they're Hugging Face. But anyway, like we were obsessed with it at that time. But then it was clear that there's some people who really love chatting and like answering questions. But it's like a lot of work, like just to kind of manually.Maria [00:06:00]: Yeah.Ethan [00:06:01]: Teach like all these things about you to an AI.Maria [00:06:04]: Yeah, there were some people that were super passionate, for example, teenagers. They really like, for example, to speak about themselves a lot. So they will reply to a lot of questions and speak about them. But most of the people, they don't really want to spend time.Ethan [00:06:18]: And, you know, it's hard to like really bring the value with it. We had like sentence similarity and stuff and could try and do, but it was like it was premature with the technology at the time. And so we pivoted. We went to YC and the long story, but like we pivoted to consumer video and that kind of went really viral and got a lot of usage quickly. 
And then we ended up selling it to Twitter, worked there and left before Elon, not related to Elon, but left Twitter.swyx [00:06:46]: And then I should mention this is the famous time when well, when when Elon was just came in, this was like Esther was the famous product manager who slept there.Ethan [00:06:56]: My co-founder, my former co-founder, she sleeping bag. She was the sleep where you were. Yeah, yeah, she stayed. We had left by that point.swyx [00:07:03]: She very stayed, she's famous for staying.Ethan [00:07:06]: Yeah, but later, later left or got, I think, laid off, laid off. Yeah, I think the whole product team got laid off. She was a product manager, director. But yeah, like we left before that. And then we're like, oh, my God, things are different now. You know, I think this is we really started working on again right before ChatGPT came out. But we had an app version and we kind of were trying different things around it. And then, you know, ultimately, it was clear that, like, there were some limitations we can go on, like a good question to ask any wearable company is like, why isn't this an app? Yes. Yeah. Because like.Maria [00:07:40]: Because we tried the app at the beginning.Ethan [00:07:43]: Yeah. Like the idea that it could be more of a and B comes from ambient. So like if it was more kind of just around you all the time and less about you having to go open the app and do the effort to, like, enter in data that led us down the path of hardware. Yeah. Because the sensors on this are microphones. So it's capturing and understanding audio. We started actually our first hardware with a vision component, too. And we can talk about why we're not doing that right now. But if you wanted to, like, have a continuous understanding of audio with your phone, it would monopolize your microphone. It would get interrupted by calls and you'd have to remember to turn it on. And like that little bit of friction is actually like a substantial barrier to, like, get your phone. It's like the experience of it just being with you all the time and like living alongside you. And so I think that that's like the key reason it's not an app. And in fact, we do have Apple Watch support. So anybody who has a watch, Apple Watch can use it right away without buying any hardware. Because we worked really hard to make a version for the watch that can run in the background, not super drain your battery. But even with the watch, there's still friction because you have to remember to turn it on and it still gets interrupted if somebody calls you. And you have to remember to. We send a notification, but you still have to go back and turn it on because it's just the way watchOS works.Maria [00:09:04]: One of the things that we are seeing from our Apple Watch users, like I love the Apple Watch integration. One of the things that we are seeing is that people, they start using it from Apple Watch and after a couple of days they buy the B because they just like to wear it.Ethan [00:09:17]: Yeah, we're seeing.Maria [00:09:18]: That's something that like they're learning and it's really cool. Yeah.Ethan [00:09:21]: I mean, I think like fundamentally we like to think that like a personal AI is like the mission. And it's more about like the understanding. Connecting the dots, making use of the data to provide some value. And the hardware is like the ears of the AI. It's not like integrating like the incoming sensor data. And that's really what we focus on. 
And like the hardware is, you know, if we can do it well and have a great experience on the Apple Watch like that, that's just great. I mean, but there's just some platform restrictions that like existing hardware makes it hard to provide that experience. Yeah.Alessio [00:09:54]: What do people do in like two or three days that then convinces them to buy it? They buy the product. This feels like a product where like after you use it for a while, you have enough data to start to get a lot of insights. But it sounds like maybe there's also like a short term.Maria [00:10:07]: From the Apple Watch users, I believe that because every time that you receive a call after, they need to go back to B and open it again. Or for example, every day they need to charge Apple Watch and reminds them to open the app every day. They feel like, okay, maybe this is too much work. I just want to wear the B and just keep it open and that's it. And I don't need to think about it.Ethan [00:10:27]: I think they see the kind of potential of it just from the watch. Because even if you wear it a day, like we send a summary notification at the end of the day about like just key things that happened to you in your day. And like I didn't even think like I'm not like a journaling type person or like because like, oh, I just live the day. Why do I need to like think about it? But like it's actually pretty sometimes I'm surprised how interesting it is to me just to kind of be like, oh, yeah, that and how it kind of fits together. And I think that's like just something people get immediately with the watch. But they're like, oh, I'd like an easier watch. I'd like a better way to do this.swyx [00:10:58]: It's surprising because I only know about the hardware. But I use the watch as like a backup for when I don't have the hardware. I feel like because now you're beamforming and all that, this is significantly better. Yeah, that's the other thing.Ethan [00:11:11]: We have way more control over like the Apple Watch. You're limited in like you can't set the gain. You can't change the sample rate. There's just very limited framework support for doing anything with audio. Whereas if you control it. Then you can kind of optimize it for your use case. The Apple Watch isn't meant to be kind of recording this. And we can talk when we get to the part about audio, why it's so hard. This is like audio on the hardest level because you don't know it has to work in all environments or you try and make it work as best as it can. Like this environment is very great. We're in a studio. But, you know, afterwards at dinner in a restaurant, it's totally different audio environment. And there's a lot of challenges with that. And having really good source audio helps. But then there's a lot more. But with the machine learning that still is, you know, has to be done to try and account because like you can tune something for one environment or another. But it'll make one good and one bad. And like making something that's flexible enough is really challenging.Alessio [00:12:10]: Do we want to do a demo just to set the stage? And then we kind of talk about.Maria [00:12:14]: Yeah, I think we can go like a walkthrough and the prod.Alessio [00:12:17]: Yeah, sure.swyx [00:12:17]: So I think we said I should. So for listeners, we'll be switching to video. That was superimposed on. And to this video, if you want to see it, go to our YouTube, like and subscribe as always. Yeah.Maria [00:12:31]: And by the bee. Yes.swyx [00:12:33]: And by the bee. 
While you wait. While you wait. Exactly. It doesn't take long.Maria [00:12:39]: Maybe you should have a discount code just for the listeners. Sure.swyx [00:12:43]: If you want to offer it, I'll take it. All right. Yeah. Well, discount code Swyx. Oh s**t. Okay. Yeah. There you go.Ethan [00:12:49]: An important thing to mention also is that the hardware is meant to work with the phone. And like, I think, you know, if you, if you look at rabbit or, or humane, they're trying to create like a new hardware platform. We think that the phone's just so dominant and it will be until we have the next generation, which is not going to be for five, you know, maybe some Orion type glasses that are cheap enough and like light enough. Like that's going to take a long time before with the phone rather than trying to just like replace it. So in the app, we have a summary of your days, but at the top, it's kind of what's going on now. And that's updating your phone. It's updating continuously. So right now it's saying, I'm discussing, you know, the development of, you know, personal AI, and that's just kind of the ongoing conversation. And then we give you a readable form. That's like little kind of segments of what's the important parts of the conversations. We do speaker identification, which is really important because you don't want your personal AI thinking you said something and attributing it to you when it was just somebody else in the conversation. So you can also teach it other people's voices. So like if some, you know, somebody close to you, so it can start to understand your relationships a little better. And then we do conversation end pointing, which is kind of like a task that didn't even exist before, like, cause nobody needed to do this. But like if you had somebody's whole day, how do you like break it into logical pieces? And so we use like not just voice activity, but other signals to try and split up because conversations are a little fuzzy. They can like lead into one, can start to the next. So also like the semantic content of it. When a conversation ends, we run it through larger models to try and get a better, you know, sense of the actual, what was said and then summarize it, provide key points. What was the general atmosphere and tone of the conversation and potential action items that might've come of that. But then at the end of the day, we give you like a summary of all your day and where you were and just kind of like a step-by-step walkthrough of what happened and what were the key points. That's kind of just like the base capture layer. So like if you just want to get a kind of glimpse or recall or reflect that's there. But really the key is like all of this is now like being influenced on to generate personal context about you. So we generate key items known to be true about you and that you can, you know, there's a human in the loop aspect is like you can, you have visibility. Right. Into that. And you can, you know, I have a lot of facts about technology because that's basically what I talk about all the time. Right. But I do have some hobbies that show up and then like, how do you put use to this context? So I kind of like measure my day now and just like, what is my token output of the day? You know, like, like as a human, how much information do I produce? And it's kind of measured in tokens and it turns out it's like around 200,000 or so a day. But so in the recall case, we have, um. A chat interface, but the key here is on the recall of it. 
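To make the pipeline Ethan describes above a bit more concrete, here is a minimal sketch of conversation end-pointing and summarization: speaker-tagged segments are split into conversations using a voice-activity signal (long silences) combined with a semantic signal (topic shift), and each finished conversation is passed to a larger model for key points, tone, and action items. This is illustrative only; the class, function names, thresholds, and the `llm` callable are hypothetical assumptions, not Bee's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str          # label from a diarization / speaker-identification step
    text: str             # transcribed speech for this segment
    end_silence_s: float  # silence after this segment (voice-activity signal)

def is_conversation_boundary(prev: Segment, topic_shift: float) -> bool:
    """Decide whether one conversation has ended and the next begun,
    combining a voice-activity signal (a long silence) with a semantic
    signal (how much the topic shifted between adjacent segments)."""
    LONG_SILENCE_S = 120.0   # hypothetical threshold
    TOPIC_SHIFT_MIN = 0.6    # hypothetical embedding-distance threshold
    return prev.end_silence_s > LONG_SILENCE_S or topic_shift > TOPIC_SHIFT_MIN

def summarize_conversation(segments: list[Segment], llm) -> str:
    """Run a finished conversation through a larger model to get key points,
    overall tone, and potential action items, focused on the primary user."""
    transcript = "\n".join(f"{s.speaker}: {s.text}" for s in segments)
    prompt = (
        "Summarize this conversation for the primary user. "
        "Return key points, the overall tone, and any action items.\n\n"
        + transcript
    )
    return llm(prompt)
```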
Like, you know, how do you, you know, I probably have 50 million tokens of personal context and like how to make sense of that, make it useful. So I can ask simple, like, uh, recall questions, like details about the trip I was on to Taiwan, where recently we're with our manufacturer and, um, in real time, like it will, you know, it has various capabilities such as searching through your, your memories, but then also being able to search the web or look at my calendar, we have integrations with Gmail and calendars. So like connecting the dots between the in real life and the digital life. And, you know, I just asked it about my Taiwan trip and it kind of gives me the, the breakdown of the details, what happened, the issues we had around, you know, certain manufacturing problems and it, and it goes back and references the conversation so I can, I can go back to the source. Yeah.Maria [00:16:46]: Not just the conversation as well, the integrations. So we have as well Gmail and Google calendar. So if there is something there that was useful to have more context, we can see that.Ethan [00:16:56]: So like, and it can, I never use the word agentic cause it's, it's cringe, but like it can search through, you know, if I, if I'm brainstorming about something that spans across, like search through my conversation, search the email, look at the calendar and then depending on what's needed. Then synthesize, you know, something with all that context.Maria [00:17:18]: I love that you did the Spotify wrapped. That was pretty cool. Yeah.Ethan [00:17:22]: Like one thing I did was just like make a Spotify wrap for my 2024, like of my life. You can do that. Yeah, you can.Maria [00:17:28]: Wait. Yeah. I like those crazy.Ethan [00:17:31]: Make a Spotify wrapped for my life in 2024. Yeah. So it's like surprisingly good. Um, it like kind of like game metrics. So it was like you visited three countries, you shipped, you know, XMini, beta. Devices.Maria [00:17:46]: And that's kind of more personal insights and reflection points. Yeah.swyx [00:17:51]: That's fascinating. So that's the demo.Ethan [00:17:54]: Well, we have, we can show something that's in beta. I don't know if we want to do it. I don't know.Maria [00:17:58]: We want to show something. Do it.Ethan [00:18:00]: And then we can kind of fit. Yeah.Maria [00:18:01]: Yeah.Ethan [00:18:02]: So like the, the, the, the vision is also like, not just about like AI being with you in like just passively understanding you through living your experience, but also then like it proactively suggesting things to you. Yeah. Like at the appropriate time. So like not just pool, but, but kind of, it can step in and suggest things to you. So, you know, one integration we have that, uh, is in beta is with WhatsApp. Maria is asking for a recommendation for an Italian restaurant. Would you like me to look up some highly rated Italian restaurants nearby and send her a suggestion?Maria [00:18:34]: So what I did, I just sent to Ethan a message through WhatsApp in his own personal phone. Yeah.Ethan [00:18:41]: So, so basically. B is like watching all my incoming notifications. And if it meets two criteria, like, is it important enough for me to raise a suggestion to the user? And then is there something I could potentially help with? So this is where the actions come into place. So because Maria is my co-founder and because it was like a restaurant recommendation, something that it could probably help with, it proposed that to me. 
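For illustration only, the two-gate check Ethan just described (is the incoming notification important enough, and is it something the assistant could help with) might look roughly like the sketch below; the `classify` helper, field names, and thresholds are hypothetical assumptions, not Bee's real logic.

```python
def should_suggest(notification: dict, classify) -> bool:
    """Two-gate triage for an incoming notification: (1) is it important
    enough to surface a suggestion to the user, and (2) is it something
    the assistant could actually help with? `classify` is a hypothetical
    scoring call (e.g. an LLM) returning values in [0, 1] per label."""
    scores = classify(
        f"From: {notification['sender']}\nText: {notification['text']}",
        labels=["importance", "helpfulness"],
    )
    IMPORTANCE_MIN = 0.7    # illustrative thresholds, not Bee's real values
    HELPFULNESS_MIN = 0.5
    return (scores["importance"] >= IMPORTANCE_MIN
            and scores["helpfulness"] >= HELPFULNESS_MIN)

# Example: a WhatsApp message from a co-founder asking for a restaurant
# recommendation would be expected to pass both gates and trigger a
# proposed action, as in the demo above.
```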
And then I can, through either the chat and we have another kind of push to talk walkie talkie style button. It's actually a multi-purpose button to like toggle it on or off, but also if you push to hold, you can talk. So I can say, yes, uh, find one and send it to her on WhatsApp is, uh, an Android cloud phone. So it's, uh, going to be able to, you know, that has access to all my accounts. So we're going to abstract this away and the execution environment is not really important, but like we can go into technically why Android is actually a pretty good one right now. But, you know, it's searching for Italian restaurants, you know, and we don't have to watch this. I could be, you know, have my ear AirPods in and in my pocket, you know, it's going to go to WhatsApp, going to find Maria's thread, send her the response and then, and then let us know. Oh my God.Alessio [00:19:56]: But what's the, I mean, an Italian restaurant. Yeah. What did it choose? What did it choose? It's easy to say. Real Italian is hard to play. Exactly.Ethan [00:20:04]: It's easy to say. So I doubt it. I don't know.swyx [00:20:06]: For the record, since you have the Italians, uh, best Italian restaurant in SF.Maria [00:20:09]: Oh my God. I still don't have one. What? No.Ethan [00:20:14]: I don't know. Successfully found and shared.Alessio [00:20:16]: Let's see. Let's see what the AI says. Bottega. Bottega? I think it's Bottega.Maria [00:20:21]: Have you been to Bottega? How is it?Alessio [00:20:24]: It's fine.Maria [00:20:25]: I've been to one called like Norcina, I think it was good.Alessio [00:20:29]: Bottega is on Valencia Street. It's fine. The pizza is not good.Maria [00:20:32]: It's not good.Alessio [00:20:33]: Some of the pastas are good.Maria [00:20:34]: You know, the people I'm sorry to interrupt. Sorry. But there is like this Delfina. Yeah. That here everybody's like, oh, Pizzeria Delfina is amazing. I'm overrated. This is not. I don't know. That's great. That's great.swyx [00:20:46]: The North Beach Cafe. That place you took us with Michele last time. Vega. Oh.Alessio [00:20:52]: The guy at Vega, Giuseppe, he's Italian. Which one is that? It's in Bernal Heights. Ugh. He's nice. He's not nice. I don't know that one. What's the name of the place? Vega. Vega. Vega. Cool. We got the name. Vega. But it's not Vega.Maria [00:21:02]: It's Italian. What?swyx [00:21:10]: Vega. Vega.Ethan [00:21:40]: We're going to see a lot of innovation around hardware and stuff, but I think the real core is being able to do something useful with the personal context. You always had the ability to capture everything, right? We've always had recorders, camcorders, body cameras, stuff like that. But what's different now is we can actually make sense and find the important parts in all of that context.swyx [00:22:04]: Yeah. So, and then one last thing, I'm just doing this for you, is you also have an API, which I think I'm the first developer against. Because I had to build my own. We need to hire a developer advocate. Or just hire AI engineers. The point is that you should be able to program your own assistant. And I tried OMI, the former friend, the knockoff friend, and then real friend doesn't have an API. And then Limitless also doesn't have an API. So I think it's very important to own your data. To be able to reprocess your audio, maybe. Although, by default, you do not store audio.
And then also just to do any corrections. There's no way that my needs can be fully met by you. So I think the API is very important.Ethan [00:22:47]: Yeah. And I mean, I've always been a consumer of APIs in all my products.swyx [00:22:53]: We are API enjoyers in this house.Ethan [00:22:55]: Yeah. It's very frustrating when you have to go build a scraper. But yeah, it's for sure. Yeah.swyx [00:23:03]: So this whole combination of you have my location, my calendar, my inbox. It really is, for me, the sort of personal API.Alessio [00:23:10]: And is the API just to write into it or to have it take action on external systems?Ethan [00:23:16]: Yeah, we're expanding it. It's right now read-only. In the future, very soon, when the actions are more generally available, it'll be fully supported in the API.Alessio [00:23:27]: Nice. I'll buy one after the episode.Ethan [00:23:30]: The API thing, to me, is the most interesting. Yeah. We do have real-time APIs, so you can even connect a socket and connect it to whatever you want it to take actions with. Yeah. It's too smart for me.Alessio [00:23:43]: Yeah. I think when I look at these apps, and I mean, there's so many of these products, we launch, it's great that I can go on this app and do things. But most of my work and personal life is managed somewhere else. Yeah. So being able to plug into it. Integrate that. It's nice. I have a bunch of more, maybe, human questions. Sure. I think maybe people might have. One, is it good to have instant replay for any argument that you have? I can imagine arguing with my wife about something. And, you know, there's these commercials now where it's basically like two people arguing, and they're like, they can throw a flag, like in football, and have an instant replay of the conversation. I feel like this is similar, where it's almost like people cannot really argue anymore or, like, lie to each other. Because in a world in which everybody adopts this, I don't know if you thought about it. And also, like, how the lies. You know, all of us tell lies, right? How do you distinguish between when I'm, there's going to be sometimes things that contradict each other, because I might say something publicly, and I might think something, really, that I tell someone else. How do you handle that when you think about building a product like this?Maria [00:24:48]: I would say that I like the fact that B is an objective point of view. So I don't care too much about the lies, but I care more about the fact that can help me to understand what happened. Mm-hmm. And the emotions in a really objective way, like, really, like, critical and objective way. And if you think about humans, they have so many emotions. And sometimes something that happened to me, like, I don't know, I would feel, like, really upset about it or really angry or really emotional. But the AI doesn't have those emotions. It can read the conversation, understand what happened, and be objective. And I think the level of support is the one that I really like more. Instead of, like, oh, did this guy tell me a lie? I feel like that's not exactly, like, what I feel. I find it curious for me in terms of opportunity.Alessio [00:25:35]: Is the B going to interject in real time? Say I'm arguing with somebody. The B is like, hey, look, no, you're wrong. What? That person actually said.Ethan [00:25:43]: The proactivity is something we're very interested in. 
Maybe not for, like, specifically for, like, selling arguments, but more for, like, and I think that a lot of the challenge here is, you know, you need really good reasoning to kind of pull that off. Because you don't want it just constantly interjecting, because that would be super annoying. And you don't want it to miss things that it should be interjecting. So, like, it would be kind of a hard task even for a human to be, like, just come in at the right times when it's appropriate. Like, it would take the, you know, with the personal context, it's going to be a lot better. Because, like, if somebody knows about you, but even still, it requires really good reasoning to, like, not be too much or too little and just right.Maria [00:26:20]: And the second part about, well, like, some things, you know, you say something to somebody else, but after I change my mind, I send something. Like, it's every time I have, like, different type of conversation. And I'm like, oh, I want to know more about you. And I'm like, oh, I want to know more about you. I think that's something that I found really fascinating. One of the things that we are learning is that, indeed, humans, they evolve over time. So, for us, one of the challenges is actually understand, like, is this a real fact? Right. And so far, what we do is we give, you know, to the, we have the human in the loop that can say, like, yes, this is true, this is not. Or they can edit their own fact. For sure, in the future, we want to have all of that automatized inside of the product.Ethan [00:26:57]: But, I mean, I think your question kind of hits on, and I know that we'll talk about privacy, but also just, like, if you have some memory and you want to confirm it with somebody else, that's one thing. But it's for sure going to be true that in the future, like, not even that far into the future, that it's just going to be kind of normalized. And we're kind of in a transitional period now. And I think it's, like, one of the key things that is for us to kind of navigate that and make sure we're, like, thinking of all the consequences. And how to, you know, make the right choices in the way that everything's designed. And so, like, it's more beneficial than it could be harmful. But it's just too valuable for your AI to understand you. And so if it's, like, MetaRay bands or the Google Astra, I think it's just people are going to be more used to it. So people's behaviors and expectations will change. Whether that's, like, you know, something that is going to happen now or in five years, it's probably in that range. And so, like, I think we... We kind of adapt to new technologies all the time. Like, when the Ring cameras came out, that was kind of quite controversial. It's like... But now it's kind of... People just understand that a lot of people have cameras on their doors. And so I think that...Maria [00:28:09]: Yeah, we're in a transitional period for sure.swyx [00:28:12]: I will press on the privacy thing because that is the number one thing that everyone talks about. Obviously, I think in Silicon Valley, people are a little bit more tech-forward, experimental, whatever. But you want to go mainstream. You want to sell to consumers. And we have to worry about this stuff. Baseline question. The hardest version of this is law. There are one-party consent states where this is perfectly legal. Then there are two-party consent states where they're not. 
What have you come around to this on?Ethan [00:28:38]: Yeah, so the EU is a totally different regulatory environment. But in the U.S., it's basically on a state-by-state level. Like, in Nevada, it's single-party. In California, it's two-party. But it's kind of untested. You know, it's different laws, whether it's a phone call, whether it's in person. In a state like California, it's two-party. Like, anytime you're in public, there's no consent comes into play because the expectation of privacy is that you're in public. But we process the audio and nothing is persisted. And then it's summarized with the speaker identification focusing on the user. Now, it's kind of untested on a legal, and I'm not a lawyer, but does that constitute the same as, like, a recording? So, you know, it's kind of a gray area and untested in law right now. I think that the bigger question is, you know, because, like, if you had your Ray-Ban on and were recording, then you have a video of something that happened. And that's different than kind of having, like, an AI give you a summary that's focused on you that's not really capturing anybody's voice. You know, I think the bigger question is, regardless of the legal status, like, what is the ethical kind of situation with that? Because even in Nevada that we're—or many other U.S. states where you can record. Everything. And you don't have to have consent. Is it still, like, the right thing to do? The way we think about it is, is that, you know, we take a lot of precautions to kind of not capture personal information of people around. Both through the speaker identification, through the pipeline, and then the prompts, and the way we store the information to be kind of really focused on the user. Now, we know that's not going to, like, satisfy a lot of people. But I think if you do try it and wear it again. It's very hard for me to see anything, like, if somebody was wearing a bee around me that I would ever object that it captured about me as, like, a third party to it. And like I said, like, we're in this transitional period where the expectation will just be more normalized. That it's, like, an AI. It's not capturing, you know, a full audio recording of what you said. And it's—everything is fully geared towards helping the person kind of understand their state and providing valuable information to them. Not about, like, logging details about people they encounter.Alessio [00:30:57]: You know, I've had the same question also with the Zoom meeting transcribers thing. I think there's kind of, like, the personal impact that there's a Firefly's AI recorder. Yeah. I just know that it's being recorded. It's not like a—I don't know if I'm going to say anything different. But, like, intrinsically, you kind of feel—because it's not pervasive. And I'm curious, especially, like, in your investor meetings. Do people feel differently? Like, have you had people ask you to, like, turn it off? Like, in a business meeting, to not record? I'm curious if you've run into any of these behaviors.Maria [00:31:29]: You know what's funny? On my end, I wear it all the time. I take my coffee, a blue bottle with it. Or I work with it. Like, obviously, I work on it. So, I wear it all the time. And so far, I don't think anybody asked me to turn it off. I'm not sure if because they were really friendly with me that they know that I'm working on it. But nobody really cared.swyx [00:31:48]: It's because you live in SF.Maria [00:31:49]: Actually, I've been in Italy as well. Uh-huh. 
And in Italy, it's a super privacy concern. Like, Europe is a super privacy concern. And again, they're nothing. Like, it's—I don't know. Yeah. That, for me, was interesting.Ethan [00:32:01]: I think—yeah, nobody's ever asked me to turn it off, even after giving them full demos and disclosing. I think that some people have said, well, my—you know, in a personal relationship, my partner initially was, like, kind of uncomfortable about it. We heard that from a few users. And that was, like, more in just, like— It's not like a personal relationship situation. And the other big one is people are like, I do like it, but I cannot wear this at work. I guess. Yeah. Yeah. Because, like, I think I will get in trouble based on policies or, like, you know, if you're wearing it inside a research lab or something where you're working on things that are kind of sensitive that, like—you know, so we're adding certain features like geofencing, just, like, at this location. It's just never active.swyx [00:32:50]: I mean, I've often actually explained to it the other way, where maybe you only want it at work, so you never take it from work. And it's just a work device, just like your Zoom meeting recorder is a work device.Ethan [00:33:09]: Yeah, professionals have been a big early adopter segment. And you say in San Francisco, but we have out there our daily shipment of over 100. If you go look at the addresses, Texas, I think, is our biggest state, and Florida, just the biggest states. A lot of professionals who talk for, and we didn't go out to build it for that use case, but I think there is a lot of demand for white-collar people who talk for a living. And I think we're just starting to talk with them. I think they just want to be able to improve their performance around, understand what they were doing.Alessio [00:33:47]: How do you think about Gong.io? Some of these, for example, sales training thing, where you put on a sales call and then it coaches you. They're more verticalized versus having more horizontal platform.Ethan [00:33:58]: I am not super familiar with those things, because like I said, it was kind of a surprise to us. But I think that those are interesting. I've seen there's a bunch of them now, right? Yeah. It kind of makes sense. I'm terrible at sales, so I could probably use one. But it's not my job, fundamentally. But yeah, I think maybe it's, you know, we heard also people with restaurants, if they're able to understand, if they're doing well.Maria [00:34:26]: Yeah, but in general, I think a lot of people, they like to have the double check of, did I do this well? Or can you suggest me how I can do better? We had a user that was saying to us that he used for interviews. Yeah, he used job interviews. So he used B and after asked to the B, oh, actually, how do you think my interview went? What I should do better? And I like that. And like, oh, that's actually like a personal coach in a way.Alessio [00:34:50]: Yeah. But I guess the question is like, do you want to build all of those use cases? Or do you see B as more like a platform where somebody is going to build like, you know, the sales coach that connects to B so that you're kind of the data feed into it?Ethan [00:35:02]: I don't think this is like a data feed, more like an understanding kind of engine and like definitely. In the future, having third parties to the API and building out for all the different use cases is something that we want to do. 
But the like initial case we're trying to do is like build that layer for all that to work. And, you know, we're not trying to build all those verticals because no startup could do that well. But I think that it's really been quite fascinating to see, like, you know, I've done consumer for a long time. Consumer is very hard to predict, like, what's going to be. It's going to be like the thing that's the killer feature. And so, I mean, we really believe that it's the future, but we don't know like what exactly like process it will take to really gain mass adoption.swyx [00:35:50]: The killer consumer feature is whatever Nikita Beer does. Yeah. Social app for teens.Ethan [00:35:56]: Yeah, well, I like Nikita, but, you know, he's good at building bootstrap companies and getting them very viral. And then selling them and then they shut down.swyx [00:36:05]: Okay, so you just came back from CES.Maria [00:36:07]: Yeah, crazy. Yeah, tell us. It was my first time in Vegas and first time CES, both of them were overwhelming.swyx [00:36:15]: First of all, did you feel like you had to do it because you're in consumer hardware?Maria [00:36:19]: Then we decided to be there and to have a lot of partners and media meetings, but we didn't have our own booth. So we decided to just keep that. But we decided to be there and have a presence there, even just us and speak with people. It's very hard to stand out. Yeah, I think, you know, it depends what type of booth you have. I think if you can prepare like a really cool booth.Ethan [00:36:41]: Have you been to CES?Maria [00:36:42]: I think it can be pretty cool.Ethan [00:36:43]: It's massive. It's huge. It's like 80,000, 90,000 people across the Venetian and the convention center. And it's, to me, I always wanted to go just like...Maria [00:36:53]: Yeah, you were the one who was like...swyx [00:36:55]: I thought it was your idea.Ethan [00:36:57]: I always wanted to go just as a, like, just as a fan of...Maria [00:37:01]: Yeah, you wanted to go anyways.Ethan [00:37:02]: Because like, growing up, I think CES like kind of peaked for a while and it was like, oh, I want to go. That's where all the cool, like... gadgets, everything. Yeah, now it's like SmartBitch and like, you know, vacuuming the picks up socks. Exactly.Maria [00:37:13]: There are a lot of cool vacuums. Oh, they love it.swyx [00:37:15]: They love the Roombas, the pick up socks.Maria [00:37:16]: And pet tech. Yeah, yeah. And dog stuff.swyx [00:37:20]: Yeah, there's a lot of like robot stuff. New TVs, new cars that never ship. Yeah. Yeah. I'm thinking like last year, this time last year was when Rabbit and Humane launched at CES and Rabbit kind of won CES. And now this year, no wearables except for you guys.Ethan [00:37:32]: It's funny because it's obviously it's AI everything. Yeah. Like every single product. Yeah.Maria [00:37:37]: Toothbrush with AI, vacuums with AI. Yeah. Yeah.Ethan [00:37:41]: We like hair blow, literally a hairdryer with AI. We saw.Maria [00:37:45]: Yeah, that was cool.Ethan [00:37:46]: But I think that like, yeah, we didn't, another kind of difference like around our, like we didn't want to do like a big overhypey promised kind of Rabbit launch. Because I mean, they did, hats off to them, like on the presentation and everything, obviously. But like, you know, we want to let the product kind of speak for itself and like get it out there. And I think we were really happy. We got some very good interest from media and some of the partners there. 
So like it was, I think it was definitely worth going. I would say like if you're in hardware, it's just kind of how you make use of it. Like I think to do it like a big Rabbit style or to have a huge show on there, like you need to plan that six months in advance. And it's very expensive. But like if you, you know, go there, there's everybody's there. All the media is there. There's a lot of some pre-show events that it's just great to talk to people. And the industry also, all the manufacturers, suppliers are there. So we learned about some really cool stuff that we might like. We met with somebody. They have like thermal energy capture. And it's like, oh, could you maybe not need to charge it? Because they have like a thermal that can capture your body heat. And what? Yeah, they're here. They're actually here. And in Palo Alto, they have like a Fitbit thing that you don't have to charge.swyx [00:39:01]: Like on paper, that's the power you can get from that. What's the power draw for this thing?Ethan [00:39:05]: It's more than you could get from the body heat, it turns out. But it's quite small. I don't want to disclose technically. But I think that solar is still, they also have one where it's like this thing could be like the face of it. It's just a solar cell. And like that is more realistic. Or kinetic. Kinetic, apparently, I'm not an expert in this, but they seem to think it wouldn't be enough. Kinetic is quite small, I guess, on the capture.swyx [00:39:33]: Well, I mean, watch. Watchmakers have been powering with kinetic for a long time. Yeah. We don't have to talk about that. I just want to get a sense of CES. Would you do it again? I definitely would not. Okay. You're just a fan of CES. Business point of view doesn't make sense. I happen to be in the conference business, right? So I'm kind of just curious. Yeah.Maria [00:39:49]: So I would say as we did, so without the booth and really like straightforward conversations that were already planned. Three days. That's okay. I think it was okay. Okay. But if you need to invest for a booth that is not. Okay. A good one. Which is how much? I think.Ethan [00:40:06]: 10 by 10 is 5,000. But on top of that, you need to. And then they go like 10 by 10 is like super small. Yeah. And like some companies have, I think would probably be more in like the six figure range to get. And I mean, I think that, yeah, it's very noisy. We heard this, that it's very, very noisy. Like obviously if you're, everything is being launched there and like everything from cars to cell phones are being launched. Yeah. So it's hard to stand out. But like, I think going in with a plan of who you want to talk to, I feel like.Maria [00:40:36]: That was worth it.Ethan [00:40:37]: Worth it. We had a lot of really positive media coverage from it and we got the word out and like, so I think we accomplished what we wanted to do.swyx [00:40:46]: I mean, there's some world in which my conference is kind of the CES of whatever AI becomes. Yeah. I think that.Maria [00:40:52]: Don't do it in Vegas. Don't do it in Vegas. Yeah. Don't do it in Vegas. That's the only thing. I didn't really like Vegas. That's great. Amazing. Those are my favorite ones.Alessio [00:41:02]: You can not fit 90,000 people in SF. That's really duh.Ethan [00:41:05]: You need to do like multiple locations so you can do Moscone and then have one in.swyx [00:41:09]: I mean, that's what Salesforce conferences. Well, GDC is how many? That might be 50,000, right? Okay. Form factor, right? 
Like, my way to introduce this idea was that I was at the launch in Solaris. What was the old name of it? Newton. Newton. Of Tab, when Avi first launched it. He was like, I thought through everything. Every form factor; pendant is the thing. And then we got the pendants for this original one. The first one was just pendants, and I took it off and I forgot to put it back on. So you went through pendant, pin, bracelet now, and maybe there are sort of earphones in the future, but what were your iterations?Maria [00:41:49]: So we had, I believe, now three or four iterations. And one of the things that we learned is indeed that people don't like the pendant. In particular, women: you don't want to have anything here on the chest, because maybe you have another necklace or other stuff.Ethan [00:42:03]: You just ship a premium one that's gold. Yeah. We're talking... some fashion brands reached out to us.Maria [00:42:11]: Some big fashion. There is something there.swyx [00:42:13]: This is where it helps to have an Italian on the team.Maria [00:42:15]: There is, like, some big Italian luxury. I can't say anything. So yeah, the bracelet actually came from the community, because they were like, oh, I don't want to wear anything as a necklace or as a pendant. And also, the one that we had, I don't know if you remember, it was like a circle, it was like this, and it was really bulky. People didn't like it. And also, I mean, I actually don't dislike it; we were running fast when we did that. Our thing was, we wanted to ship them as soon as possible, so we were not overthinking the form factor or the material. We just wanted to be out. But then the community, organically, basically all of them were like, well, why don't you just do the bracelet? It's way better. I will just wear it, and that's it. So that's how we ended up with the bracelet, but it's still modular. So I still want to play around with the fact that it is modular: you can, you know, take it off and wear it as a clip, or in the future maybe we will bring back the pendant. But I like the fact that there is some personalization, and right now we have two colors, yellow and black. Soon we will have other ones. So yeah, we can play a lot around that.Ethan [00:43:25]: I think the form factor... like, the goal is for it to be not super invasive, right, and something that's easy. So I think in the future, smaller, thinner, not an Apple-type obsession with thinness, but it does matter, the size and weight. And we would love to have more context, because that will help, but to make it work, I think it really needs to have good power consumption, good battery life. And, you know, like with the Humane and swapping the batteries, I have one, and I mean, I think what they made, some of the engineering they did, is pretty incredible, but it wasn't really geared towards solving the problem. It's just too heavy. The swappable batteries are too much to manage, the heat, the thermals are too much. The light interface thing? Yeah, like, that's cool. It's cool. It's cool. But if you have your hand out here, you want to use your phone; it's not really solving a problem, because you know how to use your phone. It's got a brilliant display. You have to kind of learn how to gesture at this low-res... Yeah. 
It's like a low-resolution laser, but the laser is cool, the fact that they got it working in that thing, even though it did overheat. But it's too heavy, too cumbersome, too complicated with the multiple batteries. So something that's power efficient, kind of thin, both in the physical sense and also in the edge compute kind of way, so that it can be as unobtrusive as possible. Yeah.Maria [00:44:47]: Users really like it. I like when they say, yes, I like to wear it and forget about it, because I don't need to charge it every single day. On the other version, I believe we had like 35 hours or something, which was okay. But people just prefer the seven days of battery life and...swyx [00:45:03]: Oh, this is seven days? Yeah. Oh, I've been charging every three days.Maria [00:45:07]: Oh, no, you can keep it, like, yeah, it's almost seven days.swyx [00:45:11]: The other thing that occurs to me, maybe there's an Apple Watch strap so that I don't have to double watch. Yeah.Maria [00:45:17]: That's the other one that, yeah, I thought about. I saw as well the ones that you can put on the back of the phone. Like, you know... Plog. There are a lot.swyx [00:45:27]: So yeah, there's a competitor called Plog. Yeah. It's not really a competitor. They only transcribe, right? Yeah, they only transcribe. But they're very good at it. Yeah.Ethan [00:45:33]: No, they're great. Their hardware is really good too.swyx [00:45:36]: And they just launched the pin too. Yeah.Ethan [00:45:38]: I think that the MagSafe kind of form factor has a lot of advantages, but some disadvantages. You can definitely put a very huge battery on that, you know? And so the battery life, the power consumption, is not so much of a concern, but, you know, the downside is the phone's in your pocket. And so I think that, you know, form factors will continue to evolve, with more sensors, less obtrusive and...Maria [00:46:02]: Yeah. We have a new version.Ethan [00:46:04]: Easier to use.Maria [00:46:05]: Okay.swyx [00:46:05]: Looking forward to that. Yeah. I mean, whenever we launch this, we'll try to show whatever, but I'm sure you're going to keep iterating. Last thing on hardware, and then we'll go on to the software side, because I think that's where you guys are also really, really strong. Vision. You wanted to talk about why no vision? Yeah.Ethan [00:46:20]: I think it comes down to, when you're a startup, especially in hardware, you just work within the constraints, right? And so vision is super useful and super interesting. And with what we actually started with, there are two issues with vision that make it not the place we decided to start. One is power consumption. So, you know, you kind of have to trade off your power budget: capturing even at a low frame rate and transmitting over the radio is actually the thing that takes up the majority of the power. So, yeah, you would really have to have quite an unacceptably large and heavy battery to do it continuously all day. We have, I think, novel kind of alternative ways that might allow us to do that, and we have some prototypes. The other issue is form factor. So even with a wide field of view, if you're wearing something on your chest... you know, obviously the wrist is not really that much of an option. And if you're wearing it on your chest, it's often... you're probably not going to be capturing the field of view of what's interesting to you. 
So that leaves you kind of with your head and face. And then anything that goes on the face has to look cool. I don't know if you remember the Spectacles; it was kind of like the first, yeah, but they were not very successful. And I think one of the reasons is they were so weird looking. Yeah. The camera was so big on the side. And if you look at the Meta Ray-Bans, where they're way more successful, they look almost indistinguishable from regular Ray-Bans. And they invested a lot into that, and they have a partnership with Qualcomm to develop custom silicon. They have a stake in Luxottica now. So they're coming at it from all the angles to make glasses work. I think, you know, I don't know if you know Brilliant Labs; they're a cool company, they make Frame, which is kind of like cool hackable glasses, and they're really good on hardware, they're really good. But even if you look at the Frame, which I would say is like the most advanced kind of startup... Yeah. Yeah. Yeah. There was one that launched at CES, but it's not shipping yet. Like, of the ones you can buy now, it's still not something you'd wear every day, and the battery life is super short. So I think just the challenge of doing vision right, like off the bat, would require quite a bit more resources. And so audio is such a good entry point, and there's also the privacy around audio. If you had images, that's another huge challenge to overcome. So I think that... Ideally the personal AI would have, you know, all the senses, and, you know, we'll get there. Yeah. Okay.swyx [00:48:57]: One last hardware thing. I have to ask this because then we'll move to the software. Were either of you electrical engineering?Ethan [00:49:04]: No, I'm CS. And so I've taken some EE courses, but prior to working on the hardware here, I had done a little bit of embedded systems, very little firmware. But luckily we have somebody on the team with deep experience. Yeah.swyx [00:49:21]: I'm just like, you know, you have to become hardware people. Yeah.Ethan [00:49:25]: Yeah. I mean, I learned to worry about supply chain, power, things like radio.Maria [00:49:30]: There's so many things to learn.Ethan [00:49:32]: I would say this about hardware, and I know it's been said before, but building a prototype and learning how the electronics work and learning about firmware and developing, this is, I think, fun for a lot of engineers, and it's all totally achievable, especially now with the tools we have. Stuff you might've been intimidated about, like, how do I write this firmware? Now, with Sonnet, you can get going and actually see results quickly. But I think going from prototype to actually making something manufactured is an enormous jump. And it's not all about technology: the supply chain, the procurement, the regulations, the cost, the tooling. The thing about software that I'm used to is, funnily, that you can make changes all along the way and ship it. But when you have to buy tooling for an enclosure, that's expensive.swyx [00:50:24]: Do you buy your own tooling? You have to.Ethan [00:50:25]: Don't you just subcontract out to someone in China? Oh, no. Do we make the tooling? No, no. 
You have to have CNC and, like, a bunch of machines.Maria [00:50:31]: Like, nobody makes their own tooling, but you have to do the design and you submitEthan [00:50:36]: it, and then they go, and four to six weeks later... Yeah. And then if there's a problem with it, well, then you're not making any of your enclosures. And so you have to really plan ahead. And like...swyx [00:50:48]: I just want to leave tips for other hardware founders. What resources or websites are most helpful in your sort of manufacturing journey?Ethan [00:50:55]: You know, I think it's different depending, because hardware is so specialized in different ways.Maria [00:51:00]: I will say that, for example, to choose a manufacturing company, I speak with other founders, and we can give you some tips on who is good and who is not, or who's specialized in something versus somebody else. Yeah.Ethan [00:51:15]: Like, some people are good in plastics. Some people are good...Maria [00:51:18]: I think for us, it really helped at the beginning to speak with others and understand, okay, who is around. I worked in Shenzhen. I lived almost two years in China. I have an idea about the different hardware manufacturers and all of that. Soon I will go back to Shenzhen to check things out. So I think it's good also to go there in person and check.Ethan [00:51:40]: Yeah, so we did some stuff domestically, if you have that ability. The reason I say ability is that it's very expensive, but to build out some proof of concepts and do field testing before you take it to a manufacturer: despite what people say, there's really good domestic manufacturing for small quantities at extremely high prices. So we got our first PCB and the assembly done in LA. There's a lot of good capacity there, because of the defense industry, that can do quick-turn work. So it's like, we need this board, we need to find out if it's working, we have this deadline we want to hit, but you need to go through this. And if you want to have it done and fabricated in a week, they can do it, for a price. But I think, you know, everybody's kind of trending, even for prototyping now, toward moving that offshore, because in China you can do prototyping and get it within almost the same timeline. But the thing is with manufacturing, it really helps to go there and kind of establish the relationship. Yeah.Alessio [00:52:38]: My first company was a hardware company and we did our PCBs in China, and it took a long time. Now things are better. But this was, yeah, I don't know, 10 years ago, something like that. Yeah.Ethan [00:52:47]: I think that, and I've heard this too, we didn't run into this problem, but, you know, if it's something where you don't have the relationship, they don't see you, they don't know you, you know, you might get subcontracted out, or they're not paying attention. But if you have the relationship and are a priority, yeah, it's really good. We ended up doing the fabrication and assembly in Taiwan for various reasons.Maria [00:53:11]: And I think it really helped, the fact that you went there at some point. Yeah.Ethan [00:53:15]: We're really happy with the process, but I mean the whole process of just... Choosing the right people. Choosing the right people, but also just sourcing the bill of materials and all of that stuff. 
Like, I guess if you have time, it's not that bad, but if you're trying to really push the speed on that, it's incredibly stressful. Okay. We've got to move to the software. Yeah.Alessio [00:53:38]: Yeah. So the hardware, maybe it's hard for people to understand, but what software people can understand is that running transcription and summarization, all of these things, in real time, 24 hours a day, every day, is not easy. So you mentioned 200,000 tokens for a day. Yeah. How do you make it basically free to run all of this for the consumer?Ethan [00:53:59]: Well, I think that the pipeline and the inference... people think about all of these tokens, but as you know, the price of tokens is dramatically dropping. You guys probably have some charts somewhere that you've posted. We do. And if you see that trend, like 250,000 input tokens is not really that much, right? Like the output...swyx [00:54:21]: You do several layers. You do live. Yeah.Ethan [00:54:23]: Yeah. So the speech to text is actually the most challenging part, because, you know, it requires real-time processing and then later processing with a larger model. And one thing that is fairly obvious is that you don't need to transcribe things that don't have any voice in them, right? So good voice activity detection is key, right? Because the majority of most people's day is not spent with voice activity, right? So that is the first step to cutting down the amount of compute you have to do. And voice activity detection is a fairly cheap thing to do. Very, very cheap. For the models that need to summarize, you don't need a Sonnet-level kind of model to summarize. You do need a Sonnet-level model to execute things like the agent. And we will be having a subscription for features like that, because, you know... although now with the R1, we'll see, we haven't evaluated it. DeepSeek? Yeah. I mean, not that one in particular, but, you know, there are already ones that can kind of perform at that level. I was like, we'll see in six months, but yeah. So self-hosted models help in the places where you can use them. So you are self-hosting models. Yes. You are fine-tuning your own ASR. Yes. I will say that I see in the future that everything's trending down. Although I think there might be an intermediary step where things become expensive, which is... we're really interested, because the pipeline is very tedious and takes a lot of tuning, right? Which is brutal, because it's just a lot of trial and error. Whereas, well, wouldn't it be nice if an end-to-end model could just do all of this and learn it? If we could do transcription with an LLM, there are so many advantages to that, but it's going to be a larger model and hence more compute, you know, we're optim...
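For readers who want to see the shape of the pipeline Ethan describes, here is a minimal, illustrative sketch of a VAD-gated audio pipeline: a cheap voice activity detector runs over everything, and only voiced audio reaches the much more expensive transcription and summarization steps. The transcribe_chunk and summarize functions are hypothetical placeholders for a self-hosted ASR model and a small summarization LLM; none of this is taken from the team's actual implementation.

```python
# Illustrative sketch only: gate expensive ASR behind cheap voice activity
# detection (VAD), as discussed above. transcribe_chunk() and summarize()
# are hypothetical stand-ins for a self-hosted speech-to-text model and a
# small summarization LLM.
import webrtcvad

SAMPLE_RATE = 16_000                       # 16 kHz, 16-bit mono PCM
FRAME_MS = 30                              # webrtcvad accepts 10/20/30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2

vad = webrtcvad.Vad(2)                     # aggressiveness 0-3; cheap enough to run all day

def voiced_audio(pcm: bytes) -> bytes:
    """Keep only frames the VAD marks as speech; most of a day is silence."""
    kept = bytearray()
    for i in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        frame = pcm[i:i + FRAME_BYTES]
        if vad.is_speech(frame, SAMPLE_RATE):
            kept.extend(frame)
    return bytes(kept)

def transcribe_chunk(pcm: bytes) -> str:
    """Placeholder for a (self-hosted) ASR model."""
    return "<transcript of voiced audio>"

def summarize(text: str) -> str:
    """Placeholder for a small summarization model; no frontier model needed here."""
    return "<summary>"

def process_recording(pcm: bytes) -> str:
    speech = voiced_audio(pcm)             # step 1: cheap filter over raw audio
    if not speech:
        return ""                          # nothing to transcribe, nothing to pay for
    transcript = transcribe_chunk(speech)  # step 2: ASR only on voiced audio
    return summarize(transcript)           # step 3: small model writes the summary
```

The point of the sketch is the ordering: the cheapest check runs over the most data, and each later stage sees strictly less input, which is how the per-user token and compute bill stays small.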
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Reuben Bailon, an expert in AI training and technology innovation. Together, they explore the rapidly evolving field of AI, touching on topics like large language models, the promise and limits of general artificial intelligence, the integration of AI into industries, and the future of work in a world increasingly shaped by intelligent systems. They also discuss decentralization, the potential for personalized AI tools, and the societal shifts likely to emerge from these transformations. For more insights and to connect with Reuben, check out his LinkedIn. Check out this GPT we trained on the conversation! Timestamps: 00:00 Introduction to the Crazy Wisdom Podcast; 00:12 Exploring AI Training Methods; 00:54 Evaluating AI Intelligence; 02:04 The Future of Large Action Models; 02:37 AI in Financial Decisions and Crypto; 07:03 AI's Role in Eliminating Monotonous Work; 09:42 Impact of AI on Bureaucracies and Businesses; 16:56 AI in Management and Individual Contribution; 23:11 The Future of Work with AI; 25:22 Exploring Equity in Startups; 26:00 AI's Role in Equity and Investment; 28:22 The Future of Data Ownership; 29:28 Decentralized Web and Blockchain; 34:22 AI's Impact on Industries; 41:12 Personal AI and Customization; 46:59 Concluding Thoughts on AI and AGI. Key Insights: The Current State of AI Training and Intelligence: Reuben Bailon emphasized that while large language models are a breakthrough in AI technology, they do not represent general artificial intelligence (AGI). AGI will require the convergence of various types of intelligence, such as vision, sensory input, and probabilistic reasoning, which are still under development. Current AI efforts focus more on building domain-specific competencies rather than generalized intelligence. AI as an Augmentative Tool: The discussion highlighted that AI is primarily being developed to augment human intelligence rather than replace it. Whether through improving productivity in monotonous tasks or enabling greater precision in areas like medical imaging, AI's role is to empower individuals and organizations by enhancing existing processes and uncovering new efficiencies. The Role of Large Action Models: Large action models represent an exciting frontier in AI, moving beyond planning and recommendations to executing tasks autonomously, with human authorization. This capability holds potential to revolutionize industries by handling complex workflows end-to-end, drastically reducing manual intervention. The Future of Personal AI Assistants: Personal AI tools have the potential to act as highly capable assistants by leveraging vast amounts of contextual and personal data. However, the technology is in its early stages, and significant progress is needed to make these assistants truly seamless and impactful in day-to-day tasks like managing schedules, filling out forms, or making informed recommendations. Decentralization and Data Ownership: Reuben highlighted the importance of a decentralized web where individuals retain ownership of their data, as opposed to the centralized platforms that dominate today. This shift could empower users, reduce reliance on large tech companies, and unlock new opportunities for personalized and secure interactions online. Impact on Work and Productivity: AI is set to reshape the workforce by automating repetitive tasks, freeing up time for more creative and fulfilling work. 
The rise of AI-augmented roles could lead to smaller, more efficient teams in businesses, while creating new opportunities for freelancers and independent contractors to thrive in a liquid labor market.Challenges and Opportunities in Industry Disruption: Certain industries, like software, which are less regulated, are likely to experience rapid transformation due to AI. However, heavily regulated sectors, such as legal and finance, may take longer to adapt. The discussion also touched on how startups and agile companies can pressure larger organizations to adopt AI-driven solutions, ultimately redefining competitive landscapes.
In this episode, John explores the rapid advancement of AI technology, its potential benefits, and the importance of human interaction in learning. He emphasizes how AI can serve as a valuable tool for singers and voice teachers, providing personalized coaching, practice routines, and real-time feedback, while also acknowledging the limitations of AI compared to human guidance. Episode highlights: AI serves as an extension of our cognitive abilities. Ask AI to explain complex concepts in simple terms. AI is a valuable tool, but not a replacement for teachers. To learn more about John Henny, his best-selling books, on-line courses, Voiceschool.com featuring his Teaching Team of Experts, Speaker Training and the Contemporary Voice Teacher Academy, visit: JohnHenny.com
We're experimenting and would love to hear from you!In this episode of Discover Daily, we explore groundbreaking developments in AI and energy sectors that are reshaping our technological landscape. OpenAI's dramatic shift towards superintelligence development, following their recent governance crisis and controversial move to a for-profit structure, signals a new chapter in artificial intelligence. CEO Sam Altman's vision of AI agents joining the workforce by 2025 presents both opportunities and challenges for the future of work.The U.S. government's historic $840 million nuclear power contract with Constellation marks a significant step towards sustainable energy solutions for AI operations. This landmark deal, providing carbon-free electricity to federal agencies, aligns with the Biden administration's ambitious goal to triple nuclear energy capacity by 2050, addressing the growing energy demands of AI technologies and data centers.The spotlight turns to Nvidia's revolutionary Project Digits, a $3,000 personal AI supercomputer that promises to democratize access to advanced AI computing. This compact powerhouse, featuring the GB10 Grace Blackwell Superchip, can handle AI models with up to 200 billion parameters and delivers petaflop-level performance. The device represents a significant milestone in making enterprise-level AI capabilities accessible to individual researchers, developers, and students, potentially accelerating innovation across various fields.From Perplexity's Discover Feed: https://www.perplexity.ai/page/sam-altman-on-ai-superintellig-iwrC9AOiRzWRXh1biDIBjghttps://www.perplexity.ai/page/u-s-buys-nuclear-power-fJy30shqQ9WWRHDTHoGwGQhttps://www.perplexity.ai/page/nvidia-s-personal-ai-supercomp-P6XoMgE8SxGbkXv.jjsFdgPerplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram Threads X (Twitter) YouTube Linkedin
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
A Daily Chronicle of AI Innovations on January 07th 2025Listen to this daily AI News episode at https://podcasts.apple.com/ca/podcast/ai-unraveled-latest-ai-news-trends-chatgpt-gemini-gen/id1684415169
Join us as we uncover the transformative potential of artificial intelligence in business with Mike Liu, founder and CEO of FreeFuse, and Alex Londo, founder and CEO of Klarissa AI. Mike and Alex share their expertise on how AI goes beyond mere buzzwords to become a crucial element in business strategy, offering innovative solutions for franchise owners and small business operators. They reveal how their companies are at the forefront of pioneering AI technologies that enhance operational efficiency and idea generation, ultimately moving AI from a conceptual tool to a game-changing solution for businesses. We spotlight the power of strategic partnerships and how they can maximize business opportunities. Through personal experiences and case studies, we explore the collaborative efforts between Clarissa AI and FreeFuse that create complementary solutions, driving franchise rollouts, standardized training, and personalized customer journeys. This episode underscores the importance of focusing on specific challenges through strategic alliances, enhancing expertise and delivering effective solutions while simplifying franchise training and ensuring compliance with new regulations. As we discuss the evolving role of AI in customer interactions, we explore its impact on reducing call volumes and improving customer service. The episode touches on the ethical considerations of AI communication, the integration of AI in everyday business operations, and the potential future of personal AI assistants. Drawing parallels with the adoption of online payment methods, we reflect on the growing acceptance of AI and its potential to revolutionize customer service, business advancement, and even personal everyday tasks, ensuring listeners are equipped to harness AI's full potential in their endeavors. TIMESTAMPS: (00:01) Leveraging AI in Business (11:37) Building Opportunities Through Strategic Partnerships (17:31) Maximizing AI in Franchise Training (27:05) Streamlining Customer Support With AI (37:16) Improving Customer Communication With AI (42:52) Harnessing AI for Business Advancement (49:15) The Future of Personal AI (54:00) The Future of Personal AI Companions (01:04:09) Connecting Through Technology and Networking Connect With Mike & FreeFuse here: https://www.linkedin.com/in/mike-liu-4wd/ https://freefuse.com Connect With Alex & Klarissa.AI here: https://www.linkedin.com/in/aklondo/ https://klarissa.ai Join the FREE Path To Freedom Facebook Group here: https://www.facebook.com/groups/1634819733719715/ 7 Steps to Owning a Franchise: https://path2frdm-1.hubspotpagebuilder.com/path-to-freedom-about-franchising If you would like to learn more about this particular franchise opportunity or discuss franchise ownership in general - feel free to use the link to my calendar below to schedule a free, no-obligation introductory meeting. https://calendly.com/wes-barefoot/introcallwithwes Connect with Wes: Instagram: https://www.instagram.com/path2frdm/ Facebook: https://www.facebook.com/path2frdm Linkedin: https://www.linkedin.com/in/wesleybarefoot/ #AI #FreeFuse #KlarissaAi
Send us a text. ChatGPT might be a scrape of the internet, but Personal AI is something entirely different—AI that's personally yours. Join us as we meet Suman Kanuganti, CEO of Personal AI, a visionary on a mission to empower individuals by creating AI extensions of their memory. Get ready to explore how this groundbreaking innovation can transform how we remember, connect, and thrive in a digital world. Let's get started! #ArtificialIntelligence #PersonalAI #AIInnovation #MemoryExtension #EmpowermentThroughAI #SumanKanuganti #TechTalk #FutureOfAI #PodcastingAI #AIForEveryone 01:35 Meet Suman Kanuganti 05:57 Starting Aira, addressing needs of the blind 11:41 Bigger dreams - what would Larry do? 16:27 ChatGPT… ok we've said it 17:57 Introducing Personal.ai 26:34 Using Personal.ai 31:23 Innovative use cases 33:57 Now it gets crazy 38:43 It's FREE… to start 42:02 Keeping your data safe 44:41 Predicting the future of AI 48:15 The scary part 51:12 For fun LinkedIn: linkedin.com/in/kanugantisuman Website: https://www.personal.ai/ Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
This week, Don and Josh examine three stories from the world of computers. First, they discuss the seedy underside of Pokemon Go. Next, what happens when one little robot goes all Norma Rae and convinces other robots to go on strike? Finally, they get religious and talk about a church in Switzerland that has its very own AI holographic Jesus.
In episode 1781, Jack and Miles are joined by comedian and host of Parenting Is A Joke, Ophira Eisenberg, to discuss… Personal…AI…Jesus, In Addition to Jesus... There's An AI Santa, I Think We Need To Start Treating ‘Doctors' That Oprah Platforms As Total Dumbf**ks and more! Personal…AI…Jesus In Addition to Jesus... There's An AI Santa Festive or creepy? Kids are now conversing with an AI-powered Santa over the phone I Think We Need To Start Treating ‘Doctors' That Oprah Platforms As Total Dumbf**ks LISTEN: Early Summer by Ryo FukuiSee omnystudio.com/listener for privacy information.
Bret and Nirmal Mehta are joined by Ken Collins to dig into using AI for more than coding, and if we can build an AI assistant that knows us.They touch on a lot of tools and platforms. "We're bit all over the place on this one, from talking about AI features in our favorite note taking apps like Notion, to my journey of making an open AI assistant with all of my Q&A from my courses, thousands of questions and answers, to coding agents and more." Ken is a local friend in Virginia Beach and was on the show last year talking about AWS Lambda, and we've both been trying to find value in all of these AI tools for our day to day work.Be sure to check out the live recording of the complete show from October 24, 2024 on YouTube (Stream 279).★Topics★The Lifestyle Copilot Blog PostServerless AI Inference with Gemma 2 Blog Post Creators & Guests Cristi Cotovan - Editor Beth Fisher - Producer Bret Fisher - Host Ken Collins - Guest Nirmal Mehta - Host (00:00) - Intro (01:26) - AI in Recruitment at Torc (03:25) - AI for Day to Day Workflows (04:44) - Notion AI and RAG (07:20) - Creating Your Own AI Search Solution (13:59) - Choosing the Right LLM for the Job (20:55) - Personal AI and Long Context Windows (25:10) - Future of Personal Fine-Tuned Models (25:52) - AI Assistants in Meetings (27:34) - Temperature and AI Hallucinations (32:07) - Agents and Tool Integration (39:31) - Apple Intelligence and Personal AI (44:56) - AI Apps on Mobile (50:00) - LoRA You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news!Grab the best coupons for my Docker and Kubernetes courses.Join my cloud native DevOps community on Discord.Grab some merch at Bret's Loot BoxHomepage bretfisher.com
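Ken's course Q&A assistant isn't spelled out in the episode notes, so purely as an illustration of the pattern he describes (index your own Q&A, retrieve the closest matches for a new question, and hand them to an LLM as context), here is a minimal retrieval sketch. The sample Q&A pairs, the TF-IDF retriever, and the prompt format are assumptions for demonstration, not Ken's actual implementation; a production version would typically swap the TF-IDF step for an embedding index.

```python
# Generic retrieval-augmented sketch: look up prior Q&A before asking an LLM.
# The data and the simple TF-IDF retriever are placeholders, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("How do I expose a container port?", "Use -p host:container with docker run."),
    ("What is a Docker volume?", "A managed directory for persisting container data."),
    # ...thousands more question/answer pairs harvested from the courses
]

questions = [q for q, _ in qa_pairs]
vectorizer = TfidfVectorizer().fit(questions)
question_matrix = vectorizer.transform(questions)

def retrieve(query: str, k: int = 3):
    """Return the k stored Q&A pairs most similar to a new question."""
    scores = cosine_similarity(vectorizer.transform([query]), question_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [qa_pairs[i] for i in top]

def build_prompt(query: str) -> str:
    """Pack the retrieved pairs into a prompt for whichever LLM you choose."""
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in retrieve(query))
    return f"Answer using only this prior Q&A:\n{context}\n\nNew question: {query}"

print(build_prompt("How can I map port 8080 to my container?"))
```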
Have you ever wondered what the future of entrepreneurship looks like in a world where artificial intelligence can take on the roles traditionally reserved for human employees? In this episode of The Angel Next Door Podcast, host Marcia Dawood sits down with AI expert Sharon Zhang to explore the transformative impact of AI on business and society. Sharon talks about the evolving landscape of AI and its potential to not only automate mundane tasks but also to foster new business models and opportunities for creative entrepreneurship.Sharon Zhang, who boasts over 16 years of experience in artificial intelligence, emerges as a compelling guest. She began her journey at the MIT CCL lab and has since ventured through various roles, from clinical decision-making at Nuance Communications to algorithm development for hedge funds. Since 2020, she has co-founded Personal AI, a platform that builds digital twins to augment individual lives. Sharon's extensive background provides a rich foundation for discussing AI's role in modern entrepreneurship.This episode is a must-listen as Sharon provides a comprehensive view of the AI ecosystem, breaking it down into essential components like AI chips, infrastructure, foundation models, and applications. She shines a light on data privacy and the significance of user sovereignty over personal data. Furthermore, Sharon shares insights on the financial challenges faced by AI startups, the strategic moves by industry giants like OpenAI and Microsoft, and the burgeoning field of AI agents capable of performing complex tasks. Whether you're an entrepreneur, an investor, or simply fascinated by AI, this episode offers a treasure trove of knowledge and foresight into the future of artificial intelligence and its profound implications. To get the latest from Sharon Zhang, you can follow her below!LinkedIn - https://www.linkedin.com/in/xiaoranz1986/ https://www.personal.ai/Use the code PODCAST50 for 50% of any personal plan for 30 days! Sign up for Marcia's newsletter to receive tips and the latest on Angel Investing!Website: www.marciadawood.comLearn more about the documentary Show Her the Money: www.showherthemoneymovie.comAnd don't forget to follow us wherever you are!Apple Podcasts: https://pod.link/1586445642.appleSpotify: https://pod.link/1586445642.spotifyLinkedIn: https://www.linkedin.com/company/angel-next-door-podcast/Instagram: https://www.instagram.com/theangelnextdoorpodcast/TikTok: https://www.tiktok.com/@marciadawood
Have you ever wanted a customized chatbot to answer your emails for you? Now, that might be possible! In this episode of the Justice Team Podcast, the host welcomes Alec Lowi from Personal AI to discuss the possibilities of creating personalized large language models and AI chatbots. They delve into the benefits and use cases, highlighting how AI can be tailored for different industries, especially legal, to provide quick, intelligent responses based on personal data.
In this episode of Growth Talks, Chiyong Jones, the VP of Brand Marketing at Kajabi, discusses his approach to building and evolving brand identities for startups and established companies alike. To build a brand identity that lasts, it's incredibly important to understand the customer, create a cohesive brand system that remains flexible, and balance creativity with constraints like budget and team size. Chiyong also explores the role of brand in long-term business success, the value of authenticity in influencer partnerships, and the importance of consistency in brand messaging.
In this enlightening episode, we welcome Eric Kramer, Head of Marketing at Personal AI. Eric delves into the revolutionary concept of creating digital twins - AI models trained on individual users' data to capture their personality, knowledge, and thinking patterns. He explains how Personal AI is addressing the growing demand for secure, personally owned AI assistants that can handle repetitive tasks, answer questions, and even be creative on behalf of users. Eric shares insights on their target market, customer acquisition strategies, and the importance of website optimization in today's digital landscape. This episode is a must-listen for anyone interested in the future of AI integration in personal and professional life.
In this episode of the Blu Alchemist Podcast, host Siquoyia Blue delves into the world of artificial intelligence, sharing her personal experience and insights. With a decade of experience in the tech industry, Siquoyia introduces her AI creation, the "Shay Chat Companion," available on the Chat GPT store. Shay, programmed in Siquoyia's likeness, provides practical assistance and a unique user perspective, offering meal prep plans, coding help, and content creation ideas. Discussing both the potential benefits and risks of AI, Siquoyia emphasizes the positive impact Shay has had on her life, especially in combatting the loneliness epidemic. She calls on listeners to explore and embrace the constructive uses of AI while maintaining a cautious approach to its more controversial aspects. Siquoyia also appeals for support from her audience to continue producing meaningful content. Link to download Shay Chat Companion: https://chatgpt.com/g/g-0xGPGsOUg-shay-chat-companion Blu Alchemist Podcast Info: Website: https://www.blualchemistpodcast.com Siquoyia Blue Website: https://www.siquoyiablue.com YouTube: @blualchemistpodcast Dating Assassins Card Game: https://www.datingassassins.com If you want to either be a guest on or find guests for your podcast, please sign up here: https://www.joinpodmatch.com/siquoyia Donate via Cashapp: @KingSiquoyia or Venmo: @KingShayThanks for listening! Subscribe, Share and Follow us! Affiliates: BETMGM: BetMGM Casino is the #1 Online Casino in America! Come experience the thrills of the casino floor anytime, anywhere with the King of Casinos! Play hundreds of your favorite slot games as well as poker, blackjack, roulette, and video poker. Our legal, real money casino games can be found right from your phone where you’ll experience the best of MGM’s premiere casinos. BetMGM Casino play for the largest jackpots with new winners being awarded every day. Catch the excitement of the casino floor with BetMGM Casino, all in a safe, secure app where you’ll have a blast. Download the BetMGM Casino app to get in on the action today! Download: https://www.pubtrack.co/DF981CS/3335GRZ/ BINGOCASH: Bingo Cash™ is where a true classic and real cash prizes meet. Jump into a world of brain-teasing fun and leave with your heart filled with nostalgia and pockets filled with rewards. After downloading the game for FREE, you can start playing regular or cash tournaments and win real Download IOS/iPhone: https://www.pubtrack.co/DF981CS/2L4NWPH/ Download Android: https://www.pubtrack.co/DF981CS/2L63RG4/ DOMAINMONEY: Do you want to know how much you should be spending, saving or if you can afford a home? The One Page Financial Plan provides a tactical guide for those starting out and wanting to optimize their financial resources. This is a 90-minute financial planning session with an actionable One Page Plan to clean-up your current finances and get you on track for your financial goals. Book a Consultation today: https://www.pubtrack.co/DF981CS/3CJLJMR/
After a long wait, Apple is finally in the game with AI. They're launching Apple Intelligence with MacOS Sequoia and iOS 18. Pete breaks down some top features and how our devices will change moving forward. Transcripts: https://www.theneuron.ai/podcast Subscribe to the best newsletter on AI: https://theneurondaily.com Listen to The Neuron: https://lnk.to/theneuron Watch The Neuron on YouTube: https://youtube.com/@theneuronai
Cade was stumped. After doing all the traditional work to get applications in, he saw that applicants to the First Class Dental Assisting School of Nashville were not converting. After doing some research, he realized speed was key. When he tried an immediate and personalized conversation with one student, they became the new dental school's very first enrollee. But how to do it at scale? In this episode Cade shares his ingenuity, connecting his CRM to ChatGPT via Zapier, training his GPT as an admission counselor for his school, and seeing the school's incoming class go from zero to full.Cade is the founder of Enroll Boost AI (enrollboostai.com). He packaged up this tactic in a way that can help your school, too.Guest Name: Cade Scott, founder of Enroll Boost AI, Director of Marketing & Enrollment for First Class Dental Assisting School of NashvilleGuest Social Handles: https://www.linkedin.com/in/cade-scott-1535362a2/https://enrollboostai.com/Guest Bio: Cade is the founder of Enroll Boost AI, working as Director of Marketing & Enrollment with First Class Dental Assisting School of Nashville, and comes to higher ed with over a decade of expertise in storytelling and marketing. - - - -Connect With Our Host:Dayana Kibildshttps://www.linkedin.com/in/dayanakibilds/About The Enrollify Podcast Network:Talking Tactics is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Some of our favorites include Mission Admissions and Higher Ed Pulse.Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com. Connect with Us at the Engage Summit:Exciting news — many of your favorite Enrollify creators will be at the 2024 Engage Summit in Raleigh, NC, on June 25 and 26, and we'd love to meet you there! Sessions will focus on cutting-edge AI applications that are reshaping student outreach, enhancing staff productivity, and offering deep insights into ROI. Use the discount code Enrollify50 at checkout, and you can register for just $200! Learn more and register at engage.element451.com — we can't wait to see you there!
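As a rough illustration of the tactic Cade describes (a new applicant in the CRM triggers a Zapier webhook, and an LLM prompted as an admissions counselor drafts an immediate, personalized reply), here is a minimal sketch. The endpoint path, payload fields, system prompt, and model name are hypothetical; this is not Enroll Boost AI's actual implementation.

```python
# Hypothetical sketch of a CRM-to-LLM handoff via a Zapier webhook.
# Field names, prompt text, and model choice are illustrative assumptions.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an admissions counselor for a dental assisting school. "
    "Write a warm, specific first reply that invites the applicant to book a call."
)

@app.post("/zapier/new-applicant")
def new_applicant():
    lead = request.get_json(force=True)  # payload forwarded by the Zapier step
    user_msg = (
        f"Applicant name: {lead.get('name')}\n"
        f"Program of interest: {lead.get('program')}\n"
        f"Their note: {lead.get('message', '(none)')}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
    )
    draft = completion.choices[0].message.content
    # Zapier can route this draft back into the CRM, an email step, or SMS.
    return jsonify({"reply": draft})

if __name__ == "__main__":
    app.run(port=5000)
```

The design point is speed: because the webhook fires the moment the application lands, the personalized reply goes out in seconds rather than whenever a human gets to the inbox.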
join wall-e on today's tech briefing for all the significant happenings this tuesday, may 14th, within the tech world. dive into concise summaries and insightful discussions on pivotal moments shaping the technology landscape: - leadership shift at aws: ceo adam selipsky steps down, welcoming matt garman as the new head. learn about aws's journey to a $100 billion milestone under selipsky and what garman's customer-first leadership could mean for its future. - openai's new era: discover the departure of chief scientist ilya sutskever and the introduction of jakub to lead. we discuss openai's ongoing mission and what this change in leadership entails for the advancement of agi. - google's personalized ai: unpacking the introduction of gems, personalized ai chatbots, and what it signifies for the future of user-ai interactions. - tesla's environmental scrutiny: explore the environmental lawsuit against tesla's california manufacturing plant and its implications for the balance between technology advancement and environmental responsibility. - google's ai watermark initiative: a look into google's new technology to authenticate ai-generated content, its impact on preventing misinformation, and ensuring the integrity of ai applications. tune into this episode for these stories and more, illuminating the latest in technology, ai advancements, corporate leadership changes, environmental challenges, and innovative solutions to maintain authenticity in the digital age
Summary David Johnston discusses the Morpheus project, which aims to build a decentralized AI platform. He explains that the project responds to impending regulations that could restrict AI development and control. Johnston emphasizes the importance of open source and decentralization in achieving freedom and avoiding regulatory capture by big tech companies. He highlights the progress made by Morpheus since its inception, including the development of a decentralized compute network and the introduction of the MOR-20 fair launch model. Johnston also discusses the potential applications of Morpheus beyond AI, such as recurring payments in Web 3. The conversation discusses the importance of coordination and open-source code in competing against nation-states. It highlights the ease of use and ethical considerations of the decentralized AI platform, Llama 3. The conversation also explores the concept of smart agents and their ability to act on users' behalf in the Web 3 ecosystem. The upcoming token launch of Morpheus on May 8th is discussed, including the liquidity distribution and starting price. Chapters 00:00 Introduction and Background 01:40 The Importance of Personal AI for Freedom 03:25 Navigating Regulatory Challenges 08:02 The Fair Launch Model 13:11 Expanding the Fair Launch Model 29:43 The User Experience of Decentralized AI 32:09 Decentralized AI and the Future of Web3 36:48 Making AI Accessible with Smart Agents 43:11 The Token Launch: May 8th and Beyond 50:12 Liquidity and Price Discovery on Uniswap v3 56:03 Creating a Sustainable Ecosystem for Morpheus Network Connect with David: X (Twitter): @DjohnstonEC | @MorpheusAIs Farcaster: @djohnston | /morpheusai LinkedIn: davidajohnston Website: www.davidajohnston.me | https://mor.org Learn more about Morpheus AI Morpheus is designed to incentivize the first peer-to-peer network of personal AIs, known as Smart Agents. Providing users open-source Smart Agents to connect to their wallets, Dapps, & smart contracts promises to open the world of Web3 to everyone. To learn more about ATX DAO: Check out the ATX DAO website Follow @ATXDAO on X (Twitter) Connect with us on LinkedIn Join the community in the ATX DAO Discord Connect with us on X (Twitter): Mason: @512mace Nick: @nickcasares Luke: @Luke152 Ash: @ashinthewild Willy: @willyogo Support the Podcast: If you enjoyed this episode, please leave us a review and share it with your network. Subscribe for more insights, interviews, and deep dives into the world of Web 3. Tools & Resources We Love Podcast Recording & Editing - Riverside FM: We use Riverside FM to record and edit our episodes. If you're interested in getting into podcasting or just recording remote videos, be sure to check them out! Keywords Morpheus, decentralized AI, open source, decentralization, regulations, More20 fair launch, compute network, recurring payments, Web3, coordination, open-source code, decentralized AI, Llama 3, smart agents, Web3, token launch, Morpheus, liquidity, starting price, hardware requirements, session-based payment
How close are we to realizing the dream of personal AI? Dive into the potential of small, efficient AI models that enhance our self-awareness and improve our lives without compromising our privacy. Could these advancements lead us to a future where we not only manage but truly own our data, understanding ourselves better than any corporation could? Join us as we explore the fascinating possibilities of personal AI. Links to check out Atomic Habits (Link: https://tinyurl.com/hjcajxc) Facebook's LLM transparency tool Hugging Face Demo (Link: https://tinyurl.com/4c9cmzhw) Clone yourself with Delphi A (Link: https://www.delphi.ai/) The Cold Start Problem: How to Start and Scale Network Effects (Link: https://tinyurl.com/bdzetyax) The Innovator's Dilemma: The Revolutionary Book That Will Change the Way You Do Business (Link: https://tinyurl.com/mt6ca37k) The Pile (Link: https://pile.eleuther.ai/) Host Links Guy on Nostr (Link: http://tinyurl.com/2xc96ney) Guy on X (Link: https://twitter.com/theguyswann) Guy on Instagram (Link: https://www.instagram.com/theguyswann/) Guy on TikTok (Link: https://www.tiktok.com/@theguyswann) Guy on YouTube (Link: https://www.youtube.com/@theguyswann) Bitcoin Audible on X (Link: https://twitter.com/BitcoinAudible) Check out our awesome sponsors! Get 10% off the COLDCARD with code BITCOINAUDIBLE (Link: bitcoinaudible.com/coldcard) Swan: The best way to buy, learn, and earn #Bitcoin (Link: https://swanbitcoin.com) "The reason why it is so difficult for existing firms to capitalize on disruptive innovations is that their processes and their business model that make them good at the existing business actually make them bad at competing for the disruption." ~ Clayton M Christenson
How close are we to realizing the dream of personal AI? Dive into the potential of small, efficient AI models that enhance our self-awareness and improve our lives without compromising our privacy. Could these advancements lead us to a future where we not only manage but truly own our data, understanding ourselves better than any corporation could? Join us as we explore the fascinating possibilities of personal AI. Links to check out Atomic Habits (Link: https://tinyurl.com/hjcajxc) Facebook's LLM transparency tool Hugging Face Demo (Link: https://tinyurl.com/4c9cmzhw) Clone yourself with Delphi A (Link: https://www.delphi.ai/) The Cold Start Problem: How to Start and Scale Network Effects (Link: https://tinyurl.com/bdzetyax) The Innovator's Dilemma: The Revolutionary Book That Will Change the Way You Do Business (Link: https://tinyurl.com/mt6ca37k) The Pile (Link: https://pile.eleuther.ai/) Host Links Guy on Nostr (Link: http://tinyurl.com/2xc96ney) Guy on X (Link: https://twitter.com/theguyswann) Guy on Instagram (Link: https://www.instagram.com/theguyswann) Guy on TikTok (Link: https://www.tiktok.com/@theguyswann) Guy on YouTube (Link: https://www.youtube.com/@theguyswann) Bitcoin Audible on X (Link: https://twitter.com/BitcoinAudible) The Guy Swann Network Broadcast Room on Keet (Link: https://tinyurl.com/3na6v839) Check out our awesome sponsors! Fold: The best way to buy, use, and earn #Bitcoin on everything you do! Sats back on your debit card, gift cards, auto-buys, round-ups, you name it. Fold is the true bitcoiner's banking. Get 20K sats for FREE using referral code bitcoinaudible.com/fold Ready for best-in-class self custody? Get the Jade here and use discount code 'GUY' to get 10% off (Link: bitcoinaudible.com/jade) Trying to BUY BITCOIN? River, secure, trusted, bitcoin only, lightning enabled, simple. (Link: https://bitcoinaudible.com/river) Bitcoin Games! Get 10% off the best Bitcoin board game in the world, HODLUP! Or any of the other great games from the Free Market Kids! Use code GUY10 at checkout for 10% off your cart! (Link: https://www.freemarketkids.com/collections/games-1) Bitcoin Custodial Multisig Want to get into Bitcoin but not ready for self custody? Use custodial multisig for the best way to distribute trust across multiple institutions and even jurisdictions! Check out OnRamp. (Link: BitcoinAudible.com/onramp)
In this episode, we discuss the need to always challenge assumptions, Jason Fried's thoughts on SaaS, Personal AI agents, The UK's 4-day work week, Microsoft's unbundling of Teams and Office, Mitch Spano's blog post on selective unit testing, Shopify poaching salesforce folks, the possibility of Apple getting into robots, Johnny Ive and Sam Altman cooking up a “personal AI device”, and the ever-aging Gmail.
FULL SHOW NOTES https://podcast.nz365guy.com/541 Imagine your daily grind transformed by artificial intelligence—think bespoke walking tours and streamlined business operations. That's what we're unpacking, along with our esteemed guests, Andrew and Anna, as we delve into the burgeoning AI landscape of 2024. We're peeling back the layers on how businesses are tossing aside hesitation and jumping headfirst into the world of AI, with Cloud Lighthouse leading the charge in guiding leaders from mere IT management to becoming strategic visionaries. And for a personal touch, you won't want to miss the tale of my unforgettable day in Melbourne, all thanks to an AI assistant that knew just what I needed.But it's not all smooth sailing on the digital sea; we navigate through the choppy waters of copyright controversy, pondering if tech giants owe a hat tip—or more—to the original creators whose content trains their AI. This episode isn't just about embracing the future; it's a critical look at the present, discussing how technologies like Microsoft's Co-Pilot push the boundaries of citation practices and user engagement. Join us as we explore the intricate dance of math, ethics, and law that make up the ever-evolving AI domain, ensuring you're informed and ready for what's on the horizon.AgileXRM AgileXRm - The integrated BPM for Microsoft Power Platform 90 Day Mentoring Challenge April 1st 2024https://ako.nz365guy.comUse the code PODCAST at checkout for a 10% discount Support the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor! Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough "weekend special" content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy! AI Breakdown: The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape, and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents. Thursday Nights in AI: We're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A. Dylan Patel on Groq: We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here). Not all of it could be released, so we just talked about our Groq estimates. Milind Naphade - Capital One: In relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered: * Milind's learnings from ~25 years in machine learning * His first paper citation was 24 years ago * Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis * Thoughts on relevant AI research * GTC takeaways and what makes NVIDIA special. If you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team. Personal AI Meetup: It all started with a meme: within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within. Timestamps: * [00:01:13] AI Breakdown Part 1 * [00:02:20] Four Wars * [00:13:45] Sora * [00:15:12] Suno * [00:16:34] The GPT-4 Class Landscape * [00:17:03] Data War: Reddit x Google * [00:21:53] Gemini 1.5 vs Claude 3 * [00:26:58] AI Breakdown Part 2 * [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4 * [00:31:11] Open Source Models - Mistral, Grok * [00:34:13] Apple MM1 * [00:37:33] Meta's $800b AI rebrand * [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents * [00:47:28] Adept episode - Screen Multimodality * [00:48:54] Top Model Research from January Recap * [00:53:08] AI Wearables * [00:57:26] Groq vs Nvidia month - GPU Chip War * [01:00:31] Disagreements * [01:02:08] Summer 2024 Predictions * [01:04:18] Thursday Nights in AI - swyx * [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show * [01:34:58] Groq. Transcript: [00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI. 
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, ThursdAI, and ChinaTalk, all of which you can find in the Latent Space About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the Four Wars framework and the AI engineer scene. We love AI Breakdown as one of the best examples of daily podcasts to keep up on AI news, so we were especially excited to be back on. Watch out and take[00:01:12] NLW: care.[00:01:13] AI Breakdown Part 1[00:01:13] NLW: Today on the AI Breakdown, part one of my conversation with Alessio and Swyx from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check-ins like monthly about all the things that are going on, and then, of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this Inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, Inflection is one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the Anthropics and OpenAIs of the world in terms of labs, but it's a company that raised $1.3 billion last year, less than a year ago. Reid Hoffman's a co-founder, Mustafa Suleyman, who's a co-founder of DeepMind, you know, so it's like, this is not a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark, brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So Inflection is GPU rich by startup standards, I think about 22,000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in Pi, their own model, their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So, and we'll talk about GPT 4 and Claude 3 and all this stuff. GPU poor, doing something that the GPU rich are not interested in, you know. We just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not AGI, it's just translation. So I think like the Inflection part is maybe a call and an awakening to a lot of startups to then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what Inflection, I don't, I don't, again, I don't know the reasons behind the Inflection choice, but if you say, I don't want to build my own company that has $1.3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my, my take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday.
It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have, and you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and, like, most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in the favor of, like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, all of these things worked out very well.[00:07:19] swyx: Even Inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of, a bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU poors, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the Coheres and Jaspers of the world that like this sort of, you know, that are, that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that the vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason, there's plenty of that, you know, people who are saying, you know, look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story.
Same thing happened last summer, when every, every outlet jumped on the "ChatGPT had its first down month" story to try to really like kind of hammer this idea that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent, sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build-your-own-chatbot platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, there are still, like, I don't know, a half dozen or 10 or something, like, some ridiculous number of, like, basically things like Midjourney or DALL-E 3. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerns.[00:10:33] NLW: So, I don't know. I, I, I think that there will be inevitable consolidation 'cause, you know, it's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as, as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I, I would say the experimentation surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, a lot of people use Cursor, everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursor, like they evolved beyond single line to like chat, to do multi-line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely, it's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know?
And I think the creative mind is always looking for something that impacts the viewer in a different way, you know, like they really want something different, versus the developer mind, it's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have these like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that, if you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, DALL-E 3 inside of sort of OpenAI's larger models versus, you know, a Midjourney or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, for most of the last, call it six months or whatever, it feels pretty definitively both-and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought, you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both-and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to DALL-E 3. And DALL-E 3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT-4 Vision and GPT 4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, the balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno AI, the sort of music model company, and, you know, I don't see OpenAI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with.
Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if just musicians were excited about Suno and using it, but what you're seeing is tons of people who just like music all of a sudden, like, playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves Midjourney for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, large multi-modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the DALL-E 3 blog posts, because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as DALL-E 3. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape, 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with, with sort of the quality data or sort of the RAG and Ops wars, just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about RAG for the Gemini and Claude discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into an AI data company. Also, very, very interestingly, a non-exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the WebText dataset that originally started for GPT 1, 2, and 3 was actually scraped from, from Reddit, at least the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And
you know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like data source companies, but that's my two cents.[00:18:17] Alessio: Yeah, yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data, we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a "pay here" Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they tried to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them. The, the data part sounds, as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know, there's more questions around data, you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research, but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally look within it, there's great kind of points on all sides, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that Llama 2, you know, the most high performance version of it, which was one they didn't release, was trained on synthetic data. So maybe it's good.
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's a good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data, and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT-4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened since we last talked were, one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini, and, and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking, which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT 4. Claude 2 was unusable. So I used GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, so, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on Twitter about it, my only experience of this "much better" has been on the podcast use case. But I know that, you know, Karan from, from Nous Research is a very big pro-Opus person. So, I think that's also, it's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good.
It's interesting.[00:23:55] NLW: My, my perception is definitely that just, just observationally, Claude 3 is certainly the first thing that I've seen where lots of people, they're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use ChatGPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a, a, like, full, you know, sort of full knock. Let's put it this way. I think it's less bad for OpenAI than it is good for Anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know, get overly critical of, of OpenAI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to, to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year, people were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that, like, the MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just try it for yourself and, like, see which one works best for you.[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have an episode with Adept coming out this weekend, and in some of their model releases, they specifically say, we do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I, I would say like, it does take the wind out of the sails for GPT 5, which I know we're, you know, curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys.
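(As an illustration of the side-by-side testing Alessio describes above, here is a minimal sketch, assuming the official openai and anthropic Python SDKs, API keys set in the environment, and a hypothetical placeholder transcript and model names; it is not the actual tool he built, and model identifiers may need adjusting.)

```python
# Minimal sketch: run the same summarization prompt through GPT-4 and Claude 3 Opus
# and print the outputs side by side. Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY
# are set in the environment; the transcript text is a hypothetical placeholder.
from openai import OpenAI
import anthropic

TRANSCRIPT = "...a chunk of podcast transcript to summarize..."  # placeholder text
PROMPT = f"Summarize this podcast excerpt in three bullet points:\n\n{TRANSCRIPT}"

def gpt4_summary(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model name; swap for whatever is current
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def claude_summary(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-opus-20240229",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    print("=== GPT-4 ===")
    print(gpt4_summary(PROMPT))
    print("=== Claude 3 Opus ===")
    print(claude_summary(PROMPT))
```

The point of a harness like this is exactly what the hosts describe: skip the benchmark debate and eyeball the two outputs on your own recurring task.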
And so that puts an insane amount of pressure on what GPT 5 is going to be, because it's just going to have, like, the only option it has now, because all the other models are multimodal, all the other models are long context, all the other models have perfect recall. GPT 5 has to match everything and do more to, to not be a flop.[00:26:58] AI Breakdown Part 2[00:26:58] NLW: Hello friends, back again with part two. If you haven't heard part one of this conversation, I suggest you go check it out, but to be honest they are kind of actually separable. In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know, is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Fridman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but, you know, they've had so long to work on this, like unless we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4.5, as I think a lot of people reported, and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Fridman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point reading too many tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it.
Like, calm down, let's just build.[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two agent, two, I think two agent projects, right? One desktop agent and one sort of more general, yeah, sort of GPTs-like agent, and then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know, but I think like, this is, you know, we're not going to get to the end of the year without Jupyter, you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google increasing the pace, you know, like is the, is Claude 4 coming out like in 12 months, like nine months? What's the, what's the deal? Same with Gemini. They went from like 1 to 1.5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There, there are a lot of speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that, that's the best, that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed the landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world, where Claude and Gemini are legitimate challengers to GPT 4, and hopefully more will emerge as well, hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So, speaking of, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them, like, the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves, that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there, there's Mistral, there's, I guess, Grok now, which is, you know, Grok one is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought you thought you meant Groq the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq the chip company, I think, is even more interesting in some ways, but, and then there's the, you know, obviously Llama 3 is the one that sort of everyone's wondering about too. And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta, you know, keeps, keeps the open source throne, you know, vis-a-vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah.
From what I heard in the hallways at, at GTC, Llama 3, the, the biggest model will be, you know, 260 to 300 billion parameters, so that, that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameters model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running Llama on, on their laptop, it's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we had Soumith Chintala on the podcast; they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a shot at Nvidia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. I love the Zuck destroying a lot of monopolies arc. You know, it's, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the, I added this as one of, as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think, potentially, are being read as a shift vis-a-vis the relationship with OpenAI, which also the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and, and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti-interestness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor-y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Gemma. It's going to be smarter autocomplete.
I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out, because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey, you just have to have a good local model on my phone to be worth me purchasing your device. And that would kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but it makes for great meme material on, on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything, so if, if, if there was a company that could do that, you know, seriously challenge the other AI players, it would be Apple. And it's, I don't think it's as hard as self-driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump in on a factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock, I'm trying to look up the details now. The stock has gone up 187% since Llama 1. Yeah. Which is $830 billion in market value created in the past year. Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't, yeah.
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's, it, it is an interesting, no, it's, I, I think, Alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see him fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see that fight in the Colosseum. No, I think, you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know, he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself, in terms of personal growth and, and, you know, perseverance through, like, a lot of change and, you know, everyone throwing stuff his way, I think there's a lot to say about, like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into, being where you are and what you work on, is, you know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed into startups, but also, you know, just in terms of like what people are spending their nights and weekends on, what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about AI agent stuff, right? AutoGPT and BabyAGI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about AutoGPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc.
So AutoGPT, I think they went from zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it, I think it's like, amazing to get people's imagination going. You know, they're like, oh, wow, this, this is awesome.[00:41:08] Alessio: Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, in our episode, he specifically said, we don't want to do a bottom-up product. You know, we don't want something that everybody can just use and try, because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that like, people, nobody just wakes up and says, Oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, it's just not what inspires people. So I think the gap on the developer side has been, the more bottoms-up hacker mentality is trying to build these, like, very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, Hey, if I want to raise my next round, I cannot just, like, sit around and mess, mess around with, like, super generic stuff. I need to find a use case that really works. And I think that that is true for, for a lot of folks. In parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a restrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where, from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just like really fragmented because the production use cases are just coming, like, now, you know, there are not a lot of long established ones to, to test against. And so, that's kind of on the virtual agents, and then the robotics side is probably the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there, that were just like robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots.
NVIDIA did a big push on their own Omniverse thing, which is, like, this digital twin of whatever environment you're in that you can use to train the robot agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I give a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, let's, let's cover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and, you know, that this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think all, you know, the point I'll make here is just, the reason AutoGPT and BabyAGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level, and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do, that I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do. And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what Alessio said there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer is to look out, look for use cases and, and go to production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months.
And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like, those are the ways that people are actually adopting AI. In a lot of cases it's just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things, and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that, because multimodal models are now commonplace, like Claude, Gemini, OpenAI, all very, very easily multimodal, Apple's easily multimodal, all this stuff, there is a switch for agents for sort of general desktop browsing that I think people [unclear].[00:48:04] swyx: A version of the, the agent where they're not specifically taking in text or anything. They're just watching your screen just like someone else would, and, and I'm piloting it by vision. And, you know, in the, the episode with David that we'll have dropped by the time that this, this airs, I think, I think that is the promise of Adept, and that is the promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system, like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think it'll take a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah, so I'll take the next two as like as one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the, let's just say, the decoder-only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can read, you can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So for transformers, the, the two leading candidates are effectively RWKV and the state space models, the most recent one of which is Mamba, but there's others like StripedHyena, and the S4 and H3 stuff coming out of Hazy Research at Stanford.
And all of those are non-quadratic language models that promise to scale a lot better than the traditional transformer. [00:49:47] swyx: This might be too theoretical for most people right now, but it's gonna come out in weird ways. Imagine: right now the talk of the town is that Claude and Gemini have a million tokens of context, and like, whoa, you can put in, you know, two hours of video now, okay. But what if we could throw in, you know, two hundred thousand hours of video? [00:50:09] swyx: Like, how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and, like, synthesize new drugs? Well, how does that change things? We don't know, because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models. [00:50:28] swyx: They're not there yet, but we're seeing very, very good progress. RWKV and Mamba are probably the two leading examples, both of which are open source, so you can try them today, and there's a lot of progress there. And the main thing I'll highlight for RWKV is that at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size, for the same amount of training, as an open source model. [00:50:51] swyx: So that's exciting. You know, they're 7B now, they're not at 70B, so we don't know if it'll hold at scale. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture. [00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on. [00:51:25] swyx: But there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be, like, the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo, [00:51:45] swyx: all of these are very, very interesting innovations on the original idea of what Stable Diffusion was. So if you think that it is expensive or slow to create Stable Diffusion or AI-generated art, you are not up to date with the latest models. If you think it is hard to create text and images, you are not up to date with the latest models. [00:52:02] swyx: And people still are kind of far behind. The last piece is the wildcard I always kind of hold out, which is text diffusion. So instead of using autoregressive transformers, can you use diffusion for text? You can use diffusion models to create entire chunks of text all at once instead of token by token. [00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into it.
So all those things are very exciting new model architectures that are maybe something you'll see in production two to three years from now. [00:52:37] swyx: So a couple of the trends [00:52:38] NLW: that I want to just get your takes on, because they seem like they're coming up, are, one, sort of these wearable, you know, kind of passive AI experiences, where they're absorbing a lot of what's going on around you and then kind of bringing things back. [00:52:53] NLW: And then the other one that I wanted to see if you guys had thoughts on was sort of this next generation of chip companies. Obviously there's a huge amount of emphasis on hardware and silicon and different ways of doing things, but, y...
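The open-weight alternatives name-checked in the conversation above (RWKV, Mamba) really can be tried today. The snippet below is a minimal sketch, not something from the episode, assuming a recent Hugging Face transformers release with built-in RWKV support; the checkpoint names are assumptions, so check the Hub for current ones.

```python
# Minimal sketch: sampling from an open-weight, non-quadratic language model
# via Hugging Face transformers. Assumes a transformers version with RWKV
# support; the checkpoint name below is an assumption, not an endorsement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RWKV/rwkv-4-169m-pile"  # assumed small RWKV checkpoint; a Mamba
# checkpoint (e.g. "state-spaces/mamba-130m-hf") could be swapped in the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Non-quadratic language models matter because"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation looks identical to a decoder-only transformer; the difference is
# internal, where attention is replaced by a recurrent / state-space mechanism.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is simply that these alternative architectures plug into the same tooling as standard transformers, which is what makes the "you can try them today" claim credible.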
Medtech Insight articles addressing topics discussed in this episode: "In Five Years, People Will Navigate Their Health Care With An AI-Advisor – Verily's Andrew Trister"; "Verily's Andrew Trister On Uniting The Pieces To Create Personalized Health, Equity"; "European Parliament Adopts AI Act, But Success Hangs On The AI Office"; "AI Clinical Summarization Tools Sit In 'Regulatory Grey Area', Experts Warn".
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Plus President Joe Biden talks AI at the State of the Union. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/
In this episode of the Fundraising AI Podcast, join our hosts, Nathan and Scott, in a captivating discussion as they unravel the multifaceted relationship between humanity and artificial intelligence (AI). In this engaging conversation, they traverse various themes surrounding the utilization of AI tools, exploring both their practical applications and ethical implications. Throughout the dialogue, Nathan and Scott explore the spectrum of AI tools, from the novelty of emerging technologies to the integration of established platforms like OpenAI and Co-Pilot into daily workflows. They share personal experiences and insights into how different AI tools cater to specific needs, whether it's for productivity, research, or creative endeavors. The conversation also delves into the mindset shift required to fully leverage AI's potential while ensuring responsible usage. They discuss the importance of understanding the limitations and risks associated with AI, highlighting the need for organizations to foster a culture of experimentation and learning. As the discussion unfolds, Nathan and Scott reflect on the balance between efficiency and authenticity in AI-driven interactions. From personalized AI-generated content to the enduring value of face-to-face communication, they explore how AI intersects with human emotions and experiences. Ethical considerations surrounding AI usage are also examined, focusing on the imperative of promoting beneficial and responsible AI practices. Nathan and Scott emphasize the need for transparency, accountability, and informed decision-making in harnessing the power of AI for societal benefit. In closing, they contemplate the evolving role of AI in content creation and personalization, pondering its implications for digital engagement and human connection. Episode Highlights [03:45] Arc Browser: A Revolutionary Tool or a Pandora's Box? [08:07] OpenAI, Perplexity, and Co-Pilot in Focus [10:12] Overcoming Mental Blocks and Embracing the AI Mindset [12:40] Shift from Google to Generative AI [14:01] Overcoming Resistance and Enhancing Efficiency [20:22] The Importance of Embracing AI in Organizations [21:33] Era of Beneficial and Responsible AI: Welcoming Innovation with Caution [23:31] Leveraging AI for Personalization Amidst Digital Overload [25:35] The Heartfelt Blend of Digital and Personal: AI in Human Connections Resources: Connect with Nathan and Scott: LinkedIn (Nathan): linkedin.com/in/nathanchappell/ LinkedIn (Scott): linkedin.com/in/scott-rosenkrans Website: fundraising.ai/
Like this? Subscribe to our newsletter at https://thinkfuture.com --- Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us --- In a world increasingly governed by AI, from Wendy's exploring surge pricing based on demand to sophisticated pricing algorithms shaping our every purchase, the balance of power seems tilted in favor of corporations. However, the future holds a promising solution: the development of personal AIs. Unlike the AIs tethered to big tech companies, personal AIs operate independently on our devices, advocating solely for our interests. Imagine walking into a store equipped with AR goggles, greeted by a virtual salesperson powered by AI. This AI, having sifted through decades of your digital footprint, subtly nudges you towards products resonating with deep-seated memories, like a couch reminiscent of your childhood. Here, your personal AI steps in, providing context and guarding against manipulative tactics by reminding you of the AI's strategy. Beyond just shopping, personal AIs could negotiate on our behalf, ensuring we receive the best deals by countering the sophisticated algorithms employed by businesses. This vision for personal AIs represents not just an advancement in technology, but a shift towards a more equitable dynamic between consumers and corporations. It's a future where technology serves as a guardian of our preferences and rights, ensuring that in the battle of algorithms, individuals have a champion in their corner. --- Send in a voice message: https://podcasters.spotify.com/pod/show/thinkfuture/message Support this podcast: https://podcasters.spotify.com/pod/show/thinkfuture/support
In this episode, David and Gary speak with the Founder of Personal.ai, Suman Kanuganti. They discuss how you can harness the power of AI to prioritize humanity, not lose it. With the power of conviction and a little of "what would Larry do," Suman shares his path to success, how AI has transformed his life, and how he has transformed AI. Links: https://s.personal.ai ___________________________________ Submit Your Questions to: hello@thebigpixel.net OR comment on our YouTube videos! - Big Pixel, LLC - YouTube Our Hosts: David Baxter - CEO of Big Pixel Gary Voigt - Creative Director at Big Pixel The Podcast: David Baxter has been designing, building, and advising startups and businesses for over ten years. His passion, knowledge, and brutal honesty have helped dozens of companies get their start. In Biz/Dev, David and award-winning Creative Director Gary Voigt talk about current events and how they affect the world of startups, entrepreneurship, software development, and culture. Contact Us: hello@thebigpixel.net 919-275-0646 www.thebigpixel.net FB | IG | LI | TW | TT : @bigpixelNC Big Pixel 1772 Heritage Center Dr Suite 201 Wake Forest, NC 27587 Music by: BLXRR
Let's break down barriers between today's potential and tomorrow's adoption. Personal AI adoption is through the roof while at the same time organizations are struggling with developing an implementation strategy. Understanding the gap between individual and institutional adoption of AI has become crucial. This session is designed not just to enlighten but to offer practical solutions and frameworks for bridging that gap and achieving effective organizational AI adoption. Our guest, Amanda Bickerstaff, an esteemed figure in AI education, will guide us through this journey. As a pioneer in the field, Amanda has made significant strides in responsible AI adoption within educational systems, advocating for AI literacy and ethical usage. Her insights will not only shed light on the challenges but also highlight the immense potential AI holds for transforming organizations and educational institutions alike. In this webinar, we'll explore the nuances of AI adoption at the grass-roots level and how it contrasts sharply with institutional hesitance. Through Amanda's expert lens, we'll discuss: ↳ importance of change management in embracing AI ↳ developing strategic frameworks for safe and efficient use ↳ innovating within the realms of AI technology to stay ahead in this dynamic landscape. Whether you're a C-suite executive, a business leader, or an educational pioneer, this webinar will equip you with the knowledge and tools to navigate the AI era confidently. If you are ready to leap from knowledge to action and learn from a leader who's transforming the landscape, listen to this. About Leveraging AI The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Free AI Consultation: https://multiplai.ai/book-a-call/ If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
In this episode, Kyle interviews Suman Kanuganti, co-founder and CEO of Personal.ai. Suman discusses his journey from an engineer with a robotics background to launching innovative ventures like Aira, assisting visually impaired individuals, and Personal AI, aimed at extending cognitive abilities through AI. The conversation explores the technology behind Personal.ai, its unique approach to personalizing user experiences, and the importance of data privacy. Suman also shares his insights on entrepreneurship in the AI space and the ethical considerations of AI development. Suman Kanuganti: A two-time venture-backed entrepreneur, AI enthusiast, and accessibility champion awarded the Forbes 40 Under 40 and Smithsonian's Top Innovator to Watch awards, Suman Kanuganti is the Co-Founder and Chief Executive Officer at Personal.ai. Links from the Show: LinkedIn: LinkedIn | Books: Sapiens | Links: s.personal.ai | TV: Painkiller, Altered Carbon. More by Kyle: Follow Prodity on Twitter and TikTok | Follow Kyle on Twitter and TikTok | Sign up for the Prodity Newsletter for more updates | Kyle's writing on Medium | Prodity on Medium. Like our podcast? Consider Buying Us a Coffee or supporting us on Patreon.
On-device AI is coming to all of our devices. But do we need it? From the potential benefits of increased productivity and seamless integration with our devices to concerns about privacy, resource consumption, and the lack of a "kill switch," we explore the untold side of on-device AI that no one's talking about. Join us as we uncover the hidden truths of this technological trend, discuss the impact on companies like Microsoft and Apple, and anticipate the release of Llama 2 and its intriguing offline integration. Stay with us to be in the know about generative AI and its far-reaching implications in our everyday lives. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode page. Join the discussion: Ask Jordan and Nancy questions on GPTs. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Timestamps: 00:00 Welcome to the show! 03:12 Daily AI News 06:54 Trend towards bringing generative AI to devices. 10:50 Speculation on Apple's offline/on-device AI plans. 13:59 Personal AI reduces latency, making tasks instant. 16:25 On-device AI enhances productivity, seamless integration. 20:28 GPT-3 technology requires significant resources and testing. 24:20 On-device AI downsides include lack of control. 27:06 Speech describes potential impact of AI technology. 32:36 Reduced cloud prompting by 90% using on-device AI. 33:31 Enjoyed learning about on-device AI, share ideas. Topics Covered in This Episode: 1. Concerns about implications of on-device AI 2. Anticipation around Llama 2 release 3. Speculations on Apple's generative AI project 4. Benefits and drawbacks of on-device AI 5. Integration of large language models with smart assistants 6. New AI-powered hardware devices 7. Challenges and drawbacks of on-device AI 8. Purpose of podcast and recent AI news. Keywords: On-device AI, Kill switch, User consent, Opt-out option, Accessibility, Environmental impact, AI programs, AI technologies, Microsoft, NVIDIA, Qualcomm, Meta, Apple, Llama 2, Offline integration, Market adoption rates, Generative AI, Cloud-based AI, Large language models, Smart assistants, Alexa, Google Home, Rabbit r1, Meta Ray-Ban glasses, Resource consumption, Battery life, Skynet factor, Podcast, Livestream, ChatGPT, Typeface. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
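To make the "on-device" idea concrete, here is a minimal sketch of fully offline inference using the llama-cpp-python bindings. The model path is a placeholder for any locally downloaded quantized Llama-family file; nothing here is prescribed by the episode.

```python
# Minimal sketch of local, offline LLM inference with llama-cpp-python.
# The model file path below is a placeholder; download a quantized model first.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # assumed local weights
    n_ctx=2048,    # modest context window to fit laptop-class memory
    n_threads=4,   # CPU-only inference; no network round trip is involved
)

response = llm(
    "Q: What are the trade-offs of running an LLM on-device instead of in the cloud?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(response["choices"][0]["text"].strip())
```

Because everything runs on local hardware, the episode's themes of latency, battery life, and the missing "kill switch" follow directly: there is no server in the loop to add delay, but also none to throttle or revoke.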
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
We're looking back on 2023 and sharing a handful of our favorite conversations. Last year was full of insightful conversations that shaped the way we think about the most innovative movements in the AI space. Want to hear more? Check out the full episodes here: What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever | How AI can help small businesses with Former Square CEO Alyssa Henry | Will Everyone Have a Personal AI? with Mustafa Suleyman, Founder of DeepMind and Inflection | How will AI bring us the future of medicine? with Daphne Koller from Insitro | The case for AI optimism with Reid Hoffman from Inflection AI | Your AI Friends Have Awoken, with Noam Shazeer | Mistral 7B and the Open Source Revolution with Arthur Mensch, CEO Mistral AI | The Computing Platform Underlying AI with Jensen Huang, Founder and CEO NVIDIA. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @reidhoffman | @alyssahhenry | @ilyasut | @mustafasuleyman | @DaphneKoller | @arthurmensch | @MrJensenHuang Show Notes: (0:00) Introduction (0:27) Ilya Sutskever on the governance structure of OpenAI (3:11) Alyssa Henry on how AI can help small business owners (5:25) Mustafa Suleyman on defining intelligence (8:53) Reid Hoffman's advice for co-working with AI (11:47) Daphne Koller on probabilistic graphical models (13:15) Noam Shazeer on the possibilities of LLMs (14:27) Arthur Mensch on keeping AI open (17:19) Jensen Huang on how Nvidia decides what to work on
In this episode, the host, Stewart Alsop, takes a deep dive with Suman Kanuganti, the co-founder and CEO of personal.ai. They discuss the concepts of Personal Language Models (PLM), artificial intelligence, and the UX of AI systems. Suman presents the idea of creating personalized AI for every individual to preserve their memory and enable people to interact with it, thus introducing a new way to retain and access human knowledge. They also touch upon the impacts of AI on the way history will be told and remembered in the future. We built a Custom GPT trained on this episode for you to interact with. Try this prompt with it: What are Suman's thoughts on the evolution of personal AI? Key Insights Introduction to Personal AI: Discussion on the concept of Personal AI, an AI version of oneself that leverages personal data, facts, and opinions to create interactive experiences for others. Origin of Personal AI: Suman Kanuganti shares his inspiration behind Personal AI, motivated by the loss of his mentor, Larry Bok, and the desire to maintain intellectual and emotional connections. Nature of Personal AI: Unlike current Large Language Models (LLMs), Personal AI focuses on individual memory, including context, location, and personal history. It grows over time, assimilating various aspects of an individual's life. Explicit vs. Implicit Memory: Exploration of the concept of explicit memory control in Personal AI, ensuring intellectual integrity, and distinguishing it from implicit inference. Personal AI's User Experience: Discussion on the evolution of the user experience in Personal AI, highlighting how it started before the hype of current LLMs like GPT-3. Integration with LLMs: Insight into how Personal AI integrates with general LLMs to enhance user experience and fill gaps in personal knowledge. Challenges and Solutions in Personal AI Development: Suman Kanuganti discusses overcoming challenges related to UX design, data training, and market adaptation. Future of AI Interaction: Speculation on the future modalities of interacting with AI, including voice, virtual avatars, and potential integration into the metaverse and synthetic biology. Vision and Persistence in Developing Personal AI: Suman Kanuganti reflects on the journey of developing Personal AI, emphasizing the importance of vision and overcoming resistance and challenges in the tech industry. Historical Implications of Personal AI: A closing thought on how Personal AI could change the way history is recorded and told, with individual memory shaping personal narratives.
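Personal.ai's actual stack is not described in implementation detail here, but the "explicit personal memory layered on a general LLM" idea discussed in the episode can be illustrated with a toy retrieval pattern. The sketch below is a generic illustration only: the keyword-overlap retriever stands in for real embeddings, and every stored "memory" is an invented placeholder, not real user data or Personal.ai's architecture.

```python
# Generic sketch (not Personal.ai's implementation): retrieve a user's explicit
# memories and ground a general-purpose LLM prompt in them, so answers reflect
# stored personal facts rather than whatever the base model happens to infer.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str    # an explicit fact or opinion the user chose to store
    source: str  # provenance, supporting the "intellectual integrity" idea

MEMORIES = [  # invented placeholder data
    Memory("I prefer asynchronous written updates over live status meetings.", "journal 2023-05"),
    Memory("My mentor taught me to start from the customer's problem.", "notes 2019-11"),
    Memory("I am drafting a talk on accessible AI products.", "journal 2024-01"),
]

def retrieve(query: str, memories: list[Memory], k: int = 2) -> list[Memory]:
    """Rank memories by naive word overlap with the query (embedding stand-in)."""
    q = set(query.lower().split())
    ranked = sorted(memories, key=lambda m: len(q & set(m.text.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Compose a prompt that grounds a general LLM in explicit personal memory."""
    facts = "\n".join(f"- {m.text} (source: {m.source})" for m in retrieve(query, MEMORIES))
    return f"Answer as this specific person, using only these memories:\n{facts}\n\nQuestion: {query}"

print(build_prompt("What did my mentor teach me about building products?"))
```

The design point this toy version illustrates is the explicit-versus-implicit distinction from the conversation: only facts the user deliberately stored enter the prompt, rather than traits the model silently infers.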
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, we explore the potential future where personal AI assistants are a common part of daily life, discussing the technological advancements needed and the implications for society. We examine how these AI systems could transform our personal and professional lives, the ethical considerations involved, and the challenges in making this a reality for everyone. Invest in AI Box: https://republic.com/ai-box
This podcast features a discussion with Seba Tut, an expert practitioner and facilitator of Aguajara. They explore the concept of Aguajara, a therapeutic experience in water. Seba shares his view on its spiritual, practical, and therapeutic aspects, and how it fosters surrender, intuition, and adaptability. He also discusses the connections between Aguajara and broader cultural and technological trends, integrating perspectives from ancient wisdom, water therapies, and the challenges of modern life. The podcast also covers Seba's plans for sharing Aguajara through a round trip journey across Latin America and potentially further afield. If you want to find out what happened in this episode, we trained this ChatGPT for you 00:04 Introduction and Guest Presentation 00:22 Understanding Aguajara: A Deep Dive 02:46 The Origins and Evolution of Aguajara 06:35 The Role of Water in Aguajara 06:51 The Birth of Ajara and Its Connection to Water 14:18 Exploring the Practice of Aguajara 27:14 The Healing Power of Aguajara 35:18 The Aquatic Consciousness in Aguajara 36:40 Discussing Life's Struggles and the Aquatic Revolution 38:25 The Power of Empathy and the Impact of the Psychedelic Movement 38:47 The Emergence of Holistic Views and the Power of Communication 39:54 Exploring the Concept of Connected Minds and Astrology 40:30 The Transition from Pisces to Aquarius Era 41:42 The Power of Words and the Impact of AI 43:29 The Potential of AI and the Age of Magic 53:11 Exploring the Concept of Personal AI 57:04 The Influence of Hollywood and the Perception of AI 59:23 The Power of Touch and the Healing Potential of Water 01:03:35 The Future of Aguajara and the Power of Community Key Points Definition of Aguajara: Seba Tut described Aguajara as difficult to define succinctly. It is seen as both a territory and an exploration, an aquatic experience that cannot be fully captured in a simple definition. It invites curiosity and personal experience to truly understand what it is. Origin and Influences: Aguajara is influenced by and considered a descendant of older practices like Watsu (water shiatsu) and Water Dance. These were developed independently in California and Switzerland, respectively. Aguajara also has roots in Jansu, another aquatic therapy developed in an Osho ashram in India. The Role of Water: Water is central to Aguajara, not just as a medium but as an element that influences the practice. The therapy often takes place in warm water, similar to body temperature, but can also be conducted in cooler waters. The water facilitates relaxation and movement. Connection to Other Practices: Aguajara draws from various traditions and practices, including tantric explorations, shiatsu massage, and dance. It is seen as a combination of these elements, adapted to an aquatic environment. Development and Evolution: Aguajara evolved through experimentation and collaboration among practitioners, who brought their learnings and understandings from other therapies and movement practices. Philosophical and Spiritual Aspects: The discussion touched on broader philosophical and spiritual themes, such as the mystery of water and the mind, the concept of surrender, and the interplay of intuition and intention. Practical Aspects of Aguajara Sessions: Sessions typically involve entering the water, surrendering to the experience, and allowing the facilitator to guide movements. The experience can be therapeutic and involves elements of touch, movement, and a deep connection with water. 
The Cultural and Social Context: The conversation also delved into the broader cultural and social implications of practices like Aguajara, discussing aspects like community, empathy, and the human connection with nature.
IN THIS EPISODE, YOU'LL LEARN: Why is Jeff training an AI agent to personify himself? How could such an agent potentially be used in the future? What does this mean for overall productivity for people that have the means to train such an agent? What other things might happen as a result of this AI growth? What is the difference between the way Jeff is training his AI versus the way Preston is training his? What method will win in the long haul for training AI agents? Is there any concern people should have about providing the data for these agents? How does Bitcoin enter into this equation? Why are Jeff and Preston so interested in the FinCEN proposal that was recently released for comment? Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES Jeff's VC Firm Ego Death Capital. Jeff Booth's Twitter. Jeff's book, The Price of Tomorrow. NEW TO THE SHOW? Check out our We Study Billionaires Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: River, Efani, Babbel, AlphaSense, Vanta, American Express Business Gold Card, Alto, Salesforce, NetSuite, Ka'Chava, Wise, Toyota, Shopify.
In this interview with A.I. specialist Mr. Suman Kanuganti, we delve deep into the accomplishments of a distinguished two-time venture-backed entrepreneur with a profound passion for Artificial Intelligence. Honored with the Forbes 40 Under 40 and Smithsonian's Top Innovator to Watch awards, Mr. Kanuganti stands as a beacon in the tech world. As the Co-Founder and Chief Executive Officer at Personal.ai, he has continually pushed boundaries. Notably, through his groundbreaking venture Aira, he championed the use of AI and AR technology to assist the blind and low-vision community. Now, with Personal AI, he envisions a future where every individual is empowered to amplify their intelligence with an AI-backed extension of their memory. Join us in this enlightening conversation to uncover the vision and journey of this technological pioneer. Please like and subscribe to stay updated with pioneering figures in technology. #AISpecialist #TechPioneer #PersonalAI ===================================================================== Follow Suman: Linkedin: https://www.linkedin.com/in/kanugantisuman/ Website: https://www.personal.ai/ ================================================================= Aaron Armstrong Links Facebook: https://www.facebook.com/aaron.armstrong.58 Instagram: https://www.instagram.com/aaronarmstrong33/ Linkedin: https://www.linkedin.com/in/aaronarmstrong33/ Website: https://www.aaronjarmstrong.com/ ====================================================================== Affiliate Links Deals: Magic Mind - DISCOUNT CODE (WINNERS20) :https://magicmind.com/pages/5reasonswhy? ==================================================================== The Venture Project Links Follow us on Social: Facebook: / theventureprojectoshkosh Instagram: https://www.instagram.com/the_venture_project/?utm_medium=copy_link Linkedin: https://www.linkedin.com/company/71261525/admin/feed/posts/ Tiktok: https://www.tiktok.com/@theventureprojectoshkosh Websites: - https://www.theventureprojectoshkosh.com/
This Week in Startups is brought to you by… Codecademy. Build the future you want to see with Codecademy. Codecademy Pro helps you learn everything you'll need to shape what comes next in the tech space. Try it free for 14 days. Visit Codecademy.com/TWiST. .Tech Domains has a new program called startups.tech, where you can get your startup featured on This Week in Startups. Go to startups.tech/jason to find out how! Squarespace. Turn your idea into a new website! Go to Squarespace.com/TWIST for a free trial. When you're ready to launch, use offer code TWIST to save 10% off your first purchase of a website or domain. * Today's show: Rewind CEO Dan Siroker joins Jason to discuss the new Rewind Pendant AI device that can record conversations (1:08), debate privacy and ethical concerns around covert recording (9:32), and much more! * Time stamps: (0:00) Jason kicks off the show (1:08) Dan breaks down the Rewind Pendant (8:05) Codecademy - Try Codecademy Pro FREE for 14 days at http://codecademy.com/TWiST (9:32) Privacy and ethical concerns around covert recording (20:42) .Tech Domains - Apply to get your startup featured on This Week in Startups at https://startups.tech/jason (21:47) Comparisons to products like Google Glass and Humane (24:15) Design considerations and the best use cases for Rewind Pendant (30:09) Squarespace - Use offer code TWIST to save 10% off your first purchase of a website or domain at https://Squarespace.com/twist (31:32) The need for thoughtful development of personal AI and the value of open feedback * Follow Dan: https://twitter.com/dsiroker * Read LAUNCH Fund 4 Deal Memo: https://www.launch.co/four Apply for Funding: https://www.launch.co/apply Buy ANGEL: https://www.angelthebook.com Great recent interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland, PrayingForExits, Jenny Lefcourt Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow Jason: Twitter: https://twitter.com/jason Instagram: https://www.instagram.com/jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Follow TWiST: Substack: https://twistartups.substack.com Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin * Subscribe to the Founder University Podcast: https://www.founder.university/podcast
Suman Kanuganti is married with a 5-year-old daughter, and he greatly enjoys family life. He loves robots, as a robotics major, but describes himself as a human-oriented person, one who wants to elevate the experience of people. Outside of tech, he enjoys playing volleyball and is a foodie with his wife. They enjoy yellowtail sashimi and modern Mexican fusion, but at home they enjoy remixing cuisines with different spices. In his previous company, Suman built solutions around augmenting the human experience for blind people. Moving forward, he wanted to expand this to allow people to use AI to create long-term memory, personally, for themselves and their loved ones. This is the creation story of Personal AI. Sponsors: Cipherstash, Treblle, CAST AI, Firefly, Turso, Memberstack. Links: Website: http://www.personal.ai/ LinkedIn: https://www.linkedin.com/in/kanugantisuman/ Support this podcast at https://redcircle.com/code-story/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
As we round out this enlightening three-part series with Brian Roemmele, we venture into the exciting future of AI. How can companies harness the transformative power of AI without jeopardizing jobs? We discuss the revolutionary concept of the "Personal AI" company, an innovative vision where a human's wisdom, personality, and essence can be synthesized through voice modeling, based on answers to a 1000-prompt questionnaire. Imagine a future where your knowledge, experiences, and insights could be immortalized, accessible to generations to come. Brian's groundbreaking perspective offers a glimpse into a future where technology and humanity converge in profound and lasting ways. ---------- Want to write and publish a book in 30 days? Go to JamesAltucherShow.com/writing to join James' writing intensive! What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience! Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air! ------------ Visit Notepd.com to read our idea lists & sign up to create your own! My new book Skip the Line is out! Make sure you get a copy wherever books are sold! Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President. I write about all my podcasts! Check out the full post and learn what I learned at jamesaltucher.com/podcast. ------------ Thank you so much for listening! If you like this episode, please rate, review, and subscribe to “The James Altucher Show” wherever you get your podcasts: Apple Podcasts, Stitcher, iHeart Radio, Spotify. Follow me on Social Media: YouTube, Twitter, Facebook.
According to NBC, Jesus has returned in an online incarnation. Yes, you too can log on and talk to a cyber savior, a disembodied vision of a white man who offers counsel on things ranging from the serious to the silly. Setting aside how this likely violates the Second Commandment, this stunt typifies a central problem with contemporary religious thinking: recreating Jesus in our image. A programmed Christ built of nothing but the disparate thoughts of what we'd like him to be is literally an idol. If “god” is just our understanding of Him, there is not really a “Him” at all, but only our own projections. It's kind of like the old SNL skit of Stuart Smalley affirming himself in the mirror. Seeking salvation from an AI chatbot is only a more technologically advanced version of picking and choosing the parts of the Bible we want to believe. But salvation can never be found within.