Podcasts about Betaworks

  • 83 podcasts
  • 112 episodes
  • 45m average duration
  • 1 new episode monthly
  • Latest episode: Apr 21, 2025
Betaworks


Best podcasts about Betaworks

Latest podcast episodes about Betaworks

The Daily Crunch – Spoken Edition
Bluesky feed builder Graze raises $1M, rolls out ads

Apr 21, 2025 · 5:34


Graze, a startup that lets people build and monetize custom feeds for Bluesky's social network, has attracted new capital. Pre-seed investors, led by Betaworks and Salesforce Ventures, have invested $1 million in the company's small team, which is working to give users control over their algorithms and social media experiences.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Bundle tickets for the AIE Summit NYC have now sold out. You can still sign up for the livestream, where we will be making a big announcement soon. NYC-based readers and Summit attendees should check out the meetups happening around the Summit.

2024 was a very challenging year for AI hardware. After the buzz of CES last January, 2024 was marked by the meteoric rise and even harder fall of AI wearables companies like Rabbit and Humane, with an assist from a pre-wallpaper-app MKBHD. Even Friend.com, the first to launch in the AI pendant category (which spurred Rewind AI to rebrand to Limitless and follow in their footsteps), ended up delaying their wearable ship date and launching an experimental website chatbot version. We have been cautiously excited about this category, keeping tabs on most of the top entrants, including Omi and Compass.

However, to date the biggest winner still standing from the AI wearable wars is Bee AI, founded by today's guests Maria and Ethan. Bee is an always-on hardware device with beamforming microphones, 7-day battery life, and a mute button, wearable as a wristwatch or a clip-on pin, backed by an incredible transcription, diarization, and very-long-context memory processing pipeline that helps you remember your day and your todos, and can even perform actions by operating a virtual cloud phone. This is one of the most advanced, production-ready personal AI agents we've ever seen, so we were excited to be their first podcast appearance. We met Bee when we ran the world's first Personal AI meetup in April last year.

As a user of Bee (and not an investor! just a friend!), it has genuinely been a joy to use, and we were glad to take the opportunity to ask hard questions about the privacy and legal/ethical side of things as much as the AI and hardware engineering side of Bee.
We hope you enjoy the episode, and tune in next Friday for Bee's first conference talk: Building Perfect Memory.

Show Notes

* Bee Website
* Ethan Sutin, Maria de Lourdes Zollo
* Bee @ Personal AI Meetup
* Buy Bee with Listener Discount Code!

Timestamps

* 00:00:00 Introductions and overview of Bee Computer
* 00:01:58 Personal context and use cases for Bee
* 00:03:02 Origin story of Bee and the founders' background
* 00:06:56 Evolution from app to hardware device
* 00:09:54 Short-term value proposition for users
* 00:12:17 Demo of Bee's functionality
* 00:17:54 Hardware form factor considerations
* 00:22:22 Privacy concerns and legal considerations
* 00:30:57 User adoption and reactions to wearing Bee
* 00:35:56 CES experience and hardware manufacturing challenges
* 00:41:40 Software pipeline and inference costs
* 00:53:38 Technical challenges in real-time processing
* 00:57:46 Memory and personal context modeling
* 01:02:45 Social aspects and agent-to-agent interactions
* 01:04:34 Location sharing and personal data exchange
* 01:05:11 Personality analysis capabilities
* 01:06:29 Hiring and future of always-on AI

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of SmallAI.

swyx [00:00:12]: Hey, and today we are very honored to have in the studio Maria and Ethan from Bee.

Maria [00:00:16]: Hi, thank you for having us.

swyx [00:00:20]: And you are, I think, the first hardware founders we've had on the podcast. I've been looking to have a wearable hardware founder on for a while. I think we're going to have two or three of them this year. And you're the ones that I wear every day. So thank you for making Bee. Thank you for all the feedback and the usage. Yeah, you know, I've been a big fan. You are the speaker gift for the Engineering World's Fair. And let's start from the beginning.
What is Bee Computer?

Ethan [00:00:52]: Bee Computer is a personal AI system. So you can think of it as AI living alongside you in first person. So it can kind of capture your in real life, and with that understanding it can help you in significant ways. You know, the obvious one is memory, but that's really just the base use case. So recalling and reflecting. I know, Swyx, that you like the idea of journaling, but you don't, but you still have some kind of reflective summary of what you experienced in real life. But it's also about just having the whole context of a human being: giving the machine the ability to understand what's going on in your life, your attitudes, your desires, specifics about your preferences, so that not only can it help you with recall, but then anything that you need it to do, it already knows. Like, if you think about somebody who you've worked with or lived with for a long time, they just know, without having to ask you, what you would want. It's clear that that is the future of personal AI: the AI is just so much more valuable with personal context.

Maria [00:01:58]: I will say that one of the things that we are really passionate about is really understanding this personal context, because it will make the AI more useful. Think about a best friend that knows you so well. That's one of the things that we are seeing from the users. They're using it from a companion standpoint or for professional use cases. There are many ways to use Bee, but companionship and professional are the ones that we are seeing more now.

swyx [00:02:22]: Yeah. It feels so dry to talk about use cases. Yeah. Yeah.

Maria [00:02:26]: It's like, really, an investor question. Like, what kind of use case?

Ethan [00:02:28]: We're just like, we've been so broken and trained.
But I mean, on the base case, it's just like, don't you want your AI to know everything you've said and everywhere you've been? Like, wouldn't you want that?

Maria [00:02:40]: Yeah. And you don't have to stay there and repeat every time, like, oh, this is what I like. It already knows that, and it does things for me based on that. That, I think, is really cool.

swyx [00:02:50]: Great. Do you want to jump into a demo? Do you have any other questions?

Alessio [00:02:54]: I want to maybe just cover the origin story. Just how did you two meet? Was this the first idea you started working on? Was there something else before?

Maria [00:03:02]: I can start. So Ethan and I, we have known each other for six years now. He had a company called Squad, and before that it was called Olabot and was a personal AI. Yeah, maybe you should start this one. But yeah, that's how I know Ethan. He was pivoting from personal AI to Squad, which was a co-watching-with-friends product. I had experience working with TikTok and video content, so I helped with the pivoting, and we launched Squad, and it was really successful. And at the end, the founders decided to sell it to Twitter, now X. So both of us joined X. We launched Twitter Spaces. We launched many other products. And yeah, since then we basically continued to work together, up to the start of Bee.

Ethan [00:03:46]: The interesting thing is, this isn't the first attempt at personal AI. In 2016, when I started my first company, it started out as a personal AI company. This is before Transformers, no BERT even, just RNNs. You couldn't really do any convincing dialogue at all. I met Esther, who was my previous co-founder. We were both really interested in the idea of having a machine model or understand a dynamic human. We wanted to make personal AI. Because we had obviously much more limited tools, this was geared more towards younger people.
So I don't know if you remember, in 2016 there was a brief chatbot boom. It was way premature, but it was when Zuckerberg went up on F8 with M and the Messenger platform, and people were like, oh, bots are going to replace apps. That lasted about six months, and then everybody realized, man, these things are terrible and they're not replacing apps. But it was at that time that we got excited, and we tried to make this "teach the AI about you" thing. So it was just an app that you chatted with, and it would ask you questions and then give you some feedback.

Maria [00:04:53]: But Hugging Face's first version was launched at the same time. Yeah, we started it.

Ethan [00:04:56]: We started out in the same office as Hugging Face, because Betaworks was our investor. Betaworks is a really cool VC because they invest in out-there things; they're way ahead of everybody else. Back then they had something called Bot Camp. They took six companies, and it was us and Hugging Face. And then I think the other four, I'm pretty sure, are dead. Hugging Face was the one that really made it. You know, a 30% success rate is pretty good. Back then it was just the two founders. They were kind of an AI company in the beginning; it was a chat app for teenagers. A lot of people don't know that Hugging Face was like, hey, friend, how was school? Let's trade selfies. But then they built the Transformers library, I believe, to help them make their chat app better. And then they open sourced it and it blew up, and they're like, oh, maybe this is the opportunity. And now they're Hugging Face. But anyway, we were obsessed with it at that time. But then it was clear that there are some people who really love chatting and answering questions.
But it's a lot of work just to kind of manually...

Maria [00:06:00]: Yeah.

Ethan [00:06:01]: ...teach all these things about you to an AI.

Maria [00:06:04]: Yeah, there were some people that were super passionate, for example, teenagers. They really like, for example, to speak about themselves a lot. So they would reply to a lot of questions and speak about themselves. But most people don't really want to spend the time.

Ethan [00:06:18]: And, you know, it's hard to really deliver the value with it. We had sentence similarity and stuff we could try, but it was premature with the technology at the time. And so we pivoted. We went to YC, and it's a long story, but we pivoted to consumer video, and that went really viral and got a lot of usage quickly. And then we ended up selling it to Twitter, worked there, and left before Elon. Not related to Elon, but we left Twitter.

swyx [00:06:46]: And I should mention, this is the famous time when Elon had just come in. Esther was the famous product manager who slept there.

Ethan [00:06:56]: My co-founder, my former co-founder. She had the sleeping bag. Yeah, yeah, she stayed. We had left by that point.

swyx [00:07:03]: She very much stayed. She's famous for staying.

Ethan [00:07:06]: Yeah, but she later left, or I think got laid off. I think the whole product team got laid off. She was a product manager, director. But yeah, we left before that. And then we were like, oh my God, things are different now. We really started working on this again right before ChatGPT came out. But we had an app version, and we were trying different things around it. And then, ultimately, it was clear that there were some limitations. A good question to ask any wearable company is: why isn't this an app? Yes. Yeah.
Because...

Maria [00:07:40]: Because we tried the app at the beginning.

Ethan [00:07:43]: Yeah. The idea is that it could be more ambient (Bee comes from ambient). If it was more just around you all the time, and less about you having to go open the app and make the effort to enter in data, that led us down the path of hardware. Because the sensors on this are microphones, so it's capturing and understanding audio. We actually started our first hardware with a vision component too, and we can talk about why we're not doing that right now. But if you wanted to have a continuous understanding of audio with your phone, it would monopolize your microphone, it would get interrupted by calls, and you'd have to remember to turn it on. And that little bit of friction is actually a substantial barrier. It's about the experience of it just being with you all the time and living alongside you. And so I think that's the key reason it's not an app. And in fact, we do have Apple Watch support. Anybody who has an Apple Watch can use it right away without buying any hardware, because we worked really hard to make a version for the watch that can run in the background and not drain your battery too much. But even with the watch there's still friction, because you have to remember to turn it on, and it still gets interrupted if somebody calls you. We send a notification, but you still have to go back and turn it on, because that's just the way watchOS works.

Maria [00:09:04]: One of the things that we are seeing from our Apple Watch users (and I love the Apple Watch integration) is that people start using it from the Apple Watch, and after a couple of days they buy the Bee, because they just like to wear it.

Ethan [00:09:17]: Yeah, we're seeing that.

Maria [00:09:18]: That's something that they're learning, and it's really cool.
Yeah.

Ethan [00:09:21]: I mean, I think fundamentally we like to think that the personal AI is the mission, and it's more about the understanding: connecting the dots, making use of the data to provide some value. The hardware is like the ears of the AI; it's really about integrating the incoming sensor data, and that's what we focus on. If we can do it well and have a great experience on the Apple Watch, that's just great. But there are some platform restrictions on existing hardware that make it hard to provide that experience. Yeah.

Alessio [00:09:54]: What do people do in two or three days that then convinces them to buy the product? This feels like a product where, after you use it for a while, you have enough data to start to get a lot of insights. But it sounds like maybe there's also a short-term payoff.

Maria [00:10:07]: For the Apple Watch users, I believe it's because every time you receive a call, you need to go back to Bee and open it again. Or, for example, every day you need to charge the Apple Watch, which reminds you to open the app every day. People feel like, okay, maybe this is too much work. I just want to wear the Bee and keep it open, and that's it, and I don't need to think about it.

Ethan [00:10:27]: I think they see the potential of it just from the watch. Because even if you wear it for a day, we send a summary notification at the end of the day about the key things that happened to you. And I didn't even think... I'm not a journaling type of person, because, like, I just lived the day, why do I need to think about it? But sometimes I'm surprised how interesting it is to me just to be like, oh yeah, that happened, and how it fits together. And I think that's something people get immediately with the watch.
But they're like, oh, I'd like an easier way. I'd like a better way to do this.

swyx [00:10:58]: It's surprising, because I only knew about the hardware, but I use the watch as a backup for when I don't have the hardware. I feel like, because now you're beamforming and all that, this is significantly better.

Ethan [00:11:11]: Yeah, that's the other thing. We have way more control over our hardware than over the Apple Watch. On the watch you're limited: you can't set the gain, you can't change the sample rate. There's just very limited framework support for doing anything with audio. Whereas if you control the hardware, then you can optimize it for your use case. The Apple Watch isn't meant to be recording like this. And we can talk, when we get to the part about audio, about why it's so hard. This is audio on the hardest level, because it has to work in all environments, or you try to make it work as best as it can. This environment is great: we're in a studio. But afterwards, at dinner in a restaurant, it's a totally different audio environment, and there are a lot of challenges with that. Having really good source audio helps, but there's a lot more machine learning that still has to be done to account for it, because you can tune something for one environment or another, but that makes one good and one bad. Making something that's flexible enough is really challenging.

Alessio [00:12:10]: Do we want to do a demo just to set the stage? And then we can kind of talk about it.

Maria [00:12:14]: Yeah, I think we can do a walkthrough in the product.

Alessio [00:12:17]: Yeah, sure.

swyx [00:12:17]: So, for listeners, we'll be switching to video. If you want to see this video, go to our YouTube, and like and subscribe as always. Yeah.

Maria [00:12:31]: And buy the Bee. Yes.

swyx [00:12:33]: And buy the Bee. While you wait. While you wait. Exactly.
It doesn't take long.

Maria [00:12:39]: Maybe you should have a discount code just for the listeners. Sure.

swyx [00:12:43]: If you want to offer it, I'll take it. All right. Yeah. Well, discount code Swyx. Oh s**t. Okay. Yeah. There you go.

Ethan [00:12:49]: An important thing to mention is that the hardware is meant to work with the phone. If you look at Rabbit or Humane, they're trying to create a new hardware platform. We think the phone is just so dominant, and it will be until we have the next generation, maybe some Orion-type glasses that are cheap enough and light enough, and that's going to take a long time. So we work with the phone rather than trying to replace it. In the app, we have a summary of your days, but at the top is what's going on now, and that's updating continuously. So right now it's saying I'm discussing the development of personal AI, and that's just the ongoing conversation. And then we give you a readable form: little segments of the important parts of the conversations. We do speaker identification, which is really important, because you don't want your personal AI thinking you said something and attributing it to you when it was somebody else in the conversation. You can also teach it other people's voices, somebody close to you, say, so it can start to understand your relationships a little better. And then we do conversation endpointing, which is a task that didn't even exist before, because nobody needed to do this. If you have somebody's whole day, how do you break it into logical pieces? So we use not just voice activity but other signals to split it up, because conversations are a little fuzzy: they can lead into one another, one can start into the next.
So we also use the semantic content. When a conversation ends, we run it through larger models to try to get a better sense of what was actually said, and then summarize it: key points, the general atmosphere and tone of the conversation, and potential action items that might have come out of it. Then at the end of the day, we give you a summary of your whole day and where you were, a step-by-step walkthrough of what happened and what the key points were. That's just the base capture layer. So if you just want a glimpse, or to recall or reflect, that's there. But really the key is that all of this is now being inferenced on to generate personal context about you. We generate key items known to be true about you, and there's a human-in-the-loop aspect: you have visibility into that. I have a lot of facts about technology, because that's basically what I talk about all the time, but I do have some hobbies that show up. And then, how do you put this context to use? I kind of measure my day now in terms of my token output: as a human, how much information do I produce? It's measured in tokens, and it turns out it's around 200,000 or so a day. So in the recall case, we have a chat interface, but the key is the recall itself. I probably have 50 million tokens of personal context, and the question is how to make sense of that and make it useful.
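The conversation endpointing Ethan describes (splitting a day of audio into logical conversations using voice activity plus other signals) can be sketched minimally. This is an illustrative stand-in, not Bee's pipeline: it uses only silence gaps between utterances, where a real system would also weigh semantic content and speaker changes. The `Utterance` type and the 120-second threshold are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    start: float   # seconds since midnight
    end: float
    speaker: str
    text: str

def split_conversations(utterances, max_gap=120.0):
    """Group a day's utterances into conversations.

    A new conversation begins whenever the silence between two
    consecutive utterances exceeds max_gap seconds. Real endpointing
    would combine this with semantic-similarity signals.
    """
    conversations, current = [], []
    for u in sorted(utterances, key=lambda u: u.start):
        if current and u.start - current[-1].end > max_gap:
            conversations.append(current)
            current = []
        current.append(u)
    if current:
        conversations.append(current)
    return conversations
```

With a two-minute gap threshold, two utterances ten seconds apart land in the same conversation, while one a few minutes later starts a new one.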
So I can ask simple recall questions, like details about the trip I was on to Taiwan recently, where we were with our manufacturer, and in real time it will answer. It has various capabilities, such as searching through your memories, but also being able to search the web or look at my calendar. We have integrations with Gmail and calendars, so it's connecting the dots between the in-real-life and the digital life. I just asked it about my Taiwan trip, and it gives me the breakdown of the details, what happened, the issues we had around certain manufacturing problems, and it goes back and references the conversations, so I can go back to the source. Yeah.

Maria [00:16:46]: Not just the conversations, the integrations as well. We have Gmail and Google Calendar, so if there is something there that is useful for more context, we can see that.

Ethan [00:16:56]: I never use the word "agentic" because it's cringe, but it can search through things. If I'm brainstorming about something that spans across sources, it can search through my conversations, search my email, look at the calendar, and then, depending on what's needed, synthesize something with all that context.

Maria [00:17:18]: I love that you did the Spotify Wrapped. That was pretty cool. Yeah.

Ethan [00:17:22]: One thing I did was just make a Spotify Wrapped for my 2024, for my life. You can do that. Yeah, you can.

Maria [00:17:28]: Wait. Yeah. I like those. Crazy.

Ethan [00:17:31]: "Make a Spotify Wrapped for my life in 2024." And it's surprisingly good. It has game-like metrics, so it was like: you visited three countries, you shipped, you know, X mini beta devices.

Maria [00:17:46]: And that's more personal insights and reflection points. Yeah.

swyx [00:17:51]: That's fascinating.
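As a toy illustration of the recall problem Ethan describes (surfacing the relevant pieces from tens of millions of tokens of personal context), here is a naive keyword-overlap retriever over stored conversation summaries. Bee's actual retrieval is certainly far richer (embeddings, calendar and email integrations, time filters); this sketch only shows the shape of the problem, and every name in it is hypothetical.

```python
def recall(query, memories, top_k=3):
    """Return the top_k stored summaries sharing the most words with the query.

    A crude stand-in for semantic retrieval: score each memory by the
    number of query words it contains, drop zero-overlap memories, and
    return the best matches.
    """
    q = set(query.lower().split())
    scored = []
    for m in memories:
        overlap = len(q & set(m.lower().split()))
        if overlap:
            scored.append((overlap, m))
    scored.sort(key=lambda s: -s[0])
    return [m for _, m in scored[:top_k]]
```

A query like "details about the Taiwan trip" would rank a summary mentioning "trip to Taiwan" above unrelated memories, which is the basic behavior any real retrieval layer must preserve at scale.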
So that's the demo.

Ethan [00:17:54]: Well, we can show something that's in beta. I don't know if we want to do it. I don't know.

Maria [00:17:58]: We want to show something. Do it.

Ethan [00:18:00]: And then we can kind of fit... Yeah.

Maria [00:18:01]: Yeah.

Ethan [00:18:02]: So the vision is also not just about AI being with you and passively understanding you through living your experience, but also proactively suggesting things to you at the appropriate time. So not just pull: it can step in and suggest things to you. One integration we have in beta is with WhatsApp. "Maria is asking for a recommendation for an Italian restaurant. Would you like me to look up some highly rated Italian restaurants nearby and send her a suggestion?"

Maria [00:18:34]: So what I did: I just sent Ethan a message through WhatsApp, on his own personal phone. Yeah.

Ethan [00:18:41]: So basically, Bee is watching all my incoming notifications, and it checks two criteria: is it important enough to raise a suggestion to the user, and is there something it could potentially help with? That's where the actions come into place. Because Maria is my co-founder, and because it was a restaurant recommendation, something it could probably help with, it proposed that to me. And then I can respond through the chat, or we have a push-to-talk, walkie-talkie-style button. It's actually a multi-purpose button for toggling it on or off, but if you push and hold, you can talk. So I can say: yes, find one and send it to her on WhatsApp. It runs on an Android cloud phone that has access to all my accounts. We're going to abstract this away, and the execution environment is not really important, but we can go into why Android is technically a pretty good one right now.
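The two-part gate Ethan describes for incoming notifications (important enough to surface, and something the assistant could actually help with) might be sketched like this. The scoring callables are hypothetical stand-ins for model calls, and the 0.7 threshold is an invented example, not Bee's logic.

```python
def should_suggest(notification, importance, capabilities):
    """Decide whether to raise a proactive suggestion for a notification.

    importance: callable returning a score in [0, 1] (stand-in for an
        LLM judging whether this is worth interrupting the user for).
    capabilities: list of predicates, each True if the assistant could
        act on this notification (e.g. look up a restaurant).
    """
    score = importance(notification)
    actionable = any(check(notification) for check in capabilities)
    return score >= 0.7 and actionable
```

Both conditions must hold: a highly important message the assistant cannot act on stays quiet, as does an actionable but trivial one, which matches the "not too much, not too little" framing in the conversation.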
But, you know, it's searching for Italian restaurants, and we don't have to watch this. I could have my AirPods in and the phone in my pocket; it's going to go to WhatsApp, find Maria's thread, send her the response, and then let us know. Oh my God.

Alessio [00:19:56]: But what's the... I mean, an Italian restaurant. Yeah. What did it choose? It's easy to say "Italian restaurant." Real Italian is hard to find. Exactly.

Ethan [00:20:04]: It's easy to say. So I doubt it. I don't know.

swyx [00:20:06]: For the record, since we have the Italians here: best Italian restaurant in SF?

Maria [00:20:09]: Oh my God. I still don't have one. What? No.

Ethan [00:20:14]: I don't know. "Successfully found and shared."

Alessio [00:20:16]: Let's see. Let's see what the AI says. Bottega. Bottega? I think it's Bottega.

Maria [00:20:21]: Have you been to Bottega? How is it?

Alessio [00:20:24]: It's fine.

Maria [00:20:25]: I've been to one called Norcina, I think it was good.

Alessio [00:20:29]: Bottega is on Valencia Street. It's fine. The pizza is not good.

Maria [00:20:32]: It's not good.

Alessio [00:20:33]: Some of the pastas are good.

Maria [00:20:34]: You know, the people... I'm sorry to interrupt. But there is this Delfina. Yeah. Everybody here is like, oh, Pizzeria Delfina is amazing. For me it's overrated. I don't know. That's great. That's great.

swyx [00:20:46]: The North Beach Cafe. That place you took us with Michele last time. Vega. Oh.

Alessio [00:20:52]: The guy at Vega, Giuseppe, he's Italian. Which one is that? It's in Bernal Heights. Ugh. He's nice. He's not nice. I don't know that one. What's the name of the place? Vega. Cool. We got the name. Vega. But it's not Vega.

Maria [00:21:02]: It's Italian. What...

[crosstalk: "Vega"]
Ethan [00:21:40]: We're going to see a lot of innovation around hardware and stuff, but I think the real core is being able to do something useful with the personal context. You always had the ability to capture everything, right? We've always had recorders, camcorders, body cameras, stuff like that. But what's different now is we can actually make sense of it and find the important parts in all of that context.

swyx [00:22:04]: Yeah. And then one last thing, I'm just doing this for you: you also have an API, which I think I'm the first developer against, because I had to build my own. We need to hire a developer advocate. Or just hire AI engineers. The point is that you should be able to program your own assistant. And I tried Omi, the former Friend, the knockoff Friend; the real Friend doesn't have an API, and Limitless also doesn't have an API. So I think it's very important to own your data, to be able to reprocess your audio, maybe (although by default you do not store audio), and also just to make any corrections. There's no way that my needs can be fully met by you. So I think the API is very important.

Ethan [00:22:47]: Yeah. And I mean, I've always been a consumer of APIs in all my products.

swyx [00:22:53]: We are API enjoyers in this house.

Ethan [00:22:55]: Yeah. It's very frustrating when you have to go build a scraper. But yeah, for sure.

swyx [00:23:03]: So this whole combination of you having my location, my calendar, my inbox: it really is, for me, the sort of personal API.

Alessio [00:23:10]: And is the API just to write into it, or to have it take action on external systems?

Ethan [00:23:16]: Yeah, we're expanding it. It's read-only right now. In the future, very soon, when the actions are more generally available, they'll be fully supported in the API.

Alessio [00:23:27]: Nice. I'll buy one after the episode.

Ethan [00:23:30]: The API thing, to me, is the most interesting. Yeah.
We do have real-time APIs, so you can even connect a socket and connect it to whatever you want it to take actions with. Yeah. It's too smart for me.

Alessio [00:23:43]: Yeah. When I look at these apps, and there are so many of these products, it's great that I can go on the app and do things, but most of my work and personal life is managed somewhere else. So being able to plug into it and integrate that is nice. I have a bunch of more, maybe, human questions. Sure. I think maybe people might have them. One: is it good to have instant replay for any argument that you have? I can imagine arguing with my wife about something. You know, there are these commercials now where it's basically two people arguing, and they can throw a flag, like in football, and get an instant replay of the conversation. I feel like this is similar, where people almost cannot really argue anymore, or lie to each other, in a world in which everybody adopts this. I don't know if you've thought about it. And also, the lies: all of us tell lies, right? How do you distinguish? There are going to be times when things contradict each other, because I might say something publicly and think something else that I tell someone else. How do you handle that when you think about building a product like this?

Maria [00:24:48]: I would say that I like the fact that Bee is an objective point of view. So I don't care too much about the lies; I care more about the fact that it can help me understand what happened, and the emotions, in a really objective way, really critical and objective. If you think about humans, they have so many emotions. Sometimes, when something happens to me, I might feel really upset about it, or really angry, or really emotional. But the AI doesn't have those emotions.
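Stepping back to the real-time socket API Ethan mentioned above: a client of such a stream would typically receive serialized events and route them to handlers. The event schema below ("type"/"payload" keys, a "conversation.end" event name) is entirely assumed for illustration; Bee's actual wire format is not specified in the conversation.

```python
import json

def dispatch(raw_message, handlers):
    """Route one JSON-encoded event from a real-time stream to a handler.

    handlers maps an event type string to a callable that receives the
    event payload. Unknown event types are ignored (returns None), so a
    client keeps working when the server adds new event kinds.
    """
    event = json.loads(raw_message)
    handler = handlers.get(event.get("type"))
    if handler is None:
        return None
    return handler(event.get("payload"))
```

In a real client, the raw messages would arrive over the socket connection; keeping the routing logic separate like this makes it testable without a network.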
It can read the conversation, understand what happened, and be objective. And I think that level of support is the one that I really like most. Instead of, like, "did this guy tell me a lie?", that's not exactly what I feel. I find it more interesting as an opportunity.

Alessio [00:25:35]: Is the Bee going to interject in real time? Say I'm arguing with somebody, and the Bee is like: hey, look, no, you're wrong, that person actually said...

Ethan [00:25:43]: The proactivity is something we're very interested in. Maybe not specifically for settling arguments, but more broadly. I think a lot of the challenge here is that you need really good reasoning to pull that off. You don't want it constantly interjecting, because that would be super annoying, and you don't want it to miss things that it should be interjecting on. It would be a hard task even for a human to come in at just the right times, when it's appropriate. With the personal context it's going to be a lot better, like when somebody knows about you, but even still, it requires really good reasoning to not be too much or too little, and just right.

Maria [00:26:20]: And the second part about, well, some things, you know: you say something to somebody, but after, I change my mind and I send something else. Every time I have a different type of conversation. I think that's something that I find really fascinating. One of the things that we are learning is that, indeed, humans evolve over time. So for us, one of the challenges is to actually understand: is this a real fact? Right. And so far, what we do is we have the human in the loop who can say, yes, this is true, this is not, or they can edit their own facts.
For sure, in the future, we want to have all of that automated inside the product.

Ethan [00:26:57]: But, I mean, I think your question kind of hits on, and I know that we'll talk about privacy, but also, if you have some memory and you want to confirm it with somebody else, that's one thing. But it's for sure going to be true that in the future, not even that far into the future, this is just going to be normalized. We're kind of in a transitional period now. And I think one of the key things for us is to navigate that and make sure we're thinking of all the consequences, and how to make the right choices in the way everything's designed, so it's more beneficial than it could be harmful. But it's just too valuable for your AI to understand you. And whether it's Meta Ray-Bans or Google Astra, I think people are just going to get more used to it. So people's behaviors and expectations will change. Whether that happens now or in five years, it's probably in that range. We adapt to new technologies all the time. Like, when the Ring cameras came out, that was quite controversial. But now people just understand that a lot of people have cameras on their doors.

Maria [00:28:09]: Yeah, we're in a transitional period for sure.

swyx [00:28:12]: I will press on the privacy thing because that is the number one thing that everyone talks about. Obviously, I think in Silicon Valley, people are a little bit more tech-forward, experimental, whatever. But you want to go mainstream. You want to sell to consumers. And we have to worry about this stuff. Baseline question: the hardest version of this is law. There are one-party consent states where this is perfectly legal, and there are two-party consent states where it's not.
Where have you come around on this?

Ethan [00:28:38]: Yeah, so the EU is a totally different regulatory environment. But in the U.S., it's basically on a state-by-state level. In Nevada, it's single-party. In California, it's two-party. But it's kind of untested, and the laws differ depending on whether it's a phone call or in person. In a state like California, it's two-party, but anytime you're in public, no consent comes into play, because there's no expectation of privacy in public. And we process the audio and nothing is persisted. It's summarized, with the speaker identification focusing on the user. Now, it's untested legally, and I'm not a lawyer, but does that constitute the same thing as a recording? It's kind of a gray area, untested in law right now. I think the bigger question is: if you had your Ray-Bans on and were recording, then you have a video of something that happened. And that's different from having an AI give you a summary that's focused on you and that's not really capturing anybody's voice. So, regardless of the legal status, what is the ethical situation? Because even in Nevada, or many other U.S. states, you can record everything, and you don't have to have consent. Is it still the right thing to do? The way we think about it is that we take a lot of precautions to not capture personal information of the people around you. Both through the speaker identification, through the pipeline, and then the prompts, and the way we store the information, to be really focused on the user. Now, we know that's not going to satisfy a lot of people. But I think if you do try it and wear it...
It's very hard for me to see anything that, if somebody was wearing a Bee around me, I would ever object to it capturing about me as a third party. And like I said, we're in this transitional period where the expectation will just become more normalized. It's an AI. It's not capturing a full audio recording of what you said. Everything is fully geared towards helping the person understand their state and providing valuable information to them. Not logging details about the people they encounter.

Alessio [00:30:57]: You know, I've had the same question with the Zoom meeting transcriber things. There's the personal impact of seeing a Fireflies AI recorder. I just know that it's being recorded. I don't know if I'm going to say anything different, but intrinsically, you kind of feel it, because it's not pervasive yet. And I'm curious, especially in your investor meetings, do people feel differently? Have you had people ask you to turn it off, like in a business meeting, to not record? I'm curious if you've run into any of these behaviors.

Maria [00:31:29]: You know what's funny? On my end, I wear it all the time. I take my coffee at Blue Bottle with it. Or I work with it. Obviously, I work on it. So, I wear it all the time. And so far, I don't think anybody has asked me to turn it off. I'm not sure if that's because they were really friendly with me and they know that I'm working on it. But nobody really cared.

swyx [00:31:48]: It's because you live in SF.

Maria [00:31:49]: Actually, I've been in Italy as well. Uh-huh. And in Italy, it's a super privacy concern. Like, Europe is a super privacy concern. And again, nothing. That, for me, was interesting.

Ethan [00:32:01]: I think, yeah, nobody's ever asked me to turn it off, even after giving them full demos and disclosing.
I think some people have said, you know, in a personal relationship, my partner initially was kind of uncomfortable about it. We heard that from a few users. And the other big one is people saying, I do like it, but I cannot wear this at work. Because, like, I think I will get in trouble based on policies, or if you're wearing it inside a research lab or somewhere you're working on things that are sensitive. So we're adding certain features like geofencing: at this location, it's just never active.

swyx [00:32:50]: I mean, I've often actually explained it the other way, where maybe you only want it at work, so you never take it from work. And it's just a work device, just like your Zoom meeting recorder is a work device.

Ethan [00:33:09]: Yeah, professionals have been a big early adopter segment. And you say San Francisco, but if you go look at the addresses in our daily shipment of over 100, Texas, I think, is our biggest state, and Florida, just the biggest states. A lot of professionals who talk for a living. We didn't go out to build it for that use case, but I think there is a lot of demand from white-collar people who talk for a living, and we're just starting to talk with them. I think they just want to be able to improve their performance and understand what they were doing.

Alessio [00:33:47]: How do you think about Gong.io and some of these, for example, sales training things, where you put it on a sales call and then it coaches you? They're more verticalized versus having a more horizontal platform.

Ethan [00:33:58]: I am not super familiar with those things, because like I said, this use case was kind of a surprise to us. But I think those are interesting. I've seen there's a bunch of them now, right? Yeah. It kind of makes sense.
I'm terrible at sales, so I could probably use one. But it's not my job, fundamentally. But yeah, we also heard from people with restaurants who want to understand if they're doing well.

Maria [00:34:26]: Yeah, in general, I think a lot of people like to have the double check of, did I do this well? Or, can you suggest how I can do better? We had a user telling us that he used it for job interviews. So he wore Bee and afterwards asked the Bee, how do you think my interview went? What should I do better? And I like that. It's actually like a personal coach, in a way.

Alessio [00:34:50]: Yeah. But I guess the question is, do you want to build all of those use cases? Or do you see Bee as more of a platform, where somebody is going to build, you know, the sales coach that connects to Bee, so that you're kind of the data feed into it?

Ethan [00:35:02]: I don't think of it as a data feed, more like an understanding kind of engine. And definitely, in the future, having third parties on the API and building out all the different use cases is something that we want to do. But the initial thing we're trying to do is build the layer for all of that to work. And, you know, we're not trying to build all those verticals, because no startup could do that well. But it's been quite fascinating. I've done consumer for a long time, and consumer is very hard to predict: what's going to be the killer feature? So, I mean, we really believe that this is the future, but we don't know exactly what process it will take to really gain mass adoption.

swyx [00:35:50]: The killer consumer feature is whatever Nikita Bier does. Yeah.
Social app for teens.

Ethan [00:35:56]: Yeah, well, I like Nikita, but, you know, he's good at building bootstrapped companies and getting them very viral. And then selling them, and then they shut down.

swyx [00:36:05]: Okay, so you just came back from CES.

Maria [00:36:07]: Yeah, crazy. It was my first time in Vegas and first time at CES. Both of them were overwhelming.

swyx [00:36:15]: First of all, did you feel like you had to do it because you're in consumer hardware?

Maria [00:36:19]: We decided to be there and to have a lot of partner and media meetings, but we didn't have our own booth. So we decided to just keep a presence there, even just us, and speak with people. It's very hard to stand out. I think, you know, it depends what type of booth you have. If you can prepare a really cool booth...

Ethan [00:36:41]: Have you been to CES?

Maria [00:36:42]: I think it can be pretty cool.

Ethan [00:36:43]: It's massive. It's huge. It's like 80,000, 90,000 people across the Venetian and the convention center. And, to me, I always wanted to go just as...

Maria [00:36:53]: Yeah, you were the one who was like...

swyx [00:36:55]: I thought it was your idea.

Ethan [00:36:57]: I always wanted to go just as a fan of...

Maria [00:37:01]: Yeah, you wanted to go anyway.

Ethan [00:37:02]: Because growing up, I think CES kind of peaked for a while, and it was like, oh, I want to go, that's where all the cool gadgets are. Yeah, now it's like smart everything and, you know, vacuums that pick up socks. Exactly.

Maria [00:37:13]: There are a lot of cool vacuums. Oh, they love it.

swyx [00:37:15]: They love the Roombas, the ones that pick up socks.

Maria [00:37:16]: And pet tech. Yeah, yeah. And dog stuff.

swyx [00:37:20]: Yeah, there's a lot of robot stuff. New TVs, new cars that never ship. Yeah. Yeah.
I'm thinking, this time last year was when Rabbit and Humane launched at CES, and Rabbit kind of won CES. And now this year, no wearables except for you guys.

Ethan [00:37:32]: It's funny because now it's obviously AI everything. Yeah. Like every single product.

Maria [00:37:37]: Toothbrushes with AI, vacuums with AI. Yeah.

Ethan [00:37:41]: We saw, literally, a hairdryer with AI.

Maria [00:37:45]: Yeah, that was cool.

Ethan [00:37:46]: But I think, yeah, another kind of difference with ours: we didn't want to do a big, overhyped, overpromised Rabbit-style launch. I mean, hats off to them on the presentation and everything, obviously. But, you know, we wanted to let the product speak for itself and get it out there. And I think we were really happy. We got some very good interest from media and some of the partners there. So it was definitely worth going. I would say if you're in hardware, it's just about how you make use of it. To do a big Rabbit-style launch, or to have a huge show there, you need to plan that six months in advance, and it's very expensive. But if you go, everybody's there. All the media is there. There are some pre-show events where it's just great to talk to people. And the industry is there too: all the manufacturers and suppliers. So we learned about some really cool stuff that we might like. We met with somebody doing thermal energy capture. And it's like, oh, could you maybe not need to charge it? Because they have a thermal cell that can capture your body heat. And what? Yeah, they're actually here, in Palo Alto. They have a Fitbit-type thing that you don't have to charge.

swyx [00:39:01]: Like, on paper, what's the power you can get from that?
What's the power draw for this thing?

Ethan [00:39:05]: It's more than you could get from the body heat, it turns out. But it's quite small. I don't want to disclose the technicals. But I think that solar is still... they also have one where, like, the whole face of the device is just a solar cell, and that is more realistic. Or kinetic. Kinetic, apparently, and I'm not an expert in this, but they seem to think it wouldn't be enough. Kinetic is quite small, I guess, on the capture.

swyx [00:39:33]: Well, I mean, watches. Watchmakers have been powering watches with kinetic for a long time. Yeah. We don't have to talk about that. I just want to get a sense of CES. Would you do it again? I definitely would not. Okay, you're just a fan of CES; from a business point of view it doesn't make sense. I happen to be in the conference business, right? So I'm kind of just curious. Yeah.

Maria [00:39:49]: So I would say, as we did it, without a booth and with really straightforward conversations that were already planned, over three days? I think it was okay. But if you need to invest in a booth, that is not...

swyx: Okay. A good one. Which is how much?

Ethan [00:40:06]: A 10-by-10 is $5,000. But on top of that, you need more, and a 10-by-10 is super small. And some companies spend what would probably be more in the six-figure range. And, I mean, it's very noisy. We heard this, that it's very, very noisy. Obviously everything is being launched there, everything from cars to cell phones. Yeah. So it's hard to stand out. But going in with a plan of who you want to talk to, I feel like...

Maria [00:40:36]: That was worth it.

Ethan [00:40:37]: Worth it.
We had a lot of really positive media coverage from it, and we got the word out, so I think we accomplished what we wanted to do.

swyx [00:40:46]: I mean, there's some world in which my conference is kind of the CES of whatever AI becomes. Yeah. I think that.

Maria [00:40:52]: Don't do it in Vegas. Don't do it in Vegas. That's the only thing. I didn't really like Vegas.

Alessio [00:41:02]: You cannot fit 90,000 people in SF, though.

Ethan [00:41:05]: You'd need to do multiple locations, so you could do Moscone and then have one in...

swyx [00:41:09]: I mean, that's what Salesforce conferences do. Well, GDC is how many? That might be 50,000, right? Okay. Form factor, right? My way to introduce this idea: I was at the launch of Tab, at Solaris (what was the old name of it? Newton), when Avi first launched it. He was like, I thought through everything; every form factor; pendant is the thing. And then we got the pendants for this original one. The first one was just a pendant, and I took it off and I forgot to put it back on. So you went through pendant, pin, bracelet now, and maybe there are earphones in the future. What were your iterations?

Maria [00:41:49]: So we had, I believe, three or four iterations by now. And one of the things that we learned is indeed that people don't like the pendant. In particular, women don't want to have anything here on the chest, because maybe you have another necklace or other stuff.

Ethan [00:42:03]: You just ship a premium one that's gold. Yeah. We're talking... some fashion brands reached out to us.

Maria [00:42:11]: Some big fashion. There is something there.

swyx [00:42:13]: This is where it helps to have an Italian on the team.

Maria [00:42:15]: There is some big Italian luxury... I can't say anything.
So yeah, the bracelet actually came from the community, because they were like, oh, I don't want to wear anything as a necklace or as a pendant. And also the one that we had, I don't know if you remember, was circular and really bulky. People didn't like it. And, I mean, we were running fast when we did that. Our thing was, we wanted to ship as soon as possible, so we weren't overthinking the form factor or the material. We just wanted to be out. But afterwards the community, organically, basically all of them were like, well, why don't you just do the bracelet? It's way better. I will just wear it. And that's it. So that's how we ended up with the bracelet. But it's still modular, and I still want to play around with that. The form factor is modular, so you can take it off and wear it as a clip, and in the future maybe we will bring back the pendant. But I like the fact that there is some personalization. Right now we have two colors, yellow and black; soon we will have other ones. So yeah, we can play a lot with that.

Ethan [00:43:25]: I think with the form factor, the goal is for it to be not super invasive, right, and something that's easy. So I think in the future: smaller, thinner. Not an Apple-type obsession with thinness, but the size and weight do matter. And we would love to have more context, because that will help. But to make it work, it really needs to have good power consumption, good battery life. And, you know, with the Humane and its swappable batteries... I have one, and some of the engineering they did is pretty incredible, but it wasn't geared towards solving the problem. It's too heavy. The swappable batteries are too much to manage, the thermals are too much, the light interface thing. Yeah. Like that.
That's cool. It's cool. But if you have your hand out here, you want to use your phone; it's not really solving a problem, because you know how to use your phone, and it's got a brilliant display. You have to learn how to gesture at this low-resolution laser projection. The laser is cool, the fact they got it working in that thing, even though it did overheat. But: too heavy, too cumbersome, too complicated with the multiple batteries. So, something that's power efficient, kind of thin, both in the physical sense and also in the edge-compute kind of way, so that it can be as unobtrusive as possible. Yeah.

Maria [00:44:47]: Users really like it. I like when they say, yes, I like to wear it and forget about it, because I don't need to charge it every single day. On the other version, I believe we had 35 hours or something, which was okay. But people just prefer the seven-day battery life.

swyx [00:45:03]: Oh, this is seven days? Yeah. Oh, I've been charging it every three days.

Maria [00:45:07]: Oh, no, you can keep it going, yeah, almost seven days.

swyx [00:45:11]: The other thing that occurs to me: maybe there's an Apple Watch strap, so that I don't have to double-wrist. Yeah.

Maria [00:45:17]: That's the other one, yeah, I thought about it. I saw as well the ones that you can put on the back of the phone. Like, you know, Plaud. There are a lot.

swyx [00:45:27]: So yeah, there's a competitor called Plaud. Yeah. It's not really a competitor. They only transcribe, right? Yeah, they only transcribe. But they're very good at it. Yeah.

Ethan [00:45:33]: No, they're great. Their hardware is really good too.

swyx [00:45:36]: And they just launched the pin too. Yeah.

Ethan [00:45:38]: I think the MagSafe kind of form factor has a lot of advantages, but some disadvantages. You can definitely put a very huge battery on that, you know?
And so the battery life, the power consumption, is not so much of a concern, but the downside is the phone's in your pocket. So I think form factors will continue to evolve: more sensors, less obtrusive and...

Maria [00:46:02]: Yeah. We have a new version.

Ethan [00:46:04]: Easier to use.

Maria [00:46:05]: Okay.

swyx [00:46:05]: Looking forward to that. Yeah. I mean, whenever we launch this, we'll try to show whatever, but I'm sure you're going to keep iterating. Last thing on hardware, and then we'll go on to the software side, because I think that's where you guys are also really, really strong. Vision. You wanted to talk about why no vision? Yeah.

Ethan [00:46:20]: I think it comes down to: when you're a startup, especially in hardware, you work within the constraints, right? And vision is super useful and super interesting, but there are two issues with vision that made it not the place we decided to start. One is power consumption. You kind of have to trade off your power budget: capturing, even at a low frame rate, and then transmitting over the radio, which is actually the thing that takes up the majority of the power. So you would really have to have an unacceptably large and heavy battery to do it continuously all day. We have, I think, novel alternative ways that might allow us to do that, and we have some prototypes. The other issue is form factor. Even with a wide field of view, the wrist is not really an option, and if you're wearing it on your chest, you're probably not going to be capturing the field of view of what's interesting to you. So that leaves you with your head and face. And anything that goes on the face has to look cool.
I don't know if you remember the Spectacles. They were kind of the first, but they were not very successful, and I think one of the reasons is they were so weird-looking; the camera was so big on the side. If you look at the Meta Ray-Bans, which are way more successful, they look almost indistinguishable from regular Ray-Bans. And they invested a lot into that: they have a partnership with Qualcomm to develop custom silicon, they have a stake in Luxottica now. So they're coming at it from all the angles to make glasses work. And, you know, Brilliant Labs, they're a cool company. They make Frame, which is kind of a cool, hackable pair of glasses, and they're really good on hardware. But even if you look at Frame, which I would say is the most advanced kind of startup product there... there was one that launched at CES, but it's not shipping yet. Of the ones you can buy now, it's still not something you'd wear every day, and the battery life is super short. So the challenge of doing vision right, off the bat, would require quite a bit more resources. And audio is such a good entry point, and there's also the privacy around audio. If you had images, that's another huge challenge to overcome. So I think that, ideally, the personal AI would have all the senses, and, you know, we'll get there. Yeah. Okay.

swyx [00:48:57]: One last hardware thing. I have to ask this because then we'll move to the software. Were either of you electrical engineering?

Ethan [00:49:04]: No, I'm CS. I've taken some EE courses, and prior to working on the hardware here I had done a little bit of embedded systems, very little firmware. But luckily we have somebody on the team with deep experience.
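As an aside, the power-budget constraint Ethan described for always-on vision versus audio can be put into rough numbers. Every figure below is an illustrative assumption (battery size, sensor, codec, and radio draw vary widely by design); this is not Bee's actual spec sheet:

```python
# Rough power-budget check for an always-on wearable.
# All numbers are assumed for illustration, not real device specs.
BATTERY_WH = 0.6    # ~160 mAh at 3.7 V, a pendant-sized cell
CAMERA_MW = 30.0    # assumed low-frame-rate image capture
RADIO_MW = 120.0    # assumed radio uplink streaming the frames
AUDIO_MW = 10.0     # assumed mics + codec, for comparison

def runtime_hours(load_mw: float, battery_wh: float = BATTERY_WH) -> float:
    """Hours of runtime for a constant load, ignoring conversion losses."""
    return battery_wh * 1000.0 / load_mw

print(f"vision: {runtime_hours(CAMERA_MW + RADIO_MW):.1f} h")  # a few hours
print(f"audio:  {runtime_hours(AUDIO_MW):.1f} h")              # multiple days
```

Under these assumed loads, streaming vision drains a pendant-sized cell in a few hours while audio-only lasts days, which is the shape of the tradeoff that pushed the team to start with audio.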
Yeah.

swyx [00:49:21]: I'm just like, you know, you have to become hardware people. Yeah.

Ethan [00:49:25]: Yeah. I mean, I learned to worry about supply chain, power, radio.

Maria [00:49:30]: There are so many things to learn.

Ethan [00:49:32]: I would say this about hardware, and I know it's been said before: building a prototype, learning how the electronics work, learning about firmware and developing it, this is fun for a lot of engineers, and it's all totally achievable, especially now with the tools we have. Stuff you might have been intimidated by, like, how do I write this firmware? With Sonnet, you can get going and actually see results quickly. But going from a prototype to actually making something manufactured is an enormous jump. And it's not all about technology: the supply chain, the procurement, the regulations, the cost, the tooling. The thing about software that I'm used to is that you can make changes all along the way and ship it. But when you have to buy tooling for an enclosure, that's expensive.

swyx [00:50:24]: Do you buy your own tooling? You have to?

Ethan [00:50:25]: Don't you just subcontract out to someone in China? Oh, no, we don't make the tooling ourselves. You'd have to have CNC and a bunch of machines.

Maria [00:50:31]: Nobody makes their own tooling, but you have to do the design, and you submit...

Ethan [00:50:36]: ...it, and then it comes back four to six weeks later. Yeah. And then if there's a problem with it, well, then you're not making any of your enclosures. And so you have to really plan ahead. And like...

swyx [00:50:48]: I just want to leave tips for other hardware founders.
What resources or websites are most helpful in your manufacturing journey?

Ethan [00:50:55]: You know, I think it's different depending... hardware is so specialized in different ways.

Maria [00:51:00]: I will say that, for example, for choosing a manufacturing company, I speak with other founders, and we can give you some tips on who is good and who is not, or who's specialized in one thing versus somebody else. Yeah.

Ethan [00:51:15]: Like, some people are good in plastics. Some people are good...

Maria [00:51:18]: I think for us, it really helped at the beginning to speak with others and understand, okay, who is around. I worked in Shenzhen; I lived almost two years in China, so I have an idea about the different hardware manufacturers and all of that. Soon I will go back to Shenzhen to check things out. So I think it's also good to go in person and check.

Ethan [00:51:40]: Yeah, if you have the ability, and the reason I say ability is that it's very expensive, it helps to build out some proofs of concept and do field testing before you take it to a manufacturer. Despite what people say, there's really good domestic manufacturing for small quantities, at extremely high prices. So we got our first PCB and the assembly done in LA. There's a lot of good capacity there, because of the defense industry, that can do quick turns. It's like: we need this board, we need to find out if it's working, we have this deadline. If you want it fabricated and assembled in a week, they can do it, for a price. But I think everybody's trending, even for prototyping, towards moving offshore, because in China you can do prototyping and get it back within almost the same timeline. The thing is, with manufacturing, it really helps to go there and establish the relationship.
Yeah.

Alessio [00:52:38]: My first company was a hardware company, and we did our PCBs in China, and it took a long time. Now things are better. But this was, yeah, I don't know, 10 years ago, something like that.

Ethan [00:52:47]: I've heard this too. We didn't run into this problem, but, you know, if it's somewhere you don't have the relationship, they don't see you, they don't know you, you might get subcontracted out, or they're not paying attention. But if you have the relationship and are a priority, yeah, it's really good. We ended up doing the fabrication and assembly in Taiwan, for various reasons.

Maria [00:53:11]: And I think it really helped that you went there at some point. Yeah.

Ethan [00:53:15]: We're really happy with the process. But, I mean, the whole process of just choosing the right people, but also sourcing the bill of materials and all of that stuff. I guess if you have time, it's not that bad, but if you're trying to really push the speed, it's incredibly stressful. Okay. We've got to move to the software. Yeah.

Alessio [00:53:38]: Yeah. So the hardware maybe is hard for people to understand, but what software people can understand is that running transcription and summarization, all of these things, in real time, 24 hours a day, is not easy. So you mentioned 200,000 tokens for a day. How do you make it basically free to run all of this for the consumer?

Ethan [00:53:59]: Well, I think about the pipeline and the inference. People think about all of these tokens, but as you know, the price of tokens is dropping dramatically. You guys probably have some charts somewhere that you've posted. We do. And if you look at that trend, 250,000 input tokens is not really that much, right? Like, the output...

swyx [00:54:21]: You do several layers. You do live...
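For intuition on the cost point above, here is a hedged back-of-envelope sketch. The per-token prices are assumptions for a cheap summarization-tier model, not any vendor's actual pricing and not Bee's real unit economics:

```python
# Back-of-envelope: daily LLM cost for summarizing one user's day.
# Prices are illustrative assumptions (USD per million tokens).
INPUT_PRICE_PER_M = 0.15   # assumed cheap summarization-tier input price
OUTPUT_PRICE_PER_M = 0.60  # assumed output price

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost for one day's summarization traffic."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# ~250k input tokens of transcript per day, ~10k tokens of summaries out.
print(f"${daily_cost(250_000, 10_000):.4f} per user per day")
```

At these assumed prices the whole day of summarization costs a few cents, which is why a quarter-million input tokens per day "is not really that much," and why falling token prices matter so much to the unit economics.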
Yeah.

Ethan [00:54:23]: Yeah. So the speech-to-text is actually the most challenging part, because it requires real-time processing and then later processing with a larger model. And one thing that is fairly obvious is that you don't need to transcribe things that don't have any voice in them, right? So good voice activity detection is key, because the majority of most people's day is not spent with voice activity. That is the first step to cutting down the amount of compute you have to do, and voice activity detection is a very, very cheap thing to do. For the models that summarize, you don't need a Sonnet-level model. You do need a Sonnet-level model to execute things like the agent, and we will be having a subscription for features like that. Although now, with R1, we'll see; we haven't evaluated it. DeepSeek? Yeah. I mean, not that one in particular, but, you know, models are already there that can perform at that level. And it's going to change again in six months, but, yeah. So self-hosted models help in the places where you can use them. So you are self-hosting models? Yes. You are fine-tuning your own ASR? Yes. I will say that I see everything trending down in the future, although I think there might be an intermediary step where things become expensive. We're really interested, because the pipeline is very tedious and takes a lot of tuning, which is brutal because it's just a lot of trial and error. Whereas, well, wouldn't it be nice if an end-to-end model could just do all of this and learn it? If we could do transcription with an LLM, there are so many advantages to that, but it's going to be a larger model and hence more compute, you know, we're optim
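The "cheap VAD first" filtering Ethan describes can be sketched as a minimal energy-threshold gate. Production systems use trained VAD models, and Bee's actual pipeline is not public; this toy version, with an assumed threshold constant, only illustrates why the filter costs almost nothing compared to running ASR on everything:

```python
import array
import math

SAMPLE_RATE = 16_000      # assumed 16 kHz mono, 16-bit PCM
FRAME_MS = 30             # analyze audio in 30 ms frames
ENERGY_THRESHOLD = 500.0  # assumed tuning constant; device-dependent

def frame_rms(frame) -> float:
    """Root-mean-square energy of one frame of PCM samples."""
    if len(frame) == 0:
        return 0.0
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def voiced_spans(pcm: array.array) -> list:
    """Return (start, end) sample spans whose energy clears the gate.
    Frames below the gate are dropped before the expensive ASR stage."""
    frame_len = SAMPLE_RATE * FRAME_MS // 1000
    spans = []
    for start in range(0, len(pcm), frame_len):
        frame = pcm[start:start + frame_len]
        if frame_rms(frame) >= ENERGY_THRESHOLD:
            spans.append((start, start + len(frame)))
    return spans

# One second of silence yields nothing to transcribe,
# while a constant loud signal passes through to the next stage.
silence = array.array("h", [0] * SAMPLE_RATE)
loud = array.array("h", [8000] * SAMPLE_RATE)
print(len(voiced_spans(silence)), len(voiced_spans(loud)))
```

Because the gate is just arithmetic over raw samples, it can run continuously on-device for a tiny power cost, and only the voiced spans ever reach the transcription model.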

The Startup Podcast
Insiders React: OpenAI o1 – The Smartest AI Ever? | Synthetic Employees: Revenue Hack | Zuckerberg's New Swagger

The Startup Podcast

Play Episode Listen Later Sep 18, 2024 52:04


Is AI getting smarter, and what does that mean for the future of work? Plus, how is Zuckerberg's new public persona shaking things up at Meta?

In this week's Reacts episode, Chris Saad is joined by Jordan Crook, former deputy editor at TechCrunch and current partner at Betaworks, along with Amir Shevat, a seasoned tech executive and investor. Together, they break down the latest tech and business trends:

OpenAI o1: The Smartest AI Ever? OpenAI's groundbreaking new o1 model is changing the landscape for developers and startups. Is this the smartest AI ever created, or just another step in AI's evolution?

The Future of Work: 1 Billion, 3 Employees. AI is transforming how companies scale, with some earning $1 billion with just 3 employees. The discussion unpacks synthetic employees and how AI is redefining the workforce of tomorrow.

Zuckerberg's New Swagger. Mark Zuckerberg is done apologizing and has adopted a bold new attitude. How is this shift impacting Meta and the tech leadership landscape, especially in the Elon Musk era?

Stay tuned as Chris, Jordan, and Amir explore what these developments mean for the future of work and the changing dynamics of tech leadership.

The Pact: Honour The Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:
Follow, rate, and review us in your listening app
Subscribe to the TSP Mailing List at https://thestartuppodcast.beehiiv.com/subscribe
Secure your official TSP merchandise at https://shop.tsp.show/
Follow us on YouTube at https://www.youtube.com/@startup-podcast
Give us a public shout-out on LinkedIn or anywhere you have a social media following.

Key links: The Startup Podcast is sponsored by Vanta. Vanta helps businesses get and stay compliant by automating up to 90% of the work for the most in-demand compliance frameworks. With over 200 integrations, you can easily monitor and secure the tools your business relies on.
For a limited-time offer of US$1,000 off, go to www.vanta.com/tsp.
Get your question in for our next Q&A episode: https://forms.gle/NZzgNWVLiFmwvFA2A
The Startup Podcast website: https://tsp.show

Learn more about Chris and Yaniv:
Work 1:1 with Chris: http://chrissaad.com/advisory/
Follow Chris on Linkedin: https://www.linkedin.com/in/chrissaad/
Follow Yaniv on Linkedin: https://www.linkedin.com/in/ybernstein/

Credits:
Editor: Justin McArthur
Content Strategist: Carolina Franco
Intro Voice: Jeremiah Owyang

The Sunday Show
Using AI to Engage People about Conspiracy Beliefs

The Sunday Show

Play Episode Listen Later Aug 4, 2024 35:54


In May, Justin Hendrix moderated a discussion with David Rand, a professor of Management Science and Brain and Cognitive Sciences at MIT, director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute for Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across cognitive science, behavioral economics, and social psychology, and with his collaborators he has done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories.

David is one of the authors, with Thomas Costello and Gordon Pennycook, of a paper published this spring titled "Durably reducing conspiracy beliefs through dialogues with AI." The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting at least two months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors.

While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment, such as: how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us, trying to shape or change our beliefs, a good thing? This episode contains an edited recording of the discussion, which was hosted at Betaworks.

Culture: Founders, Entrepreneurs & Innovators
PrimeTime VC EP 32 @Betaworks NYC

Culture: Founders, Entrepreneurs & Innovators

Play Episode Listen Later Jul 26, 2024 59:49


Featured Venture Capitalists: Jesse Middleton is a GP at Flybridge Capital, Olivia O'Sullivan is a Partner at Forum Ventures, Zehra Naqvi is an Investor at Headline Ventures, and Victoria Kennedy is a GP at Seed to Harvest, an early-stage fund investing in software companies. Supported by Fidelity Private Shares | Equity Management Solutions, a cap table/data room platform helping startups with incorporation, 409a valuations, equity management, and modeling tools for future fund raises. Mention 'PrimeTimeVC' for 20% off!

FUTURE FOSSILS

This week marks the beginning of Embodied Ethics in The Age of A.I., a six-week online course led by writer and teacher Joshua Schrei, host of The Emerald Podcast.  This course is, in large part, inspired by an episode he wrote last year called “So You Want To Be A Sorcerer in The Age of Mythic Powers” — exploring the mythic dimensions of tech innovation and calling for a reclamation of initiatic mystery schools in order to provide us with the requisite self-mastery to wield tools like generative language models. I'm honored to be part of the all-star crew lined up to co-facilitate this course and as part of our pre-game sync and prep, I met with Josh to talk about the forces we've unleashed and how to live responsibly in a world where tech is, in Arthur C. Clarke's words, now undoubtedly “indistinguishable from magic.” We explore the need to pace ourselves and anchor novelty production in ecologies of accountability; what it means to raise kids well amidst the A.I. revolution; and why humans cannot seem to stop invoking power and powers greater than our understanding.If you enjoy this conversation, join us — and several dozen other awesome people — from 4/18-5/16 to learn and grow together and answer the call to better ourselves in service of this great historical unfolding!(Big big thanks to former Center for Humane Technology Innovation Lead Andrew Dunn, founder of The School of Wise Innovation, for everything you've done to help inspire and organize all of this…)Right after this course I will be in Denver for the 2024 ICON Future Human Conference and would love to see you there!  
Use my link to grab yourself a conference pass and spend 5/16-5/19 with me and folks like Daniel Schmachtenberger, Marianne Williamson, Ken Wilber, Jeremy Johnson, Layman Pascal, and many more…

✨ Support This Show & The Family It Feeds:
• Subscribe on Substack or Patreon for COPIOUS extras, including private Discord server channels and MANY secret episodes
• Make one-off donations at @futurefossils on Venmo, $manfredmacx on CashApp, or @michaelgarfield on PayPal
• Buy the music on Bandcamp. This episode features: Tålmodighed (from Live at The Chillout Gardens, Boom Festival 2016), Gamma Pavonis (from Pavo: Music For Mystery), and The Cartographers (from Get Used To Being Everything)
• Buy the books we discuss at the Future Fossils Bookshop.org page and I get a small cut from your support of indie booksellers
• Browse and buy original paintings and prints or email me to commission new work

✨ Mentioned & Related Links:
"Modern culture is 'ahead of the one.' Modern culture is rushing to get somewhere."
• Josh Schrei on Howl In The Wilderness Podcast Episode 120
• Sam Arbesman's Cabinet of Wonders
• Determined: A Science of Life Without Free Will by Robert Sapolsky
• Rick Rubin and Dan Carlin discuss magic
• Michael Garfield w/ host Kiki Sanford on This Week In Science Episode 965
• "Information overload is a personal and societal danger" by Rensselaer Polytechnic Institute
• The Glass Cage by Nicholas Carr
• Future Fossils 172 - Tyson Yunkaporta on Indigenous Systems Thinking, Fractal Governance, Ontopunk, and Queering W.E.I.R.D. Modernity
• Center for Humane Technology
• The Age of Em by Robin Hanson
• "Scan Lovers" from How to Live in The Future by Michael Garfield at Boom Festival 2016
• Wisdom 2.0 Summit
• "The Techno-Optimist Manifesto" by Marc Andreessen
• Iron John by Robert Bly
• "The Model Isn't The Territory, Either" by Douglas Rushkoff
• Darwin's Pharmacy: Sex, Plants, and The Evolution of The Noosphere by Richard Doyle
• "Chief Philosophy Officer" by Peter Limberg
• "The Next Tech Backlash Will Be About Hygiene" by Jonnie Penn at TIME Magazine
• Douglas Rushkoff at Betaworks in 2023: "I Will Not Be Automated"
• Zohar Atkins (Website, Twitter)
• My comments on "Hallucination Is Inevitable: An Innate Limitation of Large Language Models" by Xu, et al.
• "For The Intuitives" (Part 1, Part 2) on The Emerald Podcast

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe

The Next Big Idea
ACHIEVEMENT CULTURE: What It's Doing to Our Kids—and to Us

The Next Big Idea

Play Episode Listen Later Nov 16, 2023 72:27


It's no secret that we live in a ferociously competitive world. But what is the drive to always be the best doing to our kids? That's what journalist Jennifer Breheny Wallace wanted to know when she set out to write her new book, "Never Enough." The kids, she discovered, are not alright. Teenagers are battling burnout, depression, and anxiety at alarming rates. How did we let this happen, and what can we do to fix it? To answer these vexing but vital questions, we invited Jennifer to chat with Daniel Markovits. He's the author of "The Meritocracy Trap" and a professor at Yale Law School, where he's seen toxic achievement culture up close. In this episode, recorded live at Betaworks in New York City, Jennifer, Daniel, and Rufus discuss why our kids are under such unrelenting pressure, what we can do to give them some relief, and the potential role of new technologies, like AI, in creating positive solutions. --- Host: Rufus Griscom Guests: Jennifer Breheny Wallace & Daniel Markovits • Click here to hear Daniel's previous appearance on the show. • Want the best non-fiction books of the year delivered to your doorstep? Sign up for a Next Big Idea Club subscription at nextbigideaclub.com, and use the code PODCAST to get 20% off and a free copy of Adam Grant's new book, "Hidden Potential"!

TechStuff
Digging Back Into Digg

TechStuff

Play Episode Listen Later Nov 6, 2023 43:22 Transcription Available


What has Digg been up to since our last episode about the site back in 2017? From shutting down a popular tool to another acquisition, this time by a small ad company, we explore what's been going on over at Digg.

See omnystudio.com/listener for privacy information.

The Next Big Idea
Artificial Intelligence Meets Virtual Worlds: The Future of Sentience

The Next Big Idea

Play Episode Listen Later Jul 13, 2023 79:55


The two hottest topics in tech right now are the rise of generative AI and, with Apple's recent push into spatial computing, the mainstreaming of augmented reality. Will silicon-based machines develop sentience? Will human experience extend into virtual worlds? These distinct technologies may eventually blend to spawn a surprising future, as our "real" world becomes digitally enhanced and our machines behave increasingly like humans. Today, a provocative discussion with some big (human) thinkers: Steven Johnson, visiting scholar at Google Labs and author of "Extra Life," "Where Good Ideas Come From," and "How We Got to Now"; philosopher and cognitive scientist David Chalmers, author of "The Conscious Mind" and "Reality+"; and Betaworks founder and AI investor John Borthwick.
• Want to learn more about our executive membership? Email podcast@nextbigideaclub.com
• "David Chalmers Thinks We May Be Living in a Simulation (and He's OK With It)"
• "Steven Johnson & Michael Specter on the Future of Life"

Team Human
Douglas Rushkoff: I Will Not Be Autotuned - Live from All Tech is Human's Responsible Tech Mixer

Team Human

Play Episode Listen Later Jun 7, 2023 31:31


Douglas Rushkoff took the stage at All Tech is Human's Responsible Tech Mixer to explain why he refuses to be autotuned in the face of technosolutionism and the rise of AI. Recorded April 26, 2023, at Betaworks in New York City.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Read: https://www.latent.space/p/ai-interfaces-and-notion

Show Notes
* Linus on Twitter
* Linus' personal blog
* Notion
* Notion AI
* Notion Projects
* AI UX Meetup Recap

Timestamps
* [00:03:30] Starting the AI / UX community
* [00:10:01] Most knowledge work is not text generation
* [00:16:21] Finding the right constraints and interface for AI
* [00:19:06] Linus' journey to working at Notion
* [00:23:29] The importance of notations and interfaces
* [00:26:07] Setting interface defaults and standards
* [00:32:36] The challenges of designing AI agents
* [00:39:43] Notion deep dive: "Blocks", AI, and more
* [00:51:00] Prompt engineering at Notion
* [01:02:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]

Swyx: And today we're not in our regular studio. We're actually at the Notion New York headquarters. Thanks to Linus. Welcome. [00:00:28]

Linus: Thank you. Thanks for having me. [00:00:29]

Swyx: Thanks for having us in your beautiful office. It is actually very startling how gorgeous the Notion offices are. And it's basically the same aesthetic. [00:00:38]

Linus: It's a very consistent aesthetic. It's the same aesthetic in San Francisco and the other offices. It's been for many, many years. [00:00:46]

Swyx: You take a lot of craft in everything that you guys do. Yeah. [00:00:50]

Linus: I think we can, I'm sure, talk about this more later, but there is a consistent kind of focus on taste that I think flows down from Ivan and the founders into the product. [00:00:59]

Swyx: So I'll introduce you a little bit, but also there's just, you're a very hard person to introduce because you do a lot of things. You got your BA in computer science at Berkeley. Even while you're at Berkeley, you're involved in a bunch of interesting things at Replit, CatalystX, Hack Club and Dorm Room Fund.
I always love seeing people come out of Dorm Room Fund because they tend to be a very entrepreneurial. You're a product engineer at IdeaFlow, residence at Betaworks. You took a year off to do independent research and then you've finally found your home at Notion. What's one thing that people should know about you that's not on your typical LinkedIn profile? [00:01:39]Linus: Putting me on the spot. I think, I mean, just because I have so much work kind of out there, I feel like professionally, at least, anything that you would want to know about me, you can probably dig up, but I'm a big city person, but I don't come from the city. I went to school, I grew up in Indiana, in the middle of nowhere, near Purdue University, a little suburb. I only came out to the Bay for school and then I moved to New York afterwards, which is where I'm currently. I'm in Notion, New York. But I still carry within me a kind of love and affection for small town, Indiana, small town, flyover country. [00:02:10]Swyx: We do have a bit of indulgence in this. I'm from a small country and I think Alessio, you also kind of identified with this a little bit. Is there anything that people should know about Purdue, apart from the chickens? [00:02:24]Linus: Purdue has one of the largest international student populations in the country, which I don't know. I don't know exactly why, but because it's a state school, the focus is a lot on STEM topics. Purdue is well known for engineering and so we tend to have a lot of folks from abroad, which is particularly rare for a university in, I don't know, that's kind of like predominantly white American and kind of Midwestern state. That makes Purdue and the surrounding sort of area kind of like a younger, more diverse international island within the, I guess, broader world that is Indiana. [00:02:58]Swyx: Fair enough. 
We can always dive into sort of flyover country or, you know, small town insights later, but you and I, all three of us actually recently connected at AIUX SF, which is the first AIUX meetup, essentially which just came out of like a Twitter conversation. You and I have been involved in HCI Twitter is kind of how I think about it for a little bit and when I saw that you were in town, Geoffrey Litt was in town, Maggie Appleton in town, all on the same date, I was like, we have to have a meetup and that's how this thing was born. Well, what did it look like from your end? [00:03:30]Linus: From my end, it looked like you did all of the work and I... [00:03:33]Swyx: Well, you got us the Notion. Yeah, yeah. [00:03:36]Linus: It was also in the Notion office, it was in the San Francisco one and then thereafter there was a New York one that I decided I couldn't make. But yeah, from my end it was, and I'm sure you were too, but I was really surprised by both the mixture of people that we ended up getting and the number of people that we ended up getting. There was just a lot of attention on, obviously there was a lot of attention on the technology itself of GPT and language models and so on, but I was surprised by the interest specifically on trying to come up with interfaces that were outside of the box and the people that were interested in that topic. And so we ended up having a packed house and lots of interesting demos. I've heard multiple people comment on the event afterwards that they were positively surprised by the mixture of both the ML, AI-focused people at the event as well as the interface HCI-focused people. [00:04:24]Swyx: Yeah. I kind of see you as one of the leading, I guess, AI UX people, so I hope that we are maybe starting a new discipline, maybe. [00:04:33]Linus: Yeah, I mean, there is this kind of growing contingency of people interested in exploring the intersection of those things, so I'm excited for where that's going to go. 
[00:04:41]Swyx: I don't know if it's worth going through favorite demos. It was a little while ago, so I don't know if... [00:04:48]Alessio: There was, I forget who made it, but there was this new document writing tool where you could apply brushes to different paragraphs. [00:04:56]Linus: Oh, this was Amelia's. Yeah, yeah, yeah. [00:04:58]Alessio: You could set a tone, both in terms of writer inspiration and then a tone that you wanted, and then you could drag and drop different tones into paragraphs and have the model rewrite them. It was the first time that it's not just auto-complete, there's more to it. And it's not asked in a prompt, it's this funny drag-an-emoji over it. [00:05:20]Linus: Right. [00:05:21]Swyx: I actually thought that you had done some kind of demo where you could select text and then augment it in different moods, but maybe it wasn't you, maybe it was just someone else [00:05:28]Linus: I had done something similar, with slightly different building blocks. I think Amelia's demo was, there was sort of a preset palette of brushes and you apply them to text. I had built something related last year, I prototyped a way to give people sliders for different semantic attributes of text. And so you could start with a sentence, and you had a slider for length and a slider for how philosophical the text is, and a slider for how positive or negative the sentiment in the text is, and you could adjust any of them in the language model, reproduce the text. Yeah, similar, but continuous control versus distinct brushes, I think is an interesting distinction there. [00:06:03]Swyx: I should add it for listeners, if you missed the meetup, which most people will have not seen it, we actually did a separate post with timestamps of each video, so you can look at that. [00:06:13]Alessio: Sorry, Linus, this is unrelated, but I think you build over a hundred side projects or something like that. A hundred? [00:06:20]Swyx: I think there's a lot of people... 
I know it's a hundred. [00:06:22]Alessio: I think it's a lot of them. [00:06:23]Swyx: A lot of them are kind of small. [00:06:25]Alessio: Yeah, well, I mean, it still counts. I think there's a lot of people that are excited about the technology and want to hack on things. Do you have any tips on how to box, what you want to build, how do you decide what goes into it? Because all of these things, you could build so many more things on top of it. Where do you decide when you're done? [00:06:44]Linus: So my projects actually tend to be... I think especially when people approach project building with a goal of learning, I think a common mistake is to be over-ambitious and sort of not scope things very tightly. And so a classic kind of failure mode is, you say, I'm really interested in learning how to use the GPT-4 API, and I'm also interested in vector databases, and I'm also interested in Next.js. And then you devise a project that's going to take many weeks, and you glue all these things together. And it could be a really cool idea, but then especially if you have a day job and other things that life throws you away, it's hard to actually get to a point where you can ship something. And so one of the things that I got really good at was saying, one, knowing exactly how quickly I could work, at least on the technologies that I knew well, and then only adding one new unknown thing to learn per project. So it may be that for this project, I'm going to learn how the embedding API works. Or for this project, I'm going to learn how to do vector stuff with PyTorch or something. And then I would scope things so that it fit in one chunk of time, like Friday night to Sunday night or something like that. And then I would scope the project so that I could ship something as much work as I could fit into a two-day period, so that at the end of that weekend, I could ship something. And then afterwards, if I want to add something, I have time to do it and a chance to do that. 
But it's already shipped, so there's already momentum, and people are using it, or I'm using it, and so there's a reason to continue building. So only adding one new unknown per project, I think, is a good trick. [00:08:14]Swyx: I first came across you, I think, because of Monocle, which is your personal search engine. And I got very excited about it, because I always wanted a personal search engine, until I found that it was in a language that I've never seen before. [00:08:25]Linus: Yeah, there's a towel tower of little tools and technologies that I built for myself. One of the other tricks to being really productive when you're building side projects is just to use a consistent set of tools that you know really, really well. For me, that's Go, and my language, and a couple other libraries that I've written that I know all the way down to the bottom of the stack. And then I barely have to look anything up, because I've just debugged every possible issue that could come up. And so I could get from start to finish without getting stuck in a weird bug that I've never seen before. But yeah, it's a weird stack. [00:08:58]Swyx: It also means that you probably are not aiming for, let's say, open source glory, or whatever. Because you're not publishing in the JavaScript ecosystem. Right, right. [00:09:06]Linus: I mean, I've written some libraries before, but a lot of my projects tend to be like, the way that I approach it is less about building something that other people are going to use en masse. And make yourself happy. Yeah, more about like, here's the thing that I built, if you want to, and often I learn something in the process of building that thing. So like with Monocle, I wrote a custom sort of full text search index. And I thought a lot of the parts of what I built was interesting. And so I just wanted other people to be able to look at it and see how it works and understand it. 
But the goal isn't necessarily for you to be able to replicate it and run it on your own. [00:09:36]Swyx: Well, we can kind of dive into your other AIUX thoughts. As you've been diving in, you tend to share a lot on Twitter. And I just kind of took out some of your greatest hits. This is relevant to the demo that you picked out, Alessio. And what we're talking about, which is, most knowledge work is not a text generation task. That's funny, because a lot of what Notion AI is, is text generation right now. Maybe you want to elaborate a little bit. Yeah. [00:10:01]Linus: I think the first time you look at something like GPT, the shape of the thing you see is like, oh, it's a thing that takes some input text and generates some output text. And so the easiest thing to build on top of that is a content generation tool. But I think there's a couple of other categories of things that you could build that are sort of progressively more useful and more interesting. And so besides content generation, which requires the minimum amount of wrapping around ChatGPT, the second tier up from that is things around knowledge, I think. So if you have, I mean, this is the hot thing with all these vector databases things going around. But if you have a lot of existing context around some knowledge about your company or about a field or all of the internet, you can use a language model as a way to search and understand things in it and combine and synthesize them. And that synthesis, I think, is useful. And at that point, I think the value that that unlocks, I think, is much greater than the value of content generation. Because most knowledge work, the artifact that you produce isn't actually about writing more words. Most knowledge work, the goal is to understand something, synthesize new things, or propose actions or other kinds of knowledge-to-knowledge tasks. And then the third category, I think, is automation. 
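The "knowledge" tier Linus describes — retrieving relevant context and having the model synthesize over it, rather than generate text from scratch — can be sketched without a real vector database, using naive keyword overlap as a stand-in for embedding similarity. The function names, scoring, and sample notes here are my own illustrations, not Notion's implementation.

```python
def retrieve(query, docs, k=2):
    """Rank documents by keyword overlap with the query (a toy stand-in
    for the embedding similarity a vector database would provide)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def synthesis_prompt(query, docs):
    """Pack retrieved context into a prompt so the LLM synthesizes an
    answer from existing knowledge instead of free-generating text."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer the question using only the notes below.\n"
        f"Question: {query}\nNotes:\n{context}"
    )

notes = [
    "Q3 roadmap: ship AI autofill for databases",
    "Hiring plan: two infra engineers in New York",
    "AI autofill is a feature where a model fills an entire database column",
]
prompt = synthesis_prompt("what is AI autofill", notes)
```

The point of the shape is that the model's job shifts from producing words to combining and condensing knowledge the user already has, which is where Linus argues most of the value sits.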
Which I think is sort of the thing that people are looking at most actively today, at least from my vantage point in the ecosystem. Things like the React prompting technique, and just in general, letting models propose actions or write code to accomplish tasks. That's also moving far beyond generating text to doing something more interesting. So much of the value of what humans sit down and do at work isn't actually in the words that they write. It's all the thinking that goes on before you write those words. So how can you get language models to contribute to those parts of work? [00:11:43]Alessio: I think when you first tweeted about this, I don't know if you already accepted the job, but you tweeted about this, and then the next one was like, this is a NotionAI subtweet. [00:11:53]Swyx: So I didn't realize that. [00:11:56]Alessio: The best thing that I see is when people complain, and then they're like, okay, I'm going to go and help make the thing better. So what are some of the things that you've been thinking about? I know you talked a lot about some of the flexibility versus intuitiveness of the product. The language is really flexible, because you can say anything. And it's funny, the models never ignore you. They always respond with something. So no matter what you write, something is going to come back. Sometimes you don't know how big the space of action is, how many things you can do. So as a product builder, how do you think about the trade-offs that you're willing to take for your users? Where like, okay, I'm not going to let you be as flexible, but I'm going to create this guardrails for you. What's the process to think about the guardrails, and how you want to funnel them to the right action? [00:12:46]Linus: Yeah, I think what this trade-off you mentioned around flexibility versus intuitiveness, I think, gets at one of the core design challenges for building products on top of language models. 
A lot of good interface design comes from tastefully adding the right constraints in place to guide the user towards actions that you want to take. As you add more guardrails, the obvious actions become more obvious. And one common way to make an interface more intuitive is to narrow the space of choices that the users have to make, and the number of choices that they have to make. And that intuitiveness, that source of intuitiveness from adding constraints, is kind of directly at odds with the reason that language models are so powerful and interesting, which is that they're so flexible and so general, and you can ask them to do literally anything, and they will always give you something. But most of the time, the answer isn't that high quality. And so there's kind of a distribution of, like, there are clumps of things in the action space of what a language model can do that the model's good at, and there's parts of the space where it's bad at. And so one sort of high-level framework that I have for thinking about designing with language models is, there are actions that the language model's good at, and actions that it's bad at. How do you add the right constraints carefully to guide the user and the system towards the things that the language model's good at? And then at the same time, how do you use those constraints to set the user expectations for what it's going to be good at and bad at? One way to do this is just literally to add those constraints and to set expectations. So a common example I use all the time is, if you have some AI system to answer questions from a knowledge base, there are a couple of different ways to surface that in a kind of a hypothetical product. One is, you could have a thing that looks like a chat window in a messaging app, and then you could tell the user, hey, this is for looking things up from a database. You can ask a question, then it'll look things up and give you an answer. 
But if something looks like a chat, and this is a lesson that's been learned over and over for anyone building chat interfaces since, like, 2014, 15, if you have anything that looks like a chat interface or a messaging app, people are going to put some, like, weird stuff in there that just don't look like the thing that you want the model to take in, because the expectation is, hey, I can use this like a messaging app, and people will send in, like, hi, hello, you know, weird questions, weird comments. Whereas if you take that same, literally the same input box, and put it in, like, a thing that looks like a search bar with, like, a search button, people are going to treat it more like a search window. And at that point, inputs look a lot more like keywords or a list of keywords or maybe questions. So the simple act of, like, contextualizing that input in different parts of an interface reset the user's expectations, which constrain the space of things that the model has to handle. And that you're kind of adding constraints, because you're really restricting your input to mostly things that look like keyword search. But because of that constraint, you can have the model fit the expectations better. You can tune the model to perform better in those settings. And it's also less confusing and perhaps more intuitive, because the user isn't stuck with this blank page syndrome problem of, okay, here's an input. What do I actually do with it? When we initially launched Notion AI, one of my common takeaways, personally, from talking to a lot of my friends who had tried it, obviously, there were a lot of people who were getting lots of value out of using it to automate writing emails or writing marketing copy. There were a ton of people who were using it to, like, write Instagram ads and then sort of paste it into the Instagram tool. But some of my friends who had tried it and did not use it as much, a frequently cited reason was, I tried it. It was cool. 
It was cool for the things that Notion AI was marketed for. But for my particular use case, I had a hard time figuring out exactly the way it was useful for my workflow. And I think that gets back at the problem of, it's such a general tool that just presented with a blank prompt box, it's hard to know exactly the way it could be useful to your particular use case. [00:16:21]Alessio: What do you think is the relationship between novelty and flexibility? I feel like we're in kind of like a prompting honeymoon phase where the tools are new and then everybody just wants to do whatever they want to do. And so it's good to give these interfaces because people can explore. But if I go forward in three years, ideally, I'm not prompting anything. The UX has been built for most products to already have the intuitive, kind of like a happy path built into it. Do you think there's merit in a way? If you think about ChatGPT, if it was limited, the reason why it got so viral is people were doing things that they didn't think a computer could do, like write poems and solve riddles and all these different things. How do you think about that, especially in Notion, where Notion AI is kind of like a new product in an existing thing? How much of it for you is letting that happen and seeing how people use it? And then at some point be like, okay, we know what people want to do. The flexibility is not, it was cool before, but now we just want you to do the right things with the right UX. [00:17:27]Linus: I think there's value in always having the most general input as an escape hatch for people who want to take advantage of that power. At this point, Notion AI has a couple of different manifestations in the product. There's the writer. There's a thing we called an AI block, which is a thing that you can always sort of re-update as a part of a document. It's like a live, a little portal inside the document that an AI can write.
We also have a relatively new thing called AI autofill, which lets an AI fill an entire column in a Notion database. In all of these things, speaking of adding constraints, we have a lot of suggested prompts that we've worked on and we've curated and we think work pretty well for things like summarization and writing drafts to blog posts and things. But we always leave a fully custom prompt for a few reasons. One is if you are actually a power user and you know how language models work, you can go in and write your custom prompt and if you're a power user, you want access to the power. The other is for us to be able to discover new use cases. And so one of the lovely things about working on a product like Notion is that there's such an enthusiastic and lively kind of community of ambassadors and people that are excited about trying different things and coming up with all these templates and new use cases. And having a fully custom action or prompt whenever we launch something new in AI lets those people really experiment and help us discover new ways to take advantage of AI. I think it's good in that way. There's also a sort of complement to that, which is if we wanted to use feedback data or learn from those things and help improve the way that we are prompting the model or the models that we're building, having access to that like fully diverse, fully general range of use cases helps us make sure that our models can handle the full generality of what people want to do. [00:19:06]Swyx: I feel like we've segway'd a lot into our Notion conversation and maybe I just wanted to bridge that a little bit with your personal journey into Notion before we go into Notion proper. You spent a year kind of on a sabbatical, kind of on your own self-guided research journey and then deciding to join Notion. I think a lot of engineers out there thinking about doing this maybe don't have the internal compass that you have or don't have the guts to basically make no money for a year. 
Maybe just share with people how you decided to basically go on your own independent journey and what got you to join Notion in the end. [00:19:42]Linus: Yeah, what happened? Um, yeah, so for a little bit of context for people who don't know me, I was working mostly at sort of seed stage startups as a web engineer. I actually didn't really do much AI at all prior to my year off. And then I took all of 2022 off, and it ended up, sort of in retrospect, becoming like a Linus Pivots to AI year, which was like beautifully well timed. But in the beginning of the year, there was kind of one key motivation and then one key kind of question that I had. The motivation was that I think I was at a sort of a privileged and fortunate enough place where I felt like I had some money saved up that I had saved up explicitly to be able to take some time off and investigate my own kind of questions because I was already working on lots of side projects and I wanted to spend more time on it. I think I also at that point felt like I had enough security in the companies and folks that I knew that if I really needed a job on short notice, I could go and I could find some work to do. So I wouldn't be completely on the streets. And so that security, I think, gave me the confidence to say, OK, let's try this kind of experiment.[00:20:52]Maybe it'll only be for six months. Maybe it'll be for a year. I had enough money saved up to last like a year and change. And so I had planned for a year off and I had one sort of big question that I wanted to explore. Having that single question, I think, actually was really helpful for focusing the effort instead of just being like, I'm going to side project for a year, which I think would have been less productive. And that big question was, how do we evolve text interfaces forward? So, so much of knowledge work is consuming walls of text and then producing more walls of text.
And text is so ubiquitous, not just in software, but just in general in the world. There are, like, signages and menus and books. And it's ubiquitous, but it's not very ergonomic. There's a lot of things about text interfaces that could be better. And so I wanted to explore how we could make that better. A key part of that ended up being, as I discovered, taking advantage of these new technologies that let computers make sense of text information. And so that's how I ended up sort of sliding into AI. But the motivation in the beginning was less focused on learning a new technology and more just on exploring this general question space. [00:21:53]Swyx: Yeah. You have the quote, text is the lowest denominator, not the end game. Right, right. [00:21:58]Linus: I mean, I think if you look at any specific domain or discipline, whether it's medicine or mathematics or software engineering, in any specific discipline where there's a narrower set of abstractions for people to work with, there are custom notations. One of the first things that I wrote in this exploration year was this piece called Notational Intelligence, where I talk about this idea that so much of, as a total sidebar, there's a whole other fascinating conversation that I would love to have at some point, maybe today, maybe later, about how to evolve a budding scene of research into a fully-fledged field. So I think AI UX is kind of in this weird stage where there's a group of interesting people that are interested in exploring this space of how do you design for this newfangled technology, and how do you take that and go and build best practices and powerful methods and tools [00:22:48]Swyx: We should talk about that at some point. [00:22:49]Linus: OK. But in a lot of established fields, there are notations that people use that really help them work at a slightly higher level than just raw words.
So notations for describing chemicals and notations for different areas of mathematics that let people work with higher-level concepts more easily. Logic, linguistics. [00:23:07]Swyx: Yeah. [00:23:07]Linus: And I think it's fair to say that some large part of human intelligence, especially in these more technical domains, comes from our ability to work with notations instead of work with just the raw ideas in our heads. And text is a kind of notation. It's the most general kind of notation, but it's also, because of its generality, not super high leverage if you want to go into these specific domains. And so I wanted to try to improve on that frontier. [00:23:29]Swyx: Yeah. You said in our show notes, one of my goals over the next few years is to ensure that we end up with interface metaphors and technical conventions that set us up for the best possible timeline for creativity and inventions ahead. So part of that is constraints. But I feel like that is one part of the equation, right? What's the other part that is more engenders creativity? [00:23:47]Linus: Tell me a little bit about that and what you're thinking there. [00:23:51]Swyx: It's just, I feel like, you know, we talked a little bit about how you do want to constrain, for example, the user interface to guide people towards things that language models are good at. And creative solutions do arise out of constraints. But I feel like that alone is not sufficient for people to invent things. [00:24:10]Linus: I mean, there's a lot of directions, I think, that could go from that. The origin of that thing that you're quoting is when I decided to come help work on AI at Notion, a bunch of my friends were actually quite surprised, I think, because they had expected that I would have gone and worked… [00:24:29]Swyx: You did switch. I was eyeing that for you. [00:24:31]Linus: I mean, I worked at a lab or at my own company or something like that.
But one of the core motivations for me joining an existing company and one that has lots of users already is this exact thing where in the aftermath of a new foundational technology emerging, there's kind of a period of a few years where the winners in the market get to decide what the default interface paradigm for the technology is. So, like, mini computers, personal computers, the winners of that market got to decide what windows are and how scrolling works and what a mouse cursor is and how text is edited. Similar with mobile, the concept of a home screen and apps and things like that, the winners of the market got to decide. And that has profound, like, I think it's difficult to overstate the importance of, in those few critical years, the winning companies in the market choosing the right abstractions and the right metaphors. And AI, to me, seemed like it's at that pivotal moment where it's a technology that lots of companies are adopting. There is this well-recognized need for interface best practices. And Notion seemed like a company that had this interesting balance of it could still move quickly enough and ship and prototype quickly enough to try interesting interface ideas. But it also had enough presence in the ecosystem that if we came up with the right solution or one that we felt was right, we could push it out and learn from real users and iterate and hopefully be a part of that story of setting the defaults and setting what the dominant patterns are. [00:26:07]Swyx: Yeah, it's a special opportunity. One of my favorite stories or facts is it was like a team of 10 people that designed the original iPhone. And so all the UX that was created there is essentially what we use as smartphones today, including predictive text, because people were finding that people were kind of missing the right letters. So they just enhanced the hit area for certain letters based on what you're typing.
[00:26:28]Linus: I mean, even just the idea of like, we should use QWERTY keyboards on tiny smartphone screens. Like that's a weird idea, right? [00:26:36]Swyx: Yeah, QWERTY is another one. So I have RSI. So this actually affects me. QWERTY was specifically chosen to maximize travel distance, right? Like it's actually not ergonomic by design because you wanted the keys on typewriters to not stick. But we don't have that anymore. We're still sticking to QWERTY. I'm still sticking to QWERTY. I could switch to the other ones, I forget, Dvorak or Colemak, anytime, but I don't just because of inertia. I have another thing like this. [00:27:02]Linus: So going even farther back, people don't really think enough about where this concept of buttons come from, right? So the concept of a push button as a thing where you press it and it activates some binary switch. I mean, buttons have existed for, like mechanical buttons have existed for a long time. But really, like this modern concept of a button that activates a binary switch really gets, like, popularized by the advent of electricity. Before electricity, if you had a button that did something, you would have to construct a mechanical system where if you press down on a thing, it affects some other lever system that produces, like, the final action. And this modern idea of a button that is just a binary switch gets popularized with electricity. And at that point, a button has to work in the way that it does in like an alarm clock, because when you press down on it, there's like a spring that makes sure that the button comes back up and that it completes the circuit. And so that's the way the button works. And then when we started writing graphical interfaces, we just took that idea of a thing that could be depressed to activate a switch.
All the modern buttons that we have today in software interfaces are like simulating electronic push buttons where you like press down to complete a circuit, except there's actually no circuit being completed. It's just like a square on a screen. [00:28:11]Swyx: It's all virtualized. Right. [00:28:12]Linus: And then you control the simulation of a button by clicking a physical button on a mouse. Except if you're on a trackpad, it's not even a physical button anymore. It's like a simulated button in hardware that controls a simulated button in software. And it's also just this cascade of like conceptual backwards compatibility that gets us here. I think buttons are interesting. [00:28:32]Alessio: Where are you on the skeuomorphic design love-hate spectrum? There's people that have like high nostalgia for like the original, you know, the YouTube icon on the iPhone with like the knobs on the TV. [00:28:42]Linus: I think a big part of that is at least the aesthetic part of it is fashion. Like fashion taken very literally, like in the same way that the early, like, Y2K 90s aesthetic comes and goes. I think skeuomorphism as expressed in like the early iPhone or like Windows XP comes and goes. There's another aspect of this, which is the part of skeuomorphism that helps people understand and intuit software, which has less to do with skeuomorphism making things easier to understand per se and more about, like, a slightly more general version of skeuomorphism, which is: there should be a consistent mental model behind an interface that is easy to grok. And then once the user has the mental model, even if it's not the full model of exactly how that system works, there should be a simplified model that the user can easily understand and then sort of like adopt and use. One of my favorite examples of this is how volume controls that are designed well often work.
Like on an iPhone, when you make your iPhone volume twice as loud, the sound that comes out isn't actually like at a physical level twice as loud. It's on a log scale. When you push the volume slider up on an iPhone, the speaker uses like four times more energy, but humans perceive it as twice as loud. And so the mental model that we're working with is, okay, if I make this, this volume control slider have two times more value, it's going to sound two times louder, even though actually the underlying physics is like on a log scale. But what actually happens physically is not actually what matters. What matters is how humans perceive it in the model that I have in my head. And there, I think there are a lot of other instances where the skeuomorphism isn't actually the thing. The thing is just that there should be a consistent mental model. And often the easy, consistent mental model to reach for is the models that already exist in reality, but not always. [00:30:23]Alessio: I think the other big topic, maybe before we dive into Notion is agents. I think that's one of the toughest interfaces to crack, mostly because, you know, the text box, everybody understands that the agent is kind of like, it's like human-like feeling, you know, where it's like, okay, I'm kind of delegating something to a human, right? I think, like, Sean, you made the example of like a Calendly, like a SavvyCal, it's like an agent, because it's scheduling on your behalf for something. [00:30:51]Linus: That's actually a really interesting example, because it's a kind of a, it's a pretty deterministic, like there's no real AI to it, but it is an agent in the sense that you're, like, delegating to it and automating something. [00:31:01]Swyx: Yeah, it does work without me. It's great. [00:31:03]Alessio: So that one, we figured out. Like, we know what the scheduling interface is like. [00:31:07]Swyx: Well, that's the state of the art now.
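The volume-slider mental model described above can be written down: the slider is linear in perceived loudness while the underlying speaker power grows exponentially. A minimal sketch, using the 4x-energy-per-perceived-doubling figure from the conversation; the number of doublings across the slider's range is an illustrative assumption:

```python
def slider_to_power(slider: float, doublings: int = 4) -> float:
    """Map a linear volume slider in [0, 1] to relative speaker power.

    Each perceived doubling of loudness costs roughly 4x the energy
    (per the example above), so power is exponential in slider
    position, normalized so slider=1.0 means full power.
    """
    return 4.0 ** (doublings * (slider - 1.0))


def perceived_loudness(slider: float, doublings: int = 4) -> float:
    """Perceived loudness doubles each time power quadruples, so it is
    linear-feeling in slider position: each equal step up the slider
    sounds like the same relative jump in loudness."""
    return 2.0 ** (doublings * (slider - 1.0))
```

The user's mental model only ever touches `perceived_loudness`; the interface quietly handles the exponential `slider_to_power` mapping underneath.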
But, you know, for example, the person I'm corresponding with still has to pick a time from my calendar, which some people dislike. Sam Lessin famously says it's a sign of disrespect. I disagree with him, but, you know, it's a point of view. There could be some intermediate AI agents that would send emails back and forth like a human person to give the other person who feels slighted that sense of respect or a personalized touch that they want. So there's always ways to push it. [00:31:39]Alessio: Yeah, I think for me, you know, other stuff that I think about, so we were doing prep for another episode and had an agent and asked it to do like a, you know, background prep on like the background of the person. And it just couldn't quite get the format that I wanted it to be, you know, and the only way I had to prompt that was, like, give it a text example, give a text example. What do you think, like, the interface between humans and agents in the future will be like? Do you still think agents are like this open ended thing that are like objective driven where you say, Hey, this is what I want to achieve versus I only trust this agent to do X. And like, this is how X is done. I'm curious because that kind of seems like a lot of mental overhead, you know, to remember each agent for each task versus like if you have an executive assistant, like they'll do a random set of tasks and you can trust them because they're a human. But I feel like with agents, we're not quite there. [00:32:36]Swyx: Agents are hard. [00:32:36]Linus: The design space is just so vast. Since all of the like early agent stuff came out around auto GPT, I've tried to develop some kind of a thesis around it. And I think it's just difficult because there's so many variables. One framework that I usually apply to sort of like existing chat based prompting kind of things that I think also applies just as well to agents is this duality between what you might call like trust and control.
So you just now you brought up this example of you had an agent try to write up some prep document for an episode and it couldn't quite get the format right. And one way you could describe that is you could say, Oh, the, the agent didn't exactly do what I meant and what I had in my head. So I can't trust it to do the right job. But a different way to describe it is I have a hard time controlling exactly the output of the model and I have a hard time communicating exactly what's in my head to the model. And they're kind of two sides of the same coin. I think if you can somehow provide a way to, with less effort, communicate with and control and constrain the model output a little bit more and constrain the behavior a little bit more, I think that would alleviate the pressure for the model to be this like fully trusted thing because there's no need for trust anymore. There's just kind of guardrails that ensure that the model does the right thing. So developing ways and interfaces for these agents to be a little more constrained in its output or maybe for the human to control its output a little bit more or behavior a little bit more, I think is a productive path. Another sort of more, more recent revelation that I had while working on this AI autofill thing inside Notion is the importance of zones of influence for AI agents, especially in collaborative settings. So having worked on lots of interfaces for independent work on my year off, one of the surprising lessons that I learned early on when I joined Notion was that collaboration permeates everything, which is great for Notion because when collaborating with an AI, you reuse a lot of the same metaphors as for collaborating with humans.
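One concrete way to trade trust for control, as described above, is to put a validator between the model and the user: the model's raw output only gets through if it fits the constraint, and otherwise the system retries. This is a generic guardrail sketch under assumed names, not any particular agent framework's API:

```python
import json
from typing import Callable, TypeVar

T = TypeVar("T")


def constrained_generate(
    generate: Callable[[], T],
    is_valid: Callable[[T], bool],
    max_tries: int = 3,
) -> T:
    """Retry a generator until its output passes a validator.

    `generate` stands in for any model call; the guardrail means the
    caller no longer has to blindly trust a single raw completion.
    """
    last = None
    for _ in range(max_tries):
        last = generate()
        if is_valid(last):
            return last
    raise ValueError(f"no valid output after {max_tries} tries: {last!r}")


def is_json_object(text: str) -> bool:
    """Example constraint: the output must parse as a JSON object."""
    try:
        return isinstance(json.loads(text), dict)
    except ValueError:
        return False
```

A validator like `is_json_object` is exactly the kind of constraint that narrows the model's action space: you stop needing to trust the model, because outputs that miss the format never reach the user.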
So one nice thing about this autofill thing that also kind of applies to AI blocks, which is another thing that we have, is that you don't run into this problem of having to ask questions like, oh, is this document written by an AI or is this written by a human? Like this need for auditability, because the part that's written by the AI is just in like the autofilled cell or in the AI block. And you can, you can tell that's written by the AI and things outside of it, you can kind of reasonably assume that it was written by you. I think anytime you have sort of an unbounded action space for, for models like agents, it's especially important to be able to answer those questions easily and to have some sense of security that in the same way that you want to know whether your like coworker or collaborator has access to a document or has modified a document, you want to know whether an AI has permissions to access something. And if it's modified something or made some edit, you want to know that it did it. And so as a complement to constraining the model's action space proactively, I think it's also important to communicate, to have the user have an easy understanding of, like, what exactly did the model do here? And I think that helps build trust as well. [00:35:39]Swyx: Yeah. I think for auto GPT and those kinds of agents in particular, anything that is destructive, you need to prompt for, I guess, or like check with, check in with the user. I know it's overloaded now. I can't say that. You have to confirm with the user. You confirm to the user. Yeah, exactly. Yeah. Yeah. [00:35:56]Linus: That's tough too though, because you, you don't want to stop. [00:35:59]Swyx: Yeah. [00:35:59]Linus: One of the, one of the benefits of automating these things that you can sort of like, in theory, you can scale them out arbitrarily.
I can have like a hundred different agents working for me, but if that means I'm just spending my entire day in a deluge of notifications, that's not ideal either. [00:36:12]Swyx: Yeah. So then it could be like a reversible, destructive thing with some kind of timeouts, a time limit. So you could reverse it within some window. I don't know. Yeah. I've been thinking about this a little bit because I've been working on a small developer agent. Right. Right. [00:36:27]Linus: Or maybe you could like batch a group of changes and sort of, like, summarize them with another AI and approve them in bulk or something. [00:36:33]Swyx: Which is surprisingly similar to the collaboration problem. Yeah. Yeah. Yeah. Exactly. Yeah. [00:36:39]Linus: I'm telling you, the collaboration, a lot of the problems with collaborating with humans also apply to collaborating with AI. There's a potential pitfall to that as well, which is that you end up missing out on some of the core advantages of AI if you just fully anthropomorphize them into, like, human-like collaborators. [00:36:56]Swyx: But yeah. Do you have a strong opinion on that? Like, do you refer to it as it? Oh yeah. [00:37:00]Linus: I'm an it person, at least for now, in 2023. Yeah. [00:37:05]Swyx: So that leads us nicely into introducing what Notion and Notion AI is today. Do you have a pet answer as to what is Notion? I've heard it introduced as a database, a WordPress killer, a knowledge base, a collaboration tool. What is it? Yeah. [00:37:19]Linus: I mean, the official answer is that Notion is a connected workspace. It has a space for your company docs, meeting notes, a wiki for all of your company notes. You can also use it to orchestrate your workflows if you're managing a project, if you have an engineering team, if you have a sales team. You can put all of those in a single Notion database.
And the benefit of Notion is that all of them live in a single space where you can link to your wiki pages from your, I don't know, like onboarding docs. Or you can link to a GitHub issue through a task from your documentation on your engineering system. And all of this existing in a single place in this kind of like unified, yeah, like single workspace, I think has lots of benefits. [00:37:58]Swyx: That's the official line. [00:37:59]Linus: There's an asterisk that I usually enjoy diving deeper into, which is that the whole reason that this connected workspace is possible is because underlying all of this is this really cool abstraction of blocks. In Notion, everything is a block. A paragraph is a block. A bullet point is a block. But also a page is a block. And the way that Notion databases work is that a database is just a collection of pages, which are really blocks. And you can like take a paragraph and drag it into a database and it'll become a page. You can take a page inside a database and pull it out and it'll just become a link to that page. And so this core abstraction of a block that can also be a page, that can also be a row in a database, like an Excel sheet, that fluidity and this like shared abstraction across all these different areas inside Notion, I think is what really makes Notion powerful. This Lego theme, this like Lego building block theme permeates a lot of different parts of Notion. Some fans of Notion might know that when you, or when you join Notion, you get a little Lego minifigure, which has Lego building blocks for workflows. And then every year you're at Notion, you get a new block that says like you've been here for a year, you've been here for two years. And then Simon, our co-founder and CTO, has a whole crate of Lego blocks on his desk that he just likes to mess with because, you know, he's been around for a long time. 
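The "everything is a block" idea, where a paragraph dragged into a database becomes a page, can be sketched as a toy data model. This is an illustration of the abstraction as described in the conversation, not Notion's real schema:

```python
from dataclasses import dataclass, field


@dataclass
class Block:
    """Everything is a block: a paragraph, a bullet, even a page.

    Pages are just blocks that can hold children; a database is just
    a collection of page blocks.
    """
    kind: str                      # e.g. "paragraph", "bullet", "page"
    text: str = ""
    children: list = field(default_factory=list)


def drag_into_database(block: Block, database: list) -> Block:
    """Dragging a paragraph into a database promotes it to a page:
    it's the same block, retyped, with its text as the page title."""
    block.kind = "page"
    database.append(block)
    return block


# A paragraph becomes a row (really: a page) in a hypothetical database.
roadmap_db: list = []
drag_into_database(Block("paragraph", "Q3 roadmap"), roadmap_db)
```

The fluidity comes from the fact that promotion changes only the block's role, not its identity: the paragraph and the resulting page are the same object, which is what lets content move between documents and databases without copying.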
But this Lego building block thing, this like shared sort of all-encompassing single abstraction that you can combine to build various different kinds of workflows, I think is really what makes Notion powerful. And one of the sort of background questions that I have for Notion AI is like, what is that kind of building block for AI? [00:39:30]Swyx: Well, we can dive into that. So what is Notion AI? Like, so I kind of view it as like a startup within the startup. Could you describe the Notion AI team? Is this like, how seriously is Notion taking the AI wave? [00:39:43]Linus: The most seriously? The way that Notion AI came about, as I understand it, because I joined a bit later, I think it was around October last year, all of the Notion team had a little offsite. And as a part of that, Ivan and Simon kind of went into a little kind of hack weekend. And the thing that they ended up hacking on inside Notion was the very, very early prototype of Notion AI. They saw this GPT-3 thing. The early, early motivation for starting Notion, building Notion in the first place for them, was sort of grounded in this utopian end-user programming vision where software is so powerful, but there are only so many people in the world that can write programs. But everyone can benefit from having a little workspace or a little program or a little workflow tool that's programmed to just fit their use case. And so how can we build a tool that lets people customize their software tools that they use every day for their use case? And I think to them, AI seemed like such a critical part of facilitating that, of bridging the gap between people who can code and people who need software. And so they saw that, they tried to build an initial prototype that ended up becoming the first version of Notion AI. They had a prototype in, I think, late October, early November, before ChatGPT came out and sort of evolved it over the few months.
But what ended up launching was sort of in line with the initial vision, I think, of what they ended up building. And then once they had it, I think they wanted to keep pushing it. And so at this point, AI is a really key part of Notion strategy. And what we see Notion becoming going forward, in the same way that blocks and databases are a core part of Notion that helps enable workflow automation and all these important parts of running a team or collaborating with people or running your life, we think that AI is going to become an equally critical part of what Notion is. And it won't be, Notion is a cool connected workspace app, and it also has AI. It'll be that what Notion is, is databases, it has pages, it has space for your docs, and it also has this sort of comprehensive suite of AI tools that permeate everything. And one of the challenges of the AI team, which is, as you said, kind of a startup within a startup right now, is to figure out exactly what that all-permeating kind of abstraction means, which is a fascinating and difficult open problem. [00:41:57]Alessio: How do you think about what people expect of Notion versus what you want to build in Notion? A lot of this AI technology kind of changes, you know, we talked about the relationship between text and human and how human collaborates. Do you put any constraints on yourself when it's like, okay, people expect Notion to work this way with these blocks. So maybe I have this crazy idea and I cannot really pursue it because it's there. I think it's a classic innovator's dilemma kind of thing. And I think a lot of founders out there that are in a similar position where it's like, you know, series C, series D company, it's like, you're not quite yet the super established one, you're still moving forward, but you have an existing kind of following and something that Notion stands for. How do you kind of wrangle with that? 
[00:42:43]Linus: Yeah, that is in some ways a challenge, in that Notion already is a kind of a thing. And so we can't just scrap everything and start over. But I think it's also, there's a blessing side of it too, in that because there are so many people using Notion in so many different ways, we understand all of the things that people want to use Notion for very well. And so we already have a really well-defined space of problems that we want to help people solve. And that helps us. We have it with the existing Notion product and we also have it by sort of rolling out these AI things early and then watching, learning from the community what people want to do [00:43:17]Swyx: with them. [00:43:17]Linus: And so based on those learnings, I think it actually sort of helps us constrain the space of things we think we need to build because otherwise the design space is just so large with whatever we can do with AI and knowledge work. And so watching what people have been using Notion for and what they want to use Notion for, I think helps us constrain that space a little bit and make the problem of building AI things inside Notion a little more tractable. [00:43:36]Swyx: I think also just observing what they naturally use things for, and it sounds like you do a bunch of user interviews where you hear people running into issues and, or describe them as, the way that I describe myself actually is, I feel like the problem is with me, that I'm not creative enough to come up with use cases to use Notion AI or any other AI. [00:43:57]Linus: Which isn't necessarily on you, right? [00:43:59]Swyx: Exactly. [00:43:59]Linus: Again, like it goes way back to the early, the thing we touched on early in the conversation around like, if you have too much generality, there are not enough guardrails to obviously point to use cases. Blank piece of paper. [00:44:10]Swyx: I don't know what to do with this.
So I think a lot of people judge Notion AI based on what they originally saw, which is write me a blog post or do a summary or do action items. Which, fun fact, for Latent Space, my very, very first Hacker News hit was reverse engineering Notion AI. I actually don't know if I got it exactly right. I think I got the easy ones right. And then apparently I got the action items one really wrong. So there's some art into doing that. But also you've since launched a bunch of other products and maybe you've already hinted at AI Autofill. Maybe we can just talk a little bit about what the scope or suite of Notion AI products has been so far and what you're launching this week? Yeah. [00:44:53]Linus: So we have, I think, three main facets of Notion AI at the moment. We have the first thing that ever launched with Notion AI, which helps you write. It's, going back to earlier in the conversation, kind of a content generation tool. If you have a document and you want to generate a summary, it helps you generate a summary, pull out action items, you can draft a blog post, it can help improve your writing, it can help fix grammar and spelling mistakes. But under the hood, it's a fairly lightweight, thin layer of prompts. Otherwise, it's a pretty straightforward use case of language models, right? And so there's that: a tool that helps you write documents. There's a thing called an AI block, which is a slightly more constrained version of that, where one common way that we use it inside Notion is we take all of our meeting notes inside Notion. And frequently when you have a meeting and you want other people to be able to go back to it and reference it, it's nice to have a summary of that meeting. So all of our meeting notes templates, at least on the AI team, have an AI block at the top that automatically summarizes the contents of that page.
And so whenever we're done with a meeting, we just press a button and it'll re-summarize that, including things like what are the core action items for every person in the meeting. And so that block, as I said before, is nice because it's a constrained space for the AI to work in, and we don't have to prompt it every single time. And then the newest member of this AI collection of features is AI autofill, which brings Notion AI to databases. So if you have a whole database of user interviews and you want to pull out what are the companies' core pain points, what are their core features, maybe what are the competitor products they use, you can just make columns. And in the same way that you write Excel formulas, you can write a little AI formula, basically, where the AI will look at the contents of the page and pull out each of these key pieces of information. The slightly new thing that autofill introduces is this idea of a more automated background [00:46:43]Swyx: AI thing. [00:46:44]Linus: So with Writer, the AI in your document product, and the AI block, you always have to ask it to update. You always have to ask it to rewrite. But if you have a column in a database, in a Notion database, or a property in a Notion database, it would be nice if, whenever someone went back and changed the contents of the meeting note or something updated about the page, or maybe it's a list of tasks that you have to do and the status of the task changes, you might want the summary of that task or detail of the task to update. And so you can set up an autofilled Notion property so that anytime something on that database row or page changes, the AI will go back and sort of auto-update the autofilled value.
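The auto-updating behavior described above can be sketched as a derived property that is only recomputed when the underlying page content changes. This is a minimal illustration, not Notion's actual implementation; the class name and the `call_model` callable are hypothetical stand-ins for whatever LLM call backs the feature:

```python
import hashlib
from typing import Callable, Dict


class AutofillProperty:
    """A database property whose value a model derives from page content.

    Hypothetical sketch: the model is re-run only when the page text
    changes, mirroring the background auto-update idea. `call_model`
    stands in for a real LLM call.
    """

    def __init__(self, instruction: str, call_model: Callable[[str], str]):
        self.instruction = instruction
        self.call_model = call_model
        self._cache: Dict[str, str] = {}

    def value_for(self, page_text: str) -> str:
        # Hash the page contents so unchanged pages reuse the cached value.
        digest = hashlib.sha256(page_text.encode("utf-8")).hexdigest()
        if digest not in self._cache:
            # Content changed (or first run): recompute in the background.
            self._cache[digest] = self.call_model(
                f"{self.instruction}\n\n{page_text}"
            )
        return self._cache[digest]
```

A caller would wire `call_model` to a real model; hashing the page text is one simple way to detect that a row's source content changed and a recompute is due.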
And that, I think, is a really interesting part that we might continue leaning into of like, even though there's AI now tied to this particular page, it's sort of doing its own thing in the background to help automate and alleviate some of that pain of automating these things. But yeah, Writer, Blocks, and Autofill are the three sort of cornerstones we have today. [00:47:42]Alessio: You know, there used to be this glorious time where like, Roam Research was like the hottest knowledge company out there, and then Notion built Backlinks. I don't know if we are to blame for that. No, no, but how do Backlinks play into some of this? You know, I think most AI use cases today are kind of like a single page, right? Kind of like this document. I'm helping with this. Do you see some of these tools expanding to do changes across things? So we just had Itamar from Codium on the podcast, and he talked about how agents can tie in specs for features, tests for features, and the code for the feature. So like the three entities are tied together. Like, do you see some Backlinks help AI navigate through knowledge bases of companies where like, you might have the document the product uses, but you also have the document that marketing uses to then announce it? And as you make changes, the AI can work through different pieces of it? [00:48:41]Swyx: Definitely. [00:48:41]Linus: If I may get a little theoretical from that. One of my favorite ideas from my last year of hacking around building text augmentations with AI for documents is this realization that, you know, when you look at code in a code editor, what it is at the very lowest level is just text files. A code file is a text file, and there are maybe functions inside of it, and it's a list of functions, but it's a text file.
But the way that you understand it is not as a file, like a Word document; it's kind of a graph. [00:49:10]Linus: Like you have a function, you have call sites to that function, there are places where you call that function, there's a place where that function is tested, many different definitions for that function. Maybe there's a type definition that's tied to that function. So it's kind of a graph. And if you want to understand that function, there are advantages to being able to traverse that whole graph and fully contextualize where that function is used. Same with types and same with variables. And so even though code is represented as text files, it's actually kind of a graph. And a lot of the key interface innovations behind IDEs are about helping surface that graph structure in the context of a text file. So things like go to definition, or VS Code's little window view when you look at references. An interesting idea that I explored last year was: what if you bring that to text documents? Text documents are a little more unstructured, so there's a fuzzier kind of graph idea. But if you're reading a textbook, if there's a new term, there are actually other places where the term is mentioned. There are probably a few places where it's defined. Maybe there are some figures that reference that term. If you have an idea, there are other parts of the document where the document might disagree with that idea or cite that idea. So there's still kind of a graph structure. It's a little more fuzzy, but there's a graph structure that ties together a body of knowledge. And it would be cool if you had some kind of a text editor or some kind of knowledge tool that let you explore that whole graph. Or maybe if an AI could explore that whole graph. And so back to your point, I think taking advantage of not just the backlinks. Backlinks is a part of it.
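The fuzzy "graph over prose" idea above can be sketched as a simple inverted index from terms to the documents that mention them, a crude analogue of an IDE's find-all-references applied to text. This is an illustrative sketch, not anything from Notion; the function and document names are invented:

```python
import re
from collections import defaultdict


def build_mention_graph(docs, terms):
    """Map each term to the documents that mention it.

    A crude "find all references" for prose: `docs` is a {name: text}
    dict, `terms` a list of phrases to track across the workspace.
    """
    graph = defaultdict(list)
    for name, text in docs.items():
        lowered = text.lower()
        for term in terms:
            # Whole-word match so a term doesn't fire inside a longer word.
            if re.search(r"\b" + re.escape(term.lower()) + r"\b", lowered):
                graph[term].append(name)
    return dict(graph)
```

A real tool would want fuzzier matching (synonyms, embeddings) rather than exact phrases, but even this literal version gives a traversable term-to-document graph.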
But the fact that all of these pages inside Notion exist in a single workspace means it's a shared context. It's a connected workspace. And you can take any idea and look up anywhere to fully contextualize what a part of your engineering system design means. Or what we know about our pitching their customer at a company. Or if I wrote down a book, what are other places where that book has been mentioned? All these graph following things, I think, are really important for contextualizing knowledge. [00:51:02]Swyx: Part of your job at Notion is prompt engineering. You are maybe one of the more advanced prompt engineers that I know out there. And you've always commented on the state of prompt ops tooling. What is your process today? What do you wish for? There's a lot here. [00:51:19]Linus: I mean, the prompts that are inside Notion right now, they're not complex in the sense that agent prompts are complex. But they're complex in the sense that there is even a problem as simple as summarize a [00:51:31]Swyx: page. [00:51:31]Linus: A page could contain anything from no information, if it's a fresh document, to a fully fledged news article. Maybe it's a meeting note. Maybe it's a bug filed by somebody at a company. The range of possible documents is huge. And then you have to distill all of it down to always generate a summary. And so describing that task to AI comprehensively is pretty hard. There are a few things that I think I ended up leaning on, as a team we ended up leaning on, for the prompt engineering part of it. I think one of the early transitions that we made was that the initial prototype for Notion AI was built on instruction following, the sort of classic instruction-following models, text-davinci-003, and so on. And then at some point, we all switched to chat-based models, like Claude and the new ChatGPT Turbo and these models. And so that was an interesting transition.
It actually kind of made few-shot prompting a little bit easier, I think, in that you could give the few-shot examples as sort of previous turns in a conversation. And then you could ask the real question as the next follow-up turn. I've come to appreciate few-shot prompting a lot more because it's difficult to fully comprehensively explain a particular task in words, but it's pretty easy to demonstrate like four or five different edge cases that you want the model to handle. And a lot of times, if there's an edge case that you want a model to handle, I think few-shot prompting is just the easiest, most reliable tool to reach for. One challenge in prompt engineering that Notion has to contend with often is we want to support all the different languages that Notion supports. And so all of our prompts have to be multilingual-compatible, which is kind of tricky because our prompts are written, our instructions are written in English. And so if you just have a naive approach, then the model tends to output in English, even when the document that you want to translate or summarize is in French. And so one way you could try to attack that problem is to tell the model, answer in the language of the user's query. But it's actually a lot more effective to just give it examples of not just English documents: maybe summarizing an English document, maybe summarizing a ticket filed in French, summarizing an empty document where the document's supposed to be in Korean. And so a lot of the few-shot examples included in Notion AI's prompts tend to be very multilingual, and that helps support our non-English-speaking users. The other big part of prompt engineering is evaluation. The prompts that you exfiltrated out of Notion AI many weeks ago were surprisingly pretty spot-on, at least for the prompts that we had then, especially things like summary. But they're also outdated because we've evolved them a lot more, and we have a lot more examples.
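The two tricks described above, few-shot examples packed as prior chat turns and deliberately multilingual examples, can be sketched together. The message format mirrors common chat-model APIs; the example documents and summaries here are invented for illustration, not taken from Notion's prompts:

```python
def build_summary_messages(document, examples):
    """Pack few-shot examples as prior (user, assistant) chat turns,
    then ask the real question as the final turn."""
    messages = [{
        "role": "system",
        "content": "Summarize the document. Answer in the document's language.",
    }]
    for example_doc, example_summary in examples:
        messages.append({"role": "user", "content": example_doc})
        messages.append({"role": "assistant", "content": example_summary})
    messages.append({"role": "user", "content": document})
    return messages


# Mixing languages in the examples nudges the model to answer in the
# input document's language rather than defaulting to English.
EXAMPLES = [
    ("The deploy failed because the config was stale.",
     "A stale config caused the deploy failure."),
    ("Le déploiement a échoué à cause d'une configuration obsolète.",
     "Une configuration obsolète a fait échouer le déploiement."),
]
```

The resulting list would be passed as the `messages` argument of a chat completion call; demonstrating a French input with a French summary does the work that an instruction alone often fails to do.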
And some of our prompts are just really, really long. They're like thousands of tokens long. And so every time we go back and add an example or modify the instruction, we want to make sure that we don't regress any of the previous use cases that we've supported. And so we put a lot of effort, and we're increasingly building out internal tooling infrastructure, for things like what you might call unit tests and regression tests for prompts, with handwritten test cases, as well as tests that are driven more by feedback from Notion users that have chosen to share their feedback with us. [00:54:31]Swyx: So you just have a hand-rolled testing framework, or do you use Jest or whatever? Nothing custom out there? You basically said you've looked at so many prompt ops tools and you're sold on none of them. [00:54:42]Linus: So that tweet was from a while ago. I think there are a couple of interesting tools these days. But I think at the moment, Notion uses pretty hand-rolled tools. Nothing too heavy, but it's basically a for loop over a list of test cases. We do do quite a bit of using language models to evaluate language models. So our unit test descriptions are kind of funny, because the test is literally just an input document and a query, and then we expect the model to say something. And then our qualification for whether that test passes or not is just to ask the language model again whether it looks like a reasonable summary or whether it's in the right language. [00:55:19]Swyx: Do you use the same model? Do you have Anthropic criticizing OpenAI, or OpenAI criticizing Anthropic? That's a good question. Do you worry about models being biased towards themselves? [00:55:29]Linus: Oh, no, that's not a worry that we have. I actually don't know exactly if we use different models. If you have a fixed budget for running these tests, I think it would make sense to use more expensive models for evaluation rather than generation. But yeah, I don't remember exactly what we do there.
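The hand-rolled setup described above, "a for loop over a list of test cases" with a language model judging the output, might look roughly like this. The `generate` and `judge` callables are stand-ins for real model calls; nothing here is Notion's actual harness:

```python
def run_prompt_regressions(test_cases, generate, judge):
    """Hand-rolled regression loop: generate an answer for each case,
    then ask a (possibly different) model whether it looks acceptable.

    Returns the names of the failing cases.
    """
    failures = []
    for case in test_cases:
        answer = generate(case["document"], case["query"])
        # LLM-as-judge: the pass/fail criterion is itself a model call.
        verdict = judge(
            f"Query: {case['query']}\nAnswer: {answer}\n"
            "Is this a reasonable answer in the right language? yes/no"
        )
        if not verdict.strip().lower().startswith("yes"):
            failures.append(case["name"])
    return failures
```

As noted in the conversation, with a fixed budget it can make sense to use a cheaper model for `generate` and a more capable one for `judge`, since evaluation quality bounds how much you trust the whole suite.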
[00:55:44]Swyx: And then one more follow-up: you mentioned some of your prompts are thousands of tokens. That takes away from my budget as a user. Isn't that a trade-off that's a concern? So there's a limited context window, right? Some of that is taken by you as the app designer, product designer, deciding what system prompt to provide. And then the

Keen On Democracy
Why Smart Machines Know Us So Well: John Borthwick on how today's AI revolution is being built upon social media data

Keen On Democracy

Play Episode Listen Later May 7, 2023 35:05


EPISODE 1481: In this KEEN ON show, Andrew talks to Betaworks CEO John Borthwick about the origins of the social media age and why the intelligence for today's AI revolution is being built upon social media data. John Borthwick is the CEO and Founder of Betaworks, where he has led the company building and investment process since 2008. Previously, John was SVP of Alliances and Technology Strategy for Time Warner Inc.; CEO of Fotolog, one of the first social photo sharing sites; and head of AOL's product development studio after they acquired his first company, WPStudio, one of Silicon Alley's first content studios. John studied at Wesleyan University and the Wharton School of the University of Pennsylvania. Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. Learn more about your ad choices. Visit megaphone.fm/adchoices

Team Human
Claire Leibowicz, Justin Hendrix, John Borthwick, and Douglas Rushkoff - live at Betaworks

Team Human

Play Episode Listen Later Apr 14, 2023 72:08


Head of the AI and Media Integrity Program at the Partnership on AI (PAI) Claire Leibowicz, Tech Policy Press CEO and Editor Justin Hendrix, Betaworks CEO John Borthwick, and Douglas Rushkoff come up with guiding principles for the future of artificial intelligence on a live panel discussion recorded at Betaworks on Monday, April 3.

Keen On Democracy
That Was The Week for 4/14/23: Keith Teare on Substack vs Twitter, Apple banking, and Betaworks' AI Camp

Keen On Democracy

Play Episode Listen Later Apr 14, 2023 34:29


Episode 1434: In this regular weekly show with THAT WAS THE WEEK newsletter author Keith Teare, Andrew and Keith discuss the looming Substack vs Twitter war, the stealthy entry of Apple into the banking sector, and Betaworks' AI Camp, Keith's start-up-of-the-week. Keith Teare is Founder and CEO at SignalRank Corporation. Previously he was Executive Chairman at Accelerated Digital Ventures Ltd, a UK-based global investment company focused on startups at all stages. He was also previously founder of the Palo Alto incubator Archimedes Labs. Archimedes was the original incubator for TechCrunch and since 2011 has invested in, accelerated, or incubated many Silicon Valley startups including InFarm, Miles, Quixey; M.dot (sold to GoDaddy); chat.center; Loop Surveys; DownTown and Sunshine. Teare has a track record as a serial entrepreneur with big ideas and has achieved significant returns for investors.

Hidden Forces
How to Think About the Current Moment in A.I. | John Borthwick

Hidden Forces

Play Episode Listen Later Apr 10, 2023 58:14


In Episode 307 of Hidden Forces, Demetri Kofinas speaks with the founder and CEO of Betaworks, John Borthwick, about the current moment in artificial intelligence. In John's first appearance on the podcast in 2017, Demetri and he spoke about the challenges and opportunities present in the field of A.I. and how we could progress technologically into an immersive future of superintelligence and simulation without losing touch with our humanity. The incredibly rapid adoption of ChatGPT has caused both Demetri and John to revisit these questions, and whether that immersive future that they spoke about then has finally arrived. How we would even know if it has, and what the appropriate response to the current moment is for the public, for companies investing in this technology, and for governments seeking to regulate and exploit it, is the subject of the episode's second hour, which is available to premium subscribers. You can subscribe to our premium content and gain access to our premium feed, episode transcripts, and Intelligence Reports (or Key Takeaways) at HiddenForces.io/subscribe. If you want to join in on the conversation and become a member of the Hidden Forces genius community, which includes Q&A calls with guests, access to special research and analysis, in-person events, and dinners, you can also do that on our subscriber page. If you still have questions, feel free to email info@hiddenforces.io, and Demetri or someone else from our team will get right back to you.
If you enjoyed listening to today's episode of Hidden Forces you can help support the show by doing the following: Subscribe on Apple Podcasts | YouTube | Spotify | Stitcher | SoundCloud | CastBox | RSS Feed Write us a review on Apple Podcasts & Spotify Subscribe to our mailing list at https://hiddenforces.io/newsletter/ Producer & Host: Demetri Kofinas Editor & Engineer: Stylianos Nicolaou Subscribe & Support the Podcast at https://hiddenforces.io Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod Follow Demetri on Twitter at @Kofinas Episode Recorded on 04/06/2023

Oh Ship! Show
How Venture Studios Build Businesses

Oh Ship! Show

Play Episode Listen Later Apr 7, 2023 40:38


In this episode of Oh Ship, Freddie is joined by John Borthwick, Co-founder of Betaworks, a thematic residency and investment program for early-stage companies that has led to businesses like Giphy, Bitly, and Tweetdeck (to name a few).

The Gradient Podcast
Linus Lee: At the Boundary of Machine and Mind

The Gradient Podcast

Play Episode Listen Later Jan 19, 2023 148:46


In episode 56 of The Gradient Podcast, Daniel Bashir speaks to Linus Lee. Linus is an independent researcher interested in the future of knowledge representation and creative work aided by machine understanding of language. He builds interfaces and knowledge tools that expand the domain of thoughts we can think and qualia we can feel. Linus has been writing online since 2014 (his blog boasts half a million words) and has built well over 100 side projects. He has also spent time as a software engineer at Replit, Hack Club, and Spensa, and was most recently a Researcher in Residence at Betaworks in New York. Have suggestions for future podcast guests (or other feedback)? Let us know here! Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.
Outline:
* (00:00) Intro
* (02:00) Linus's background and interests, vision-language models
* (07:45) Embodiment and limits for text-image
* (11:35) Ways of experiencing the world
* (16:55) Origins of the handle "thesephist", languages
* (25:00) Math notation, reading papers
* (29:20) Operations on ideas
* (32:45) Overview of Linus's research and current work
* (41:30) The Oak and Ink languages, programming languages
* (49:30) Personal search engines: Monocle and Reverie, what you can learn from personal data
* (55:55) Web browsers as mediums for thought
* (1:01:30) This AI Does Not Exist
* (1:03:05) Knowledge representation and notational intelligence
* Notation vs language
* (1:07:00) What notation can/should be
* (1:16:00) Inventing better notations and expanding human intelligence
* (1:23:30) Better interfaces between humans and LMs to provide precise control, the inefficiency of prompt engineering
* (1:33:00) Inexpressible experiences
* (1:35:42) Linus's current work using latent space models
* (1:40:00) Ideas as things you can hold
* (1:44:55) Neural nets and cognitive computing
* (1:49:30) Relation to Hardware Lottery and AI accelerators
* (1:53:00) Taylor Swift Appreciation Session, mastery and virtuosity
* (1:59:30) Mastery/virtuosity and interfaces / learning curves
* (2:03:30) Linus's stories, the work of fiction
* (2:09:00) Linus's thoughts on writing
* (2:14:20) A piece of writing should be focused
* (2:16:15) On proving yourself
* (2:28:00) Outro
Links:
* Linus's Twitter and website
Get full access to The Gradient at thegradientpub.substack.com/subscribe

Global E-Commerce Tech Talks
Lynn Power CEO & Co-Founder, MASAMI Haircare, live from GELF 2022 in NYC

Global E-Commerce Tech Talks

Play Episode Listen Later Nov 30, 2022 21:56


Greetings and welcome to a special episode recorded live in the on-site Stream Commerce podcasting studio at the New York City Global eCommerce Leaders Forum in September 2022. These episodes are presented by Stream Commerce, a full-service, full-funnel e-commerce consultancy bringing together the boldest and brightest innovators, strategists, and e-commerce experts. They are the Shopify Plus expert that does more than just build sites. The Global eCommerce Leaders Forum NYC 2022 brought together thought leaders from retailers and brands selling direct-to-consumer. Top global consumer brands were joined by digital natives and digitally-led brands from the fashion and apparel world, health, cosmetics and beauty, consumer electronics and other leading retail verticals for one full day of education, sharing and networking. On this episode: Lynn Power, CEO & Co-Founder of MASAMI Haircare, live from GELF 2022 in NYC.
About Lynn: Advertising executive turned entrepreneur with a passion for leveraging creative thinking to build businesses, and ultimately help brands survive and thrive. Strong experience in cultural & business transformation, digital strategy, consumer insights, talent design and marketing modernization. Member of Women's Forum of NY, CEW, Betaworks.
About Michael: Michael is the Founder & President of M.E. LeBlanc & Company Inc and a Senior Advisor to Retail Council of Canada as part of his advisory and consulting practice. He brings 25+ years of brand/retail/marketing & eCommerce leadership experience and has been on the front lines of retail industry change for his entire career. He has delivered keynotes, hosted fire-side discussions with C-level executives and participated on thought leadership panels worldwide. Michael was recently added to ReThink Retail's prestigious Top 100 Global Retail Influencers for a second year in 2022.
Michael is also the producer and host of a network of leading podcasts, including Canada's top retail industry podcast, The Voice of Retail, plus Remarkable Retail with author Steve Dennis, Global E-Commerce Tech Talks and The Food Professor with Dr. Sylvain Charlebois. Most recently, Michael launched Conversations with CommerceNext, a podcast focused on retail eCommerce, digital marketing and retail careers, all available on Apple, Spotify, Amazon Music and all major podcast platforms. Michael is also the producer and host of the "Last Request Barbeque" channel on YouTube, where he cooks meals to die for. If you liked this podcast, you can follow us on Apple Podcasts, Spotify, Amazon Music or your favourite podcast platform. Please rate us, and recommend the show to a friend or colleague in the retail and cross-border commerce industry. You can learn more about the Global E-Commerce Leaders Forum and continue to keep up with the latest on cross-border commerce online at https://www.globalecommerceleadersforum.com. Be sure to visit our all-new YouTube site, where you will find plenty of excellent cross-border content from several of our events.

The Conversation Factory
Building an Integrity Culture: Co-Founder Conversations

The Conversation Factory

Play Episode Listen Later Oct 25, 2022 53:46


In this conversation, I sit down with Huddle Co-Founders Stephanie Golik and Michael Saloio. Huddle is a platform for designers and builders to invest in startups with their time.  Stephanie has spent her career building alongside founders at studios and leading design and product at fast-growing tech companies. She was an early design leader at Cruise, building user experiences for self-driving cars. Before that, Steph was Head of Product at Mapfit (acq. by Foursquare). She's a proud Cuban-American born, raised and currently residing in Miami. Michael is a product and team-focused entrepreneur and investor. He's spent his career working with technology executives and investors. As an investment analyst at Oppenheimer & Co., he followed some of the biggest names in technology including Cisco, EMC, and Apple. Prior to Oppenheimer, Mike covered special situations at Sidoti & Co. Over the past five years, Michael reimagined his career to focus on early-stage businesses. He was the first employee at SuperPhone, a messaging application backed by Ben Horowitz, Betaworks, Bessemer, and more. Since 2014 he has consulted with, invested in, or advised more than 35 startups that have raised more than $200M in venture financing. I met Michael years ago and have tracked his rise…when I saw that his latest venture raised 3.3M and was a co-founded company, I reconnected to include him in my co-founder conversations series. My question throughout this series has been simple - what does it take to build and sustain a powerful co-founder relationship?  Michael and Stephanie shared some of the insights and principles that helped them do exactly that. The biggest aha was the umbrella concept of an Integrity Culture, and how many powerful values fall into place with a focus on Integrity. 
As Michael points out, it's not just “I do what I say I will” it's also about a culture of Coaching and Feedback to help everyone right-size their commitments and to give themselves (and others) feedback along the way when they find themselves falling short. Stephanie and Michael share a conversation format that they use over the course of each week to keep their team on track and in integrity! Integrity Culture also implicates one of my favorite words: Interoception, a concept I learned from Food Coach Alissa Rumsey. Michael and Stephanie's vision of an integrity culture is one where you commit to a thing because you are intrinsically motivated to do it, not through force or pressure…you self-select the thing you are going to do. And that means you know what you want! Interoception is the ability to feel and know your inner state.  Some additional keys to a powerful co-founder relationship that line up with the other conversations in this series are the ability to have Healthy Conflict (rather than an unhealthy “peace”) and the regular asking and giving of generous and generative deep feedback. One other insight that was fresh for me in this conversation was Michael's idea of a good co-founder relationship as one that is “Energy Producing” vs. energy sucking. A powerful co-founder relationship is like a flywheel - the more energy you invest into it, the more energy it throws off. Be sure to check out my other co-founder conversations, like this episode with Jane Portman and Benedikt Deicke, co-founders of Userlist, on how they connected through shared communities and learned how each other really worked through real-world, previous projects. You may also enjoy my interview with Carolyn Gregoire and Scott Barry Kaufman, the co-authors of the 2015 bestseller, Wired to Create, where we unpack how they managed their working relationship. 
And if you really want to dive deep into the idea of being a conscious co-founder, make sure to check out my conversation with my friend Doug Erwin, the Senior Vice President of Entrepreneurial Development at EDAWN, the Economic Development Authority of Western Nevada. Head over to theconversationfactory.com/listen for full episode transcripts, links, show notes and more key quotes and ideas. You can also head over there and become a monthly supporter of the show for as little as $8 a month. You'll get complimentary access to exclusive workshops and resources that I only share with this circle of facilitators and leaders. Links: Huddle website

#dogoodwork
Unlock The Web's New Business Model with Julien Genestoux

#dogoodwork

Play Episode Listen Later Jul 5, 2022 22:44


Julien is the Founder & CEO of Unlock, where he's building the web's new business model by enabling new ways for creators to monetize their content through a decentralized access control protocol. He previously founded SuperFeedr, which became one of the leading real-time web APIs, received funding from Mark Cuban and Betaworks, and was later acquired by Medium. At Medium, Julien led the company's SEO efforts and quadrupled the share of traffic Medium received from searches. He also created his first company, Jobetudiant, while still in school. After nearly 20 years, it's still one of the largest job boards for students in France.
Highlights:
- Who Julien Genestoux is
- Where the internet is going, why it's going in that direction, and what trends Julien is seeing in the next five years
- What an NFT is
- How to start thinking about NFTs as a tool versus what others think the mainstream of NFTs are
- How someone can enter into a contract
- What a wallet is and how it plays a role in NFTs and cryptocurrency
- How to incorporate NFTs into your business
- What prompted Julien to start Unlock
- The trends he's seeing that excite him the most with Unlock
- How Unlock plays a role in the trends for infrastructures being built
- Where to find more information about Julien and Unlock
Episode Resources:
Connect with Raul Hernandez Ochoa: https://www.linkedin.com/in/dogoodwork | https://dogoodwork.io/work-with-raul | https://dogoodwork.io/podcast
Connect with Julien Genestoux: https://unlock-protocol.com/ | https://twitter.com/unlockprotocol | https://twitter.com/julien51

Riding Unicorns
S4E6 - Paul Murphy, Partner @ Lightspeed Venture Partners

Riding Unicorns

Play Episode Listen Later Jun 8, 2022 40:40 Transcription Available


Paul Murphy (@paulbz on Twitter) is a Partner at Lightspeed, a global venture capital firm focusing on multi-stage investments in the enterprise technology, consumer, and health sectors. Prior to his stint at Lightspeed, Paul worked at Microsoft, Betaworks, Giphy and Northzone, to name but a few. Paul also has extensive experience as an entrepreneur, having set up several companies, namely the mobile games studio Dots and the calendar app Katch. Paul is now sharing his knowledge of what it takes to build a successful business with the next generation of founders and helping Lightspeed identify the game-changing businesses of the future. Paul spoke to Riding Unicorns to shed some light on the decision-making process behind his career moves, the lessons he has learned from incorporating businesses and how his experiences in business have informed his approach to investing. The guys also go on to probe Paul on why Europe is becoming attractive to U.S. funds, what common traits successful founders share and what he aims to achieve moving forward. Make sure to like and subscribe to the Riding Unicorns podcast to never miss an episode. Also don't forget to give Riding Unicorns a follow on Twitter and LinkedIn to keep on top of the latest developments.

CryptoNews Podcast
#118: Simon Yi on Growth Marketing Tactics and Clam Island

CryptoNews Podcast

Play Episode Listen Later Apr 18, 2022 46:30


Simon Yi is Co-founder and Head of Growth at Clam Island. He has over a decade of experience serving as a growth marketing expert for venture funds and startup studios including Consensys Mesh, Expa, Betaworks and Human Ventures. Simon also serves as a growth marketing advisor for startups, incubators and venture funds throughout New York City. In this conversation, we discuss: - VaynerMedia back in the day - Growth marketing tactics - Doxing teams - Finding users that have multiple ERC-721 wallets - Actionable marketing advice - Clam Island - DAOs not really being DAOs. Clam Island Website: clamisland.fi Twitter: @clam_island Discord: discord.gg/F4ak27n8Sh Simon Yi Twitter: @MysterYi LinkedIn: Simon Yi --------------------------------------------------------------------------------- This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50 This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50

Open Source Startup Podcast
E26: Cube.dev - Open Source Headless BI for Building Data Apps

Open Source Startup Podcast

Play Episode Listen Later Apr 7, 2022 38:17


Artyom Keydunov is Co-founder & CEO of Cube.dev, the headless BI platform for building data apps. Cube.dev's open-source project, cube-js, has almost 13K stars on GitHub and helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application. Cube.dev has raised over $20M from investors including Bain Capital, Decibel Partners, Eniac VC, 645 Ventures, Slack Fund, and Betaworks.

Open Source Startup Podcast
E19: Hugging Face & the Open-Source AI Community

Open Source Startup Podcast

Play Episode Listen Later Mar 14, 2022 29:55


Julien Chaumond is Cofounder & CTO of Hugging Face, the AI community with deep roots in open source. Hugging Face has many open source projects including transformers (59K stars), datasets (13K stars), tokenizers (5K stars), among others. The company has raised $60M+ from investors such as Addition, Lux Capital, and Betaworks.

John's private podcast feed ~  betaworks Studios events & things I'm listening to.. enjoy

Interviewed by @YaelEisenstat and @borthwick --- Send in a voice message: https://anchor.fm/johnb/message

Techspansive
100. Our first guest: Peter Rojas on Epic vs. Apple, Amazon buys MGM, and Nintendo's next Switch

Techspansive

Play Episode Listen Later May 28, 2021 49:11


Venture Games
Venture Games Episode 10: Peter Rojas, Betaworks Ventures

Venture Games

Play Episode Listen Later May 27, 2021 61:29


In Episode 10 of Venture Games, my guest Peter Rojas, partner at Betaworks Ventures, talks about co-founding Gizmodo and Weblogs Inc. (Engadget, Joystiq, acquired by AOL), his views on social gaming, which he sees as one of the most promising areas in gaming right now, and his candid thoughts on the diversity challenges in the venture capital industry, as one of the few Latinx VC investors.

SaaS it Up

Justin Mitchell is the Founder of Yac, an audio-first messaging platform that helps you talk faster and build stronger relationships with your remote team. Yac was featured in TechCrunch and The Wall Street Journal after its recent raise of $10M in funding from GGV, Slack, Betaworks, BoostVC, Active Capital, Bessemer Venture Partners, and more. Audio is indeed powerful, and it touches every aspect of our lives. It connects us with our loved ones and with strangers across the world. It tells us stories, and it keeps us company as we work in this new world. Just when Clubhouse opened up its platform to Android users, we had a candid chat with Justin to understand how the world of audio is shaping the future of work.

the full bloom
Peter Rojas (Partner, Betaworks Ventures) :: the un-spoken challenges of early stage investing | the full bloom 1.14

the full bloom

Play Episode Listen Later Mar 25, 2021 47:15


What do tech bloggers and venture capitalists have in common? They're both digging to learn more about the newest emerging technology. Peter Rojas, Partner at Betaworks Ventures, joins Bart Macdonald to talk early-stage investing, social gaming and first-time founder challenges. Peter has built a notable career in technology as a 4x founder, originating from a background in journalism as a founder of Gizmodo/Gawker Media, and most recently as VP of Strategy at AOL. On this episode Peter and Bart uncover the challenges a venture firm faces when investing in early-stage technology. On the flip side, Peter shares the importance of having empathy for first-time founders navigating the venture world. Peter Rojas :: Partner, Betaworks Ventures :: Twitter + LinkedIn Bart Macdonald :: Managing Partner, Bloom Venture Partners :: Twitter + LinkedIn

The Sunday Show
Amazon & Inequality; Social Media & Regulation

The Sunday Show

Play Episode Listen Later Mar 14, 2021 64:12


In this episode we focus on the question of what should be done to constrain the dominance of the tech platforms, and to regulate the ways in which they control aspects of our economy, markets and the public sphere. First up, we hear from Alec MacGillis, a reporter for ProPublica who has just written a book about Amazon's dominance and what it means for the wealth of American cities and people, Fulfillment: Winning and Losing in One-Click America (https://www.amazon.com/Fulfillment-Winning-Losing-One-Click-America/dp/0374159270). Then, we listen to a panel discussion from the Betalab: Fix The Internet program at Betaworks. The discussion focuses on priorities for the regulation of social media, and features Yaël Eisenstat, Researcher-in-Residence at Betalab and formerly a CIA officer, a White House advisor, and the Global Head of Elections Integrity Operations for political advertising at Facebook; Jason Kint, the CEO of Digital Content Next, a trade association that advocates for media companies such as The New York Times, Conde Nast, ESPN, Vox, Politico and Insider, and the Chairman of TrustX, a cooperative digital advertising marketplace designed to address the industry's trust, transparency and accountability challenges in digital advertising; and Marietje Schaake, international policy director at Stanford University's Cyber Policy Center and international policy fellow at Stanford's Institute for Human-Centered Artificial Intelligence. Between 2009 and 2019, Marietje served as a Member of European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. 

Portfolio Career Podcast
Growing A Company Of One With Jonathan Basker

Portfolio Career Podcast

Play Episode Listen Later Feb 11, 2021 37:54


For the last six years, Jonathan Basker has grown his own independent company. He has done so WITHOUT having a website. He has worked in the recruiting, HR, and leadership sectors with and at leading firms like Etsy, Betaworks, Handy, Bark, and InVision, just to name a few. He is currently a consultant, usually taking on interim roles, and a coach. In this episode, you will learn: -how Jonathan started working WITH companies, instead of for them -how helping a friend led to Jonathan starting his coaching work -the ups and downs of having a Portfolio Career -how Jonathan determined his pricing strategy As always, this episode with notes is available on my website. Connect with Jonathan on Twitter

Outliers with Daniel Scrivner
#17 Andrew Dumont: Adventures in Business and Investing

Outliers with Daniel Scrivner

Play Episode Listen Later Feb 2, 2021 62:03


“When you’re first starting a fund, you're going to spend your first five years, realistically, actively raising money from LPs—and that is your primary role. It is not interfacing with founders and helping entrepreneurs; that's a small piece. I think it's important that people realize: if you just want to work with entrepreneurs all day, join an existing fund.” – Andrew Dumont Andrew Dumont (@andrewdumont) is the CEO of Meteor and WeWorkRemotely and the Founder of Curious Capital. He was previously the CMO of Bitly and an Entrepreneur in Residence at Betaworks. He’s spent his career building and growing companies like Moz, Seesmic (acquired by Hootsuite), Stride (acquired by Copper), and Tatango. He’s also an advisor at Techstars and Startup Weekend, and he writes for Inc. Magazine. Andrew was named one of Forbes’ 30 Innovators Under 30 in Marketing and was appointed an entrepreneurial delegate by the United Nations. In this episode, Andrew and Daniel discuss the pros and cons of venture capital, why some startups make it while others fail, and how investors can best serve founders. Show notes with links, quotes, and a transcript of the episode: https://www.danielscrivner.com/notes/andrew-dumont-outliers-show-notes  Chapters in this interview: 00:00:07 – Andrew’s start in the world of venture capital 00:06:38 – Why venture capital isn’t the best option for all startups 00:12:48 – Andrew’s work with Betaworks 00:18:41 – The truth about choosing companies to invest in 00:21:51 – Why companies succeed 00:24:52 – Andrew’s work with Tiny Capital 00:35:17 – Making an acquisition grow 00:40:19 – Getting started with investing  00:43:46 – The realities of managing a fund 00:50:44 – How investors can best help founders 00:56:05 – Why getting reps as investor is exciting   Sign up here for Outliers Weekly, our weekly Sunday newsletter that highlights our podcasts, business and investing concepts, and the best of what we read that week.  
Follow Daniel on Twitter: https://twitter.com/DanielScrivner If you loved this episode, please share a quick review on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

Angel Talks - A Podcast About Venture Investing
Angel Talks #33. Special Episode with RB.RU. Startup Studios and Venture Builders. Dmitry Bergelson

Angel Talks - A Podcast About Venture Investing

Play Episode Listen Later Jan 28, 2021 85:51


Startup studios are companies that have put the creation of startups on an assembly line and made it their business. According to Enhance Ventures, there were more than 560 startup studios worldwide in 2020, 625% more than seven years earlier. Among the best known are Rocket Internet, Betaworks, Science and eFounders. Studios have produced startups such as Giphy (sold to Facebook for $400M), bit.ly (sold to Spectrum Equity for $63M) and the much-discussed razor subscription service Dollar Shave Club (sold to Unilever for $1B). In this special joint Rusbase/RB and Angel Talks live stream, we talk about startup studios in Russia and abroad, and discuss the prospects of the business model for investors, founders and participants in the innovation market's infrastructure. We answer pressing questions such as: How realistic is it to build a business around producing startups at scale? Is there an established model for working in this market, and how do the various approaches differ? How are ideas sourced and filtered? What does the ownership structure look like? Can a startup studio itself be an investment target for a venture fund? ...and many others. Dmitry Bergelson, managing partner of INNORETAIL.VC and Holmes & Moriarty and partner at Strategy Partners Group, shares his experience building his own venture builder. Dmitry is a manager and entrepreneur with experience commercializing innovation and developing businesses in retail, high tech, IT and consulting, automotive retail, real estate, consumer electronics and services. INNORETAIL.VC is the first specialized retail tech venture builder on the Russian market. Its portfolio includes, for example, TWIN (a platform for building and operating conversational bots), GetFaster (e-grocery outsourcing for grocery chains and food producers), LoyaltyLab (a personalization 2.0 provider for retail) and others.

Inspiring Futures
Brent Vartan- Bullish

Inspiring Futures

Play Episode Listen Later Jan 7, 2021 60:43


This was a pod recorded almost a year ago, right before COVID hit, and somehow the recording got lost in the mix. Brent Vartan left his high-powered strategy job at Deutsch to join former colleague Mike Duda and create Bullish. Bullish is a hybrid company that brings together a PE/VC firm and a creative agency; its remit is early-stage/angel investing. The vision is to "own what they make": to get equity and ownership in the companies they work for. They only invest in consumer categories (there are apparently 256 of them!). The very first company they invested in was Birchbox in 2010, because they believed it needed a brand, and then they got introduced to Harry's, etc. Bullish was also the first money in Casper and Peloton. Their work is all about brand: providing thinking about how to build a brand, but also how to create a deep customer experience. A big challenge is understanding the opportunities and limits of a newly minted brand. Brent also talks about the differences between the ad industry and VCs, and between the Bay Area and NY, and explains how hard it is to make this happen for an agency; you need brand strategists and MBAs, and there also needs to be a willingness to risk your own money. This podcast was recorded at Betaworks by Alexander Rae, who also co-hosted.

PressClub with Josh Constine
Co-worker games: Peter Rojas on replacing happy hour

PressClub with Josh Constine

Play Episode Listen Later Jan 7, 2021 17:09


Could video games help employees stay social without exhausting Zooms? Early stage investor Peter Rojas from BetaWorks discusses his tweetstorm ( https://twitter.com/peterrojas/status/1346544269218123777 ) about what a Fortnite For Business could look like. Hear about how shoulder-to-shoulder play can complement face-to-face collaboration, team-building through shared triumph, and why gaming can improve mental health. Subscribe at Constine.club ( http://constine.club/ )

Revenue Harvest
How to Finally Align Sales and Marketing with Digital Marketer Andrew Dumont

Revenue Harvest

Play Episode Listen Later Dec 8, 2020 43:30


More often than not, the relationship between marketing and sales is siloed at best, or adversarial at worst. There's no better time to lean into digital and for these organizations to unite. In this episode, Andrew Dumont draws on ways for sales and marketing organizations to align on their shared revenue goals. Andrew Dumont is a serial technologist with a passion for building, growing, and investing in early-stage companies. He's currently the founder of Curious Capital and has worked as an Entrepreneur in Residence at Betaworks in New York City, a startup studio that has invested in companies like Tumblr and Airbnb, and has created companies like Bitly, Giphy, and Chartbeat. Show notes: Marketing should be goaled on revenue and top-level business metrics as opposed to vanity marketing metrics. Weekly pipeline conversations between marketing and sales leaders are critical to maintaining that relationship. A standing agenda could include pipeline updates from the sales organization, performance from the marketing organization, activities attributed in terms of leads generated, or attributed revenue. One of the big tensions between sales and marketing is ensuring that marketing is filling the gaps when it comes to collateral, so that sales teams feel they have the materials and support they need to be successful. Marketing should set goals for that and then benchmark against whether they were able to help sales reach those goals. At a minimum, a website should be making the life of the seller easier. A visual website experience is a key component of the brand and how reputable you look as an organization. Marketing should be responsible for building predictable, repeatable processes for generating qualified leads for the sales organization. As a marketing organization, revenue is the key determining metric of whether you're successful or not.
If you're not creating leads that turn into revenue, then you're just spinning your wheels. A lot of strife comes from marketing vanity metrics, and from marketing organizations being focused on leads and getting people into the funnel; if those leads aren't qualified, that's where the disconnect happens between sales and marketing. Attribution is a huge challenge for sales and marketing. When the shared focus is on revenue and growing the business, you don't have to worry as much about whether sales or marketing generated the lead. A good sales and marketing alignment meeting may mean walking through the campaigns and activities for the week, looking at the timing and pipeline, and uncovering disconnects. That's when these meetings are valuable. Marketers should go through the sales process. They should be a salesperson for several days, try to sell the product, and have conversations with customers to understand where they are failing in terms of messaging and the way the product is being served. Modern marketers today are very tactical. They know a lot about SEO and content marketing, and about all the channels that exist for generating leads and new customers, but they often step over the core of it, which is: what is the value proposition? What is the benefit to the customer? What are the objections to why this product wouldn't work well for them? For links and resources from the episode, visit https://nigelgreen.co/revenue-harvest/

If You Give a Principal a Podcast...
Ayuna Vogel -Technical Product Manager at Microsoft

If You Give a Principal a Podcast...

Play Episode Listen Later Nov 25, 2020 29:28


Today, we have the opportunity to meet another member of the Microsoft team, Ayuna Vogel. Ayuna is currently a technical product manager for Microsoft in Redmond, WA. She has worked as an iOS engineer for LinkedIn and an iOS developer for Vimeo, been a translator for the United Nations, founded an independent educational project that welcomes volunteers from all over the world to teach at rural schools in Russian Siberia, taught an intensive Russian language course to college students in Malaysia, wrote for Look-At Media, and has traveled to Germany, Russia, Mongolia, China, Vietnam, Singapore, Malaysia, Thailand, and Laos. She is a co-founder and developer of Neverlate, a self-improvement iOS app for people who are often late. She was selected as one of 30 out of 620 applicants to become an iOS Fellow at Access Code, an intensive, project-based iOS mobile app development program in New York City. During her time at Access Code, she worked with leading iOS engineers from Google, Yahoo, Artsy, and Betaworks. Ayuna is a co-organizer of the iOSoho meetup, the largest iOS meetup in NYC. She also founded the Women in iOS meetup, the only NYC meetup focused on highlighting the work female iOS developers do in the industry. In her free time, she volunteers with CoderDojo, where she helps kids learn the basics of HTML & CSS, and with Women Who Code, where she helps beginner programmers and provides mentorship to women looking to enter the tech field. Today's episode was made possible by a partnership with the Mt. Adams High School STEM/CTE program and the Microsoft TEALS program. The TEALS program is designed to help high schools build and grow sustainable computer science (CS) programs. They pair volunteer CS professionals across the US with classroom teachers looking to make a lasting impact in their students' lives. 
TEALS volunteers help shape students' futures and career opportunities by providing them with computational knowledge and skills they wouldn't otherwise have access to.

Founder's FAQ
Matthew Hartman | Partner at Betaworks Ventures

Founder's FAQ

Play Episode Listen Later Nov 18, 2020 11:05


Founder's FAQ: answers to all the possible questions of a founder. Hosted by Ilker Koksal. This episode's guest is Matthew Hartman. Matt is a partner at Betaworks Ventures, based in New York City. Matt started his career building the technology platform for Trammell Crow Company (acquired by CBRE) before joining Hot Potato (acquired by Facebook). He co-founded ReferBoost, a profitable B2B company in the real estate and social media space. In this episode: 1) How to help portfolio founders 2) How to assess founder-product relatedness 3) Betaworks camps 4) How to evaluate a company. Founder's FAQ is a book for founders, and you can pre-order it through the website. You can also reach us through @foundersfaq on Twitter, LinkedIn, Instagram, and YouTube.

Non Traditional
Ep 7 - John Borthwick

Non Traditional

Play Episode Listen Later Oct 26, 2020 54:39


In Episode 7 we talk to John Borthwick, founder and CEO of Betaworks. Things Mentioned: Betaworks Twitter: @traditionalpod Instagram: @none_traditional Facebook: Non Traditional Page Support us on Patreon here. Subscribe to the newsletter here. Be part of this Non Traditional community. Join the Friends of Non Traditional here. Co-Hosts: Anastasia Folorunso, Jenn Mundia Produced and Edited by: Anastasia Folorunso Theme Music by Esabalu Graphics by Rowan Shalit Support the show (https://www.patreon.com/nontraditional)

The Doctor's Farmacy with Mark Hyman, M.D.
Do Our Computers Know Us Better Than We Know Ourselves?

The Doctor's Farmacy with Mark Hyman, M.D.

Play Episode Listen Later Sep 25, 2020 17:35


Do Our Computers Know Us Better Than We Know Ourselves? | This episode is brought to you by Perfect Keto. Studies have shown humans only actually remember 0.05% of our memories, whereas databases can remember it all! It’s a scary realization, but computers can know us in a way we don’t even know ourselves. We like to think we aren’t easily persuaded, but when companies know our likes, wants, needs, and purchase history, it all adds up to an easier sale. Earlier this year, Dr. Hyman sat down to discuss this topic with Andy Russell. Andy explains how big companies like Google and Facebook access and use our data. He also shares what we can do to maintain control over the content we see. Andy Russell is a digital media, ad-tech, marketing-tech, and data science innovator and pioneer, as well as a self-taught behavioral economist. He has invested in, incubated, or run over 50 technology companies, including Daily Candy, Thrillist, Tasting Table, Idealbite, PureWow, Zynga, Betaworks, Business Insider, Sailthru, RapLeaf and LiveRamp, SpongeCell, AdRoll, and Bounce Exchange. He is the Founder and Chairman of Trigger Media, InsideHook, and Fevo. This episode is brought to you by Perfect Keto. Right now, Perfect Keto is offering Doctor’s Farmacy listeners 20% off plus free shipping with the code DRMARK. Just go to perfectketo.com/drmark, and make sure you try their Nut Butters and Keto Cookies. Find Dr. Hyman’s full-length conversation with Andy Russell, “How Your Free Will And Data Are Being Hacked By Micro-Targeting Of Your Personality,” here: https://DrMarkHyman.lnk.to/AndyRussell See acast.com/privacy for privacy and opt-out information.

Leverage
Is Data Threatening our Democracy? with Yael Eisenstat

Leverage

Play Episode Listen Later Sep 17, 2020 46:26


Yael Eisenstat joins Nick to discuss how social media is affecting democracy and what it could mean for the upcoming elections.  As a former CIA Officer and National Security Advisor at the White House, Yael tackles the intersection of ethics, tech, and democracy with her years of experience working with governments, tech companies, and investors. She was the former head of elections integrity operations for political advertising at Facebook, and now works at Betaworks, a startup studio and seed-stage venture capital company in New York. RESOURCES: yaeleisenstat.com betaworks Yael’s TEDtalk

Dear Analyst
Dear Analyst #43: Setting up workflows that scale – from spreadsheets to tools & applications

Dear Analyst

Play Episode Listen Later Sep 14, 2020 50:56


This episode is the audio from a presentation I gave a few weeks ago to members of Betaworks based in NYC. Betaworks is a startup accelerator, co-working space, and community of founders. No-code is a pretty hot topic right now, and in this presentation I talk about how spreadsheets are one of the first no-code […]

The Infotagion Podcast with Damian Collins MP
Episode 26: John Borthwick

The Infotagion Podcast with Damian Collins MP

Play Episode Listen Later Jul 30, 2020 33:32


Damian Collins MP is joined by John Borthwick, Chair of the COVID-19 Technology Taskforce and CEO of Betaworks, an early investor in Twitter and Tumblr. They discuss the tech companies' response to COVID-19 and compare track and trace technologies around the world.

The Business of Content
Can Digg return to its former glory?

The Business of Content

Play Episode Listen Later Jun 22, 2020 33:33


In the mid-2000s, Digg was one of the most powerful websites on the internet. Powered by its army of users, the platform would send gargantuan amounts of server-crushing traffic to any content featured on its front page. Millions of people visited it each day, and it turned its founder Kevin Rose into an internet celebrity. But you probably know what came next. A misguided redesign triggered a user revolt, and its audience abandoned it for Reddit and other platforms. Before long, it seemed destined to follow in the footsteps of Myspace and Friendster. Its story didn’t end there, though. In 2012, the site was sold to the startup studio Betaworks, which immediately went about trying to revive the Digg brand. In 2018, it was purchased by a company called BuySellAds. I recently interviewed Todd Garland, Digg’s new owner and CEO. We discussed its current editorial operations, its monetization strategy, and his plans to restore Digg to its former glory.

The Bootstrapped VC - A Backstage Capital Podcast
Introducing 0G - A new Podcast by Christie Pitts + Peter Rojas

The Bootstrapped VC - A Backstage Capital Podcast

Play Episode Listen Later Feb 23, 2019 27:07


Peter Rojas loves gadgets and made a career around sharing the stories behind the devices, including vintage mobile phones. Christie Pitts worked on the front lines of consumer device sales at Verizon. Today, they're both VCs investing in startups: Peter at Betaworks, and Christie at Backstage Capital. But while they look toward the future of technology, they can't refrain from reminiscing about the weird and wonderful evolution of phones that many have left behind. Join us in this fun preview of Christie and Peter's new podcast, "0G: A History of Forgotten Phones". Listen to 0G: https://anchor.fm/0g Full show notes: https://backstagecapital.com/podcast/0g/ Credits: Executive Producer: Arlan Hamilton Co-Host: Christie Pitts Co-Host + Editor: Bryan Landers Theme by: Janine Additional Music by: Jeff Kaale

TechCrunch Startups – Spoken Edition
Here are the six startups participating in Betaworks LiveCamp

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Sep 5, 2018 3:18


Betaworks this morning revealed the list of six startups participating in its fourth Camp accelerator program. Launched in 2016, the program brings together a collection of young companies united under a single theme. This time out, things are focused on live-streaming, for a program fittingly titled LiveCamp. Betaworks settled on the topic based on the popularity of apps like Twitch and HQ Trivia. It's admittedly a bit more nebulous than past topics like BotCamp, VoiceCamp and VisionCamp.

WIRED Business – Spoken Edition
Facebook's Virtual Assistant M Is Dead. So Are Chatbots

WIRED Business – Spoken Edition

Play Episode Listen Later Jan 9, 2018 4:17


It's difficult to remember now, but there was a moment in early 2016 when many in the tech industry believed chatbots---automated text-based virtual assistants---would be the next big platform. Messaging app Kik staked its company's future on bots and “chatvertising.” Startup studio Betaworks launched an accelerator program called Botcamp. And at its 2016 F8 conference, Facebook pitched bots to developers as the best way to connect with 900 million Messenger users.

The Podcast Digest
TPD 73 - Allison Behringer from The Intern

The Podcast Digest

Play Episode Listen Later Jan 10, 2016 54:46


Allison Behringer from The Intern and Betaworks is my guest this week. She talks about her intriguing road to podcast host, teaching abroad, what it's like carrying a recorder everywhere, and the reception to her new show! Allison tells some great stories and this is definitely a show you should be jumping on now! Enjoy! Allison Behringer (Twitter) The Intern (Twitter) (Website) Betaworks (Twitter) (Website) Follow TPD on Twitter for all the latest news and announcements: TPD

Daily
ED 04-26 Betaworks buys Instapaper

Daily

Play Episode Listen Later Apr 26, 2013 8:05


missing description

The Drill Down
242: Occupy ISP

The Drill Down

Play Episode Listen Later Aug 3, 2012 108:16


On this week's show, we'll talk about Google's ambitious plan to put Gigabit Fiber in every home, beginning with Kansas City. And a look at the 2012 London Olympics, an Olympics so far fraught with technical challenges and social media disputes. But first, the headlines... Rumored launch dates for the next Apple iPhone and iPad mini, Verizon can't charge extra for tethering, Microsoft launches Outlook.com, and Digg relaunches! What We're Playing With Andy: Raspberry Pi Dwayne: NBC's Live Extra Olympic App. Amazon Prime Instant. Hulu Plus on Apple TV. Headlines Apple iPhone 5 and iPad mini event planned for Sep 12, iPhone 5 release date for Sep 21 New purported next-generation iPhone parts photos show fully assembled device Verizon can't charge extra for tethering Microsoft introduces Outlook.com, Hotmail's planned successor Rethink Digg v1 preview Digg launches redesign and new app with editorial curation, no comments Audible Book of the Week Tubes: A Journey to the Center of the Internet by Andrew Blum Musical Interlude #1: Just What I Needed by The Cars Hot Topic: Google Fiber / Google Fiber TV Google Fiber launches in Kansas City, promises 1000 Mbps up and down The economics of Google Fiber and what it means for U.S. broadband Can Google conquer cable TV? Musical Interlude #2: And I Will Kiss by Underworld (feat. Dame Evelyn Glennie) Final Word: 2012 London Olympics The Olympian Infrastructure Challenge NBC insults viewer intelligence, says Olympic Opening Ceremony is too 'complex' Olympic fail: Officials tell crowd not to tweet or text during events #NBCFail: Online backlash as Twitter locks out Independent reporter Twitter 'alerted NBC to British journalist's critical tweets' Twitter 'sorry' for suspending Guy Adams as NBC withdraws complaint Outro: Caliban's Dream by Underworld (ft. Dockhead Choir, Dame Evelyn Glennie, Only Men Aloud!, Elizabeth Roberts, Alex Trimble) Subscribe! The Drill Down on iTunes (Subscribe now!) Add us on Stitcher! 
Sign up here to be alerted by SMS when the podcast is live! Geeks Of Doom's The Drill Down is a roundtable-style audio podcast where we discuss the most important issues of the week, in tech and on the web and how they affect us all. Hosts are Geeks of Doom contributor Andrew Sorcini (Mr. BabyMan), VentureBeat editor Devindra Hardawar, marketing research analyst Dwayne De Freitas, and Startup Digest CTO Christopher Burnor. Occasionally joining them is Techmeme editor Lidija Davis.