Emily Mueller shares her inspiring journey of founding and scaling Wellory, a company she started after her own health challenges. Emily discusses how her entrepreneurial background and sales expertise helped her raise significant VC funding and ultimately led to Wellory's acquisition. She highlights the strategic decisions behind selling the company, the impact of emerging prescription drugs on her business model, and the importance of deal-closing skills in entrepreneurship. Emily also touches on her post-exit life, including her advisory role at The Whisper Group.
00:28 Meet Emily
00:43 The Birth of Wellory
01:57 From Startup to Acquisition
02:36 The Entrepreneurial Spirit: Family Influence and Risk
04:30 Sales Skills: The Key to Success
07:42 Raising Funds: The Venture Capital Path
13:04 Navigating Market Changes
16:51 The Decision to Sell: Timing and Strategy
20:57 The Exit Process: Closing the Deal
30:45 Life After Exit
Our new episode features Sakib Jamal ‘19, vice president at Crossbeam Venture Partners. Jamal was recently chosen for the Forbes 30 Under 30 list, in part for his book, "The Young VC's Handbook," a tactical guide for newcomers to venture capital. In the book, some of the smartest young minds in VC share actionable advice. Proceeds from the book go to help underprivileged children attend primary school in Jamal's home country of Bangladesh.
The latest episode of the Startup CEO Show features an enlightening conversation with Nick Franklin, the visionary behind ChartMogul, a leading subscription analytics platform. Mark MacLeod explores ChartMogul's evolution from a niche product to a comprehensive platform for B2B SaaS companies. Nick shares fascinating insights into the state of SaaS metrics, revealing that while core metrics like MRR and churn remain crucial, the rise of AI and metered billing is reshaping the landscape. The discussion touches on ChartMogul's unique position as a barometer for the SaaS industry, offering listeners a rare glimpse into industry-wide trends and benchmarks. Nick's journey from an illustration background to becoming a successful tech CEO is both inspiring and instructive, highlighting the importance of adaptability and continuous learning in the startup world. The episode also dives into the challenges of remote work, with ChartMogul embracing a distributed model long before it became mainstream. Listeners will appreciate Nick's candid reflections on the pros and cons of venture funding, and his decision to prioritize profitability over rapid, unsustainable growth. Don't miss this opportunity to learn from one of the industry's most experienced and thoughtful leaders.
--------------------------
Since 1999, I have sat at the right-hand side of the leaders of high-growth technology companies as either a CFO, VC, or deal maker. I served as CFO for software companies including Shopify (NYSE: SHOP) and FreshBooks. As a CFO I experienced outright failures, wildly profitable exits, and everything in between. I was a General Partner in Real Ventures, Canada's largest and most active seed-stage fund. My investments there include the fund's largest cash-on-cash and highest-IRR returns to date. Most recently, I founded SurePath Capital Partners, the leading investment bank for SMB SaaS companies, where we did hundreds of millions in financing and exit transactions.
Connect on LinkedIn: https://www.linkedin.com/in/themarkmacleod/
Connect on X/Twitter: https://twitter.com/markmacleod_
Contact Mark: https://markmacleod.me/
IN. UP! OUT? All about management consulting, with Moritz Neuhaus
Niklas Storz, former senior partner at BCG, gives it all up at 49 to start over as a startup founder at Tidely. Why he is trading consulting KPIs for cash-flow tools, how he pursues his vision without classic VC backing, and what keeps him up at night: he had an unsparing conversation with me about success, failure, and the courage to start all over again. --------------------------------------------------------------- Moritz Neuhaus is co-founder and CEO of Insight Consulting GmbH. Together with his team, he helps CEOs, founders, and consulting partners become online thought leaders in their industry.
Jeremy Au explored the nuances of venture capital through three lenses. He described how LPs, such as sovereign wealth funds and institutional investors, pursue diversification and long-term returns, often seeking a 25% net IRR to justify the high risks of VC, as seen in Southeast Asia's emerging tech ecosystem. Using the "2 and 20" model, Jeremy explained that (a) general partners commit 1% of the fund size (e.g., $1M for a $100M fund) as skin in the game while limited partners provide 99% of the capital, (b) GPs draw roughly 2% of the fund size per year to cover operations over the fund's ~10-year life, and (c) GPs receive 20% of the fund's exit upside and LPs 80%. He shared examples like Sequoia's $100M investment in Zoom, yielding 22x returns, and Facebook's acquisition of WhatsApp, which turned a $60M investment into $3B. Lastly, he likened VC to 19th-century whaling, where only 6% of deals produce 60% of returns, drawing parallels to how power-law distributions shape the industry's focus on rare, high-value investments.
Watch, listen or read the full insight at https://www.bravesea.com/blog/vc-vs-whaling-power-law
Get transcripts, startup resources & community discussions at www.bravesea.com
WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e
TikTok: https://www.tiktok.com/@jeremyau
Instagram: https://www.instagram.com/jeremyauz
Twitter: https://twitter.com/jeremyau
LinkedIn: https://www.linkedin.com/company/bravesea
English: Spotify | YouTube | Apple Podcasts
Bahasa Indonesia: Spotify | YouTube | Apple Podcasts
Chinese: Spotify | YouTube | Apple Podcasts
Vietnamese: Spotify | YouTube | Apple Podcasts
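To spell out the arithmetic behind that "2 and 20" example, here is a minimal sketch using the episode's illustrative $100M fund; the 3x exit multiple below is an added assumption purely for illustration, not a figure from the episode.

```python
# Illustrative "2 and 20" economics for the $100M fund from the episode.
# The 3x exit outcome is an assumption for illustration only.
fund_size = 100_000_000
gp_commit = 0.01 * fund_size          # GPs put in 1% as skin in the game
lp_commit = fund_size - gp_commit     # LPs provide the remaining 99%

annual_fee = 0.02 * fund_size         # "2": ~2% of fund size per year for operations
fund_life_years = 10
total_fees = annual_fee * fund_life_years

exit_value = 3 * fund_size            # assume the portfolio eventually returns 3x
upside = exit_value - fund_size
gp_carry = 0.20 * upside              # "20": GPs keep 20% of the upside
lp_proceeds = fund_size + 0.80 * upside

print(f"GP commit    ${gp_commit:>13,.0f}")
print(f"Mgmt fees    ${total_fees:>13,.0f} over {fund_life_years} years")
print(f"GP carry     ${gp_carry:>13,.0f}")
print(f"LP proceeds  ${lp_proceeds:>13,.0f}")
```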
Dia Bondi works with senior leaders, founders, and ambitious professionals to help them find their voice and lead with it, securing hundreds of millions of dollars in decisions and resources and carving a path for the future of their teams and ventures. Dia has worked with world-class brands like Intel, Mozilla, Salesforce, Dropbox, and founders in the portfolios of some of the most forward-looking VCs. In global sport, she helped Rio de Janeiro win the right to host the 2016 Olympic Games, and in social impact she has coached change-makers backstage at the Clinton Global Initiative. After training as an auctioneer, Dia translated the strategies she learned from the fundraising auctioneering stage into a program and book with a goal to help 1,000,000 women ask for more in their careers and lives. In Ask Like An Auctioneer: How To Ask For More and Get It, Dia outlines a six-step framework that will help you strategically and confidently ask for more, maximizing the potential of every ask, every time. Dia's approach is both practical and inspirational, rooted in real-world experience and a deep understanding of the power of asking as a success strategy. In everything Dia does, she is devoted to cultivating the courage and control you need to shine when stakes are high so you can move toward your goals with unshakable commitment on your own terms.
In this episode, Marc Benioff (CEO, Salesforce) responds to Satya Nadella's recent predictions and shares his thoughts on the current reality of AGI. He dives into the rise of digital labor, the multi-trillion-dollar potential of agentic technology, and what the future split between software and agentic revenue might look like. Marc also discusses why CEOs need to stay grounded in delivering actionable solutions, and he emphasizes the moral obligation businesses have to retrain employees and invest in communities as AI continues to evolve.
(00:00) Intro
(01:45) Salesforce's AI Impact on Business
(03:03) The Future of Digital Labor
(05:28) Agentic AI and Customer Success
(07:42) Salesforce's Competitive Edge
(11:48) Marc Benioff's Response to Satya Nadella
(14:16) The Role of AI in Enterprise Software
(20:14) The Balance of AI and Human Labor
(28:34) Salesforce's Philanthropic Efforts
(36:24) The Future of AI and Regulation
(40:24) Conclusion and Farewell
Executive Producer: Rashad Assir
Producer: Leah Clapper
Mixing and editing: Justin Hrabovsky
Check out Unsupervised Learning, Redpoint's AI Podcast: https://www.youtube.com/@UCUl-s_Vp-Kkk_XVyDylNwLA
Back in the building for 2025, this A-Side 'sode of SURFACE NOISE is one not to miss! Join our dais of doomsdayers as they look into their crystal balls and share their predictions for record collecting (their own and the hobby as a whole) for 2025. Then we get a segment called "Give Me a Break" (and no, this will not serve as a Nell Carter tribute). Don't be late, and don't be lame. SURFACE NOISE is the name of our game! But first: as mentioned at the end of this show, please consider helping a member of the Vinyl Community who is recovering from a horrific auto accident. Info here: Link to lend help to VC'er Brooke ( @PGHVINYL ): https://gofund.me/16be59f7 IG: https://www.instagram.com/pghvinyl For more on Concert Buddie: https://www.youtube.com/@ConcertBuddie https://concertbuddie.com For more on Jason Roxas: https://www.youtube.com/@JasonRoxas For more on Jose Moreno Rahn: https://www.youtube.com/@josemorenorahn https://auroracentralrecords.bandcamp.com For more on David Bianco (Safe & Sound Texas Audio Excursion): https://www.youtube.com/@SafeAndSoundTXAudioExcursion https://www.whatnot.com/user/vinyl4ever/shop For more information on Vinyl Community Podcasts: https://vinylcommunitypodcasts.com . . . . . Don't forget to visit FOTS (friends of the show) Vinyl Storage Solutions for the BEST sleeves to protect your best records (and your worst). Save 10% using the code(s) below: VCP10 https://vinylstoragesolutions.ca
Applications close Monday for the NYC AI Engineer Summit focusing on AI Leadership and Agent Engineering! If you applied, invites should be rolling out shortly.
The search landscape is experiencing a fundamental shift. Google built a >$2T company with the "10 blue links" experience, driven by PageRank as the core innovation for ranking. This was a big improvement on the earlier directory-based experiences of AltaVista and Yahoo. Almost three decades later, Google is now stuck in this links-based experience, especially from a business model perspective. This legacy architecture creates fundamental constraints:
* Must return results in ~400 milliseconds
* Required to maintain comprehensive web coverage
* Tied to keyword-based matching algorithms
* Cost structures optimized for traditional indexing
As we move from the era of links to the era of answers, the way search works is changing. You're not showing a user links; the goal is to provide context to an LLM. This means moving from keyword-based search to a more semantic understanding of the content:
"The link prediction objective can be seen as like a neural PageRank because what you're doing is you're predicting the links people share... but it's more powerful than PageRank. It's strictly more powerful because people might refer to that Paul Graham fundraising essay in like a thousand different ways. And so our model learns all the different ways."
All of this is now powered by a $5M cluster with 144 H200s. This architectural choice enables entirely new search capabilities:
* Comprehensive result sets instead of approximations
* Deep semantic understanding of queries
* Ability to process complex, natural language requests
As search becomes more complex, time to results becomes a variable:
"People think of searches as like, oh, it takes 500 milliseconds because we've been conditioned... But what if searches can take like a minute or 10 minutes or a whole day, what can you then do?"
Unlike traditional search engines' fixed-cost indexing, Exa employs a hybrid approach:
* Front-loaded compute for indexing and embeddings
* Variable inference costs based on query complexity
* Mix of owned infrastructure ($5M H200 cluster) and cloud resources
Exa sees a lot of competition from products like Perplexity and ChatGPT Search, which layer AI on top of traditional search backends, but Exa is betting that true innovation requires rethinking search from the ground up. One example is the recently launched Websets, which turns searches into structured output in grid format, letting you create lists and databases out of web pages. The company raised a $17M Series A to build towards this mission, so keep an eye out for them in 2025. 
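To make the keyword-to-semantic shift concrete, here is a minimal sketch contrasting exact term matching with embedding similarity. The off-the-shelf model and toy documents are illustrative assumptions, not Exa's actual models or index.

```python
# Keyword matching vs. embedding-based ("semantic") retrieval on a toy corpus.
# The sentence-transformers model is a generic stand-in, not Exa's own model.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Paul Graham's advice on how founders should raise money.",
    "A buying guide for striped summer shirts.",
    "Why seed rounds are shrinking, according to three founders.",
]
query = "that Paul Graham fundraising essay"

# Keyword view: exact term overlap misses paraphrases like "raise money".
keyword_hits = [d for d in docs if "fundraising" in d.lower()]

# Semantic view: embed everything and rank by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]
for score, doc in sorted(zip(doc_vecs @ query_vec, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
print("keyword hits:", keyword_hits)
```

The point of the contrast: the keyword filter returns nothing for a paraphrased request, while the embedding ranking still surfaces the fundraising document first.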
Chapters
* 00:00:00 Introductions
* 00:01:12 ExaAI's initial pitch and concept
* 00:02:33 Will's background at SpaceX and Zoox
* 00:03:45 Evolution of ExaAI (formerly Metaphor Systems)
* 00:05:38 Exa's link prediction technology
* 00:09:20 Meaning of the name "Exa"
* 00:10:36 ExaAI's new product launch and capabilities
* 00:13:33 Compute budgets and variable compute products
* 00:14:43 Websets as a B2B offering
* 00:19:28 How do you build a search engine?
* 00:22:43 What is Neural PageRank?
* 00:27:58 Exa use cases
* 00:35:00 Auto-prompting
* 00:38:42 Building agentic search
* 00:44:19 Is o1 on the path to AGI?
* 00:49:59 Company culture and nap pods
* 00:54:52 Economics of AI search and the future of search technology
Full YouTube Transcript
Please like and subscribe!
Show Notes
* ExaAI
* Web Search Product
* Websets
* Series A Announcement
* Exa Nap Pods
* Perplexity AI
* Character.AI
Transcript
Alessio [00:00:00]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.
Swyx [00:00:10]: Hey, and today we're in the studio with my good friend and former landlord, Will Bryk. Roommate. How you doing? Will, you're now CEO co-founder of ExaAI, used to be Metaphor Systems. What's your background, your story?
Will [00:00:30]: Yeah, sure. So, yeah, I'm CEO of Exa. I've been doing it for three years. I guess I've always been interested in search, whether I knew it or not. Like, since I was a kid, I've always been interested in, like, high-quality information. And, like, you know, even in high school, wanted to improve the way we get information from news. And then in college, built a mini search engine. And then with Exa, like, you know, it's kind of like fulfilling the dream of actually being able to solve all the information needs I wanted as a kid. Yeah, I guess. I would say my entire life has kind of been rotating around this problem, which is pretty cool. Yeah.
Swyx [00:00:50]: What'd you enter YC with?
Will [00:00:53]: We entered YC with, uh, we are better than Google. Like, Google 2.0.
Swyx [00:01:12]: What makes you say that? Like, that's so audacious to come out of the box with.
Will [00:01:16]: Yeah, okay, so you have to remember the time. This was summer 2021. And, uh, GPT-3 had come out. Like, here was this magical thing that you could talk to, you could enter a whole paragraph, and it understands what you mean, understands the subtlety of your language. And then there was Google. Uh, which felt like it hadn't changed in a decade, uh, because it really hadn't. And it, like, you would give it a simple query, like, I don't know, uh, shirts without stripes, and it would give you a bunch of results for the shirts with stripes. And so, like, Google could barely understand you, and GPT-3 could. And the theory was, what if you could make a search engine that actually understood you? What if you could apply the insights from LLMs to a search engine? And it's really been the same idea ever since. And we're actually a lot closer now, uh, to doing that. Yeah.
Alessio [00:01:55]: Did you have any trouble making people believe? Obviously, there's the same element. I mean, YC overlap, was YC pretty AI forward, even 2021, or?
Will [00:02:03]: It's nothing like it is today. But, um, uh, there were a few AI companies, but, uh, we were definitely, like, bold. And I think people, VCs generally like boldness, and we definitely had some AI background, and we had a working demo. 
So there was evidence that we could build something that was going to work. But yeah, I think, like, the fundamentals were there. I think people at the time were talking about how, you know, Google was failing in a lot of ways. And so there was a bit of conversation about it, but AI was not a big, big thing at the time. Yeah. Yeah.Alessio [00:02:33]: Before we jump into Exa, any fun background stories? I know you interned at SpaceX, any Elon, uh, stories? I know you were at Zoox as well, you know, kind of like robotics at Harvard. Any stuff that you saw early that you thought was going to get solved that maybe it's not solved today?Will [00:02:48]: Oh yeah. I mean, lots of things like that. Like, uh, I never really learned how to drive because I believed Elon that self-driving cars would happen. It did happen. And I take them every night to get home. But it took like 10 more years than I thought. Do you still not know how to drive? I know how to drive now. I learned it like two years ago. That would have been great to like, just, you know, Yeah, yeah, yeah. You know? Um, I was obsessed with Elon. Yeah. I mean, I worked at SpaceX because I really just wanted to work at one of his companies. And I remember they had a rule, like interns cannot touch Elon. And, um, that rule actually influenced my actions.Swyx [00:03:18]: Is it, can Elon touch interns? Ooh, like physically?Will [00:03:22]: Or like talk? Physically, physically, yeah, yeah, yeah, yeah. Okay, interesting. He's changed a lot, but, um, I mean, his companies are amazing. Um,Swyx [00:03:28]: What if you beat him at Diablo 2, Diablo 4, you know, like, Ah, maybe.Alessio [00:03:34]: I want to jump into, I know there's a lot of backstory used to be called metaphor system. So, um, and it, you've always been kind of like a prominent company, maybe at least RAI circles in the NSF.Swyx [00:03:45]: I'm actually curious how Metaphor got its initial aura. You launched with like, very little. We launched very little. Like there was, there was this like big splash image of like, this is Aurora or something. Yeah. Right. And then I was like, okay, what this thing, like the vibes are good, but I don't know what it is. And I think, I think it was much more sort of maybe consumer facing than what you are today. Would you say that's true?Will [00:04:06]: No, it's always been about building a better search algorithm, like search, like, just like the vision has always been perfect search. And if you do that, uh, we will figure out the downstream use cases later. It started on this fundamental belief that you could have perfect search over the web and we could talk about what that means. And like the initial thing we released was really just like our first search engine, like trying to get it out there. Kind of like, you know, an open source. So when OpenAI released, uh, ChachBt, like they didn't, I don't know how, how much of a game plan they had. They kind of just wanted to get something out there.Swyx [00:04:33]: Spooky research preview.Will [00:04:34]: Yeah, exactly. And it kind of morphed from a research company to a product company at that point. And I think similarly for us, like we were research, we started as a research endeavor with a, you know, clear eyes that like, if we succeed, it will be a massive business to make out of it. And that's kind of basically what happened. I think there are actually a lot of parallels to, of w between Exa and OpenAI. I often say we're the OpenAI of search. Um, because. 
Because we're a research company, we're a research startup that does like fundamental research into, uh, making like AGI for search in a, in a way. Uh, and then we have all these like, uh, business products that come out of that.
Swyx [00:05:08]: Interesting. I want to ask a little bit more about Metaphor and then we can go full Exa. When I first met you, which was really funny, cause like literally I stayed in your house in a very historic, uh, Hayes, Hayes Valley place. You said you were building sort of like a link prediction foundation model, and I think there's still a lot of foundation model work. I mean, within Exa today, but what does that even mean? I cannot be the only person confused by that because like there's a limited vocabulary or tokens you're telling me, like the tokens are the links or, you know, like it's not, it's not clear. Yeah.
Will [00:05:38]: Uh, what we meant by link prediction is that you are literally predicting, like given some texts, you're predicting the links that follow. Yes. That refers to like, it's how we describe the training procedure, which is that we find links on the web. Uh, we take the text surrounding the link. And then we predict which link follows, like, uh, you know, similar to transformers where, uh, you're trying to predict the next token, here you're trying to predict the next link. And so you kind of like hide the link from the transformer. So if someone writes, you know, imagine some article where someone says, Hey, check out this really cool aerospace startup. And they, they say spacex.com afterwards, uh, we hide the spacex.com and ask the model, like what link came next. And by doing that many, many times, you know, billions of times, you could actually build a search engine out of that because then, uh, at query time at search time. Uh, you type in, uh, a query that's like really cool aerospace startup and the model will then try to predict what are the most likely links. So there's a lot of analogs to transformers, but like to actually make this work, it does require like a different architecture than, but it's transformer inspired. Yeah.
Alessio [00:06:41]: What's the design decision between doing that versus extracting the link and the description and then embedding the description and then using, um, yeah. What do you need to predict the URL versus like just describing, because you kind of do a similar thing in a way. Right. It's kind of like based on this description, it was like the closest link for it. So one thing is like predicting the link. The other approach is like I extract the link and the description, and then based on the query, I searched the closest description to it more. Yeah.
Will [00:07:09]: That, that, by the way, that is, that is the link refers here to a document. It's not, I think one confusing thing is it's not, you're not actually predicting the URL, the URL itself that would require like the, the system to have memorized URLs. You're actually like getting the actual document, a more accurate name could be document prediction. I see. This was the initial like base model that Exa was trained on, but we've moved beyond that similar to like how, you know, uh, to train a really good like language model, you might start with this like self-supervised objective of predicting the next token and then, uh, just from random stuff on the web. But then you, you want to, uh, add a bunch of like synthetic data and like supervised fine tuning, um, stuff like that to make it really like controllable and robust. Yeah.
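As a rough illustration of the training setup Will describes (find a link in web text, hide it, and treat the linked document as the thing to predict), here is a minimal sketch of how such training pairs could be mined from HTML. The regex parsing and the note about the contrastive objective are generic assumptions, not Exa's actual pipeline.

```python
# Mine (surrounding-text, linked-url) pairs from raw HTML: the text before a
# hyperlink acts as the "query", the linked document is the retrieval target.
import re

def link_prediction_pairs(html: str, window: int = 200):
    """Yield (context, url) pairs for every <a href="..."> in the page."""
    for m in re.finditer(r'<a\s[^>]*href="([^"]+)"[^>]*>.*?</a>', html, re.S):
        url = m.group(1)
        context = re.sub(r"<[^>]+>", " ", html[max(0, m.start() - window):m.start()])
        yield context.strip(), url  # the link itself is "hidden" from the context

sample = 'Hey, check out this really cool aerospace startup: <a href="https://spacex.com">SpaceX</a>.'
print(list(link_prediction_pairs(sample)))
# In training, the context and the target document would each be embedded and
# matched with an in-batch contrastive objective, so that at query time a search
# string is scored against every document: "predict the next link", not the next token.
```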
Alessio [00:07:48]: Yeah. We just had Flo from Lindy and, uh, their Lindy started to like hallucinate Rickrolling YouTube links instead of like, uh, something. Yeah. Support guide. So. Oh, interesting. Yeah.
Swyx [00:07:57]: So round about January, you announced your series A and renamed to Exa. I didn't like the name at the, at the initial, but it's grown on me. I liked Metaphor, but apparently people can't spell Metaphor. What would you say are the major components of Exa today? Right? Like, I feel like it used to be very model heavy. Then at the AI engineer conference, Shreyas gave a really good talk on the vector database that you guys have. What are the other major moving parts of Exa? Okay.
Will [00:08:23]: So Exa overall is a search engine. Yeah. We're trying to make it like a perfect search engine. And to do that, you have to build lots of, and we're doing it from scratch, right? So to do that, you have to build lots of different. The crawler. Yeah. You have to crawl a bunch of the web. First of all, you have to find the URLs to crawl. Uh, it's connected to the crawler, but yeah, you find URLs, you crawl those URLs. Then you have to process them with some, you know, it could be an embedding model. It could be something more complex, but you need to take, you know, or like, you know, in the past it was like a keyword inverted index. Like you would process all these documents you gather into some processed index, and then you have to serve that at high throughput and low latency. And so that, and that's like the vector database. And so it's like the crawling system, the AI processing system, and then the serving system. Those are all like, you know, teams of like hundreds, maybe thousands of people at Google. Um, but for us, it's like one or two people each typically, but yeah.
Alessio [00:09:13]: Can you explain the meaning of, uh, Exa, just the story 10 to the 16th, uh, 18, 18.
Will [00:09:20]: Yeah, yeah, yeah, sure. So. Exa means 10 to the 18th, which is in stark contrast to Google, which is 10 to the hundredth. Uh, we actually have these like awesome shirts that are like 10 to the 18th is greater than 10 to the hundredth. Yeah, it's great. And it's great because it's provocative. It's like every engineer in Silicon Valley is like, what? No, it's not true. Um, like, yeah. And, uh, and then you, you ask them, okay, what does it actually mean? And like the creative ones will, will recognize it. But yeah, I mean, 10 to the 18th is better than 10 to the hundredth when it comes to search, because with search, you want like the actual list of, of things that match what you're asking for. You don't want like the whole web. You want to basically with search filter, the, like everything that humanity has ever created to exactly what you want. And so the idea is like smaller is better there. You want like the best 10 to the 18th and not the 10 to the hundredth. I'm like, one way to say this is like, you know how Google often says at the top, uh, like, you know, 30 million results found. And it's like crazy. Cause you're looking for like the first startups in San Francisco that work on hardware or something. And like, they're not 30 million results like that. What you want is like 325 results found. And those are all the results. That's what you really want with search. And that's, that's our vision. It's like, it just gives you. Perfectly what you asked for.
Swyx [00:10:24]: We're recording this ahead of your launch. 
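A minimal sketch of the three systems Will lists above (crawling, AI processing, serving), collapsed into a few functions. The fetching code, the embedding stub, and the brute-force index are generic assumptions for illustration; a real deployment shards each stage and serves from a purpose-built vector database.

```python
# Toy crawl -> process -> serve pipeline; each piece is a stand-in for a
# system that is sharded across many machines in a real search engine.
import numpy as np
import requests

def crawl(urls: list[str]) -> list[tuple[str, str]]:
    """Crawling system: fetch raw text for each URL (no queueing or politeness)."""
    return [(u, requests.get(u, timeout=10).text) for u in urls]

def embed(text: str) -> np.ndarray:
    """AI processing system: stand-in embedder (swap in a real model here)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def build_index(pages: list[tuple[str, str]]):
    """Turn processed documents into a matrix we can score queries against."""
    urls = [u for u, _ in pages]
    matrix = np.stack([embed(t) for _, t in pages])
    return urls, matrix

def serve(query: str, urls: list[str], matrix: np.ndarray, k: int = 3) -> list[str]:
    """Serving system: brute-force cosine scan; real systems use ANN indexes."""
    scores = matrix @ embed(query)
    return [urls[i] for i in np.argsort(scores)[::-1][:k]]
```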
Uh, we haven't released, we haven't figured out the, the, the name of the launch yet, but what is the product that you're launching? I guess now that we're coinciding this podcast with. Yeah.Will [00:10:36]: So we've basically developed the next version of Exa, which is the ability to get a near perfect list of results of whatever you want. And what that means is you can make a complex query now to Exa, for example, startups working on hardware in SF, and then just get a huge list of all the things that match. And, you know, our goal is if there are 325 startups that match that we find you all of them. And this is just like, there's just like a new experience that's never existed before. It's really like, I don't know how you would go about that right now with current tools and you can apply this same type of like technology to anything. Like, let's say you want, uh, you want to find all the blog posts that talk about Alessio's podcast, um, that have come out in the past year. That is 30 million results. Yeah. Right.Will [00:11:24]: But that, I mean, that would, I'm sure that would be extremely useful to you guys. And like, I don't really know how you would get that full comprehensive list.Swyx [00:11:29]: I just like, how do you, well, there's so many questions with regards to how do you know it's complete, right? Cause you're saying there's only 30 million, 325, whatever. And then how do you do the semantic understanding that it might take, right? So working in hardware, like I might not use the words hardware. I might use the words robotics. I might use the words wearables. I might use like whatever. Yes. So yeah, just tell us more. Yeah. Yeah. Sure. Sure.Will [00:11:53]: So one aspect of this, it's a little subjective. So like certainly providing, you know, at some point we'll provide parameters to the user to like, you know, some sort of threshold to like, uh, gauge like, okay, like this is a cutoff. Like, this is actually not what I mean, because sometimes it's subjective and there needs to be a feedback loop. Like, oh, like it might give you like a few examples and you say, yeah, exactly. And so like, you're, you're kind of like creating a classifier on the fly, but like, that's ultimately how you solve the problem. So the subject, there's a subjectivity problem and then there's a comprehensiveness problem. Those are two different problems. So. Yeah. So you have the comprehensiveness problem. What you basically have to do is you have to put more compute into the query, into the search until you get the full comprehensiveness. Yeah. And I think there's an interesting point here, which is that not all queries are made equal. Some queries just like this blog post one might require scanning, like scavenging, like throughout the whole web in a way that just, just simply requires more compute. You know, at some point there's some amount of compute where you will just be comprehensive. You could imagine, for example, running GPT-4 over the internet. You could imagine running GPT-4 over the entire web and saying like, is this a blog post about Alessio's podcast, like, is this a blog post about Alessio's podcast? And then that would work, right? It would take, you know, a year, maybe cost like a million dollars, but, or many more, but, um, it would work. Uh, the point is that like, given sufficient compute, you can solve the query. And so it's really a question of like, how comprehensive do you want it given your compute budget? I think it's very similar to O1, by the way. 
And one way of thinking about what we built is like O1 for search, uh, because O1 is all about like, you know, some, some, some questions require more compute than others, and we'll put as much compute into the question as we need to solve it. So similarly with our search, we will put as much compute into the query in order to get comprehensiveness. Yeah.Swyx [00:13:33]: Does that mean you have like some kind of compute budget that I can specify? Yes. Yes. Okay. And like, what are the upper and lower bounds?Will [00:13:42]: Yeah, there's something we're still figuring out. I think like, like everyone is a new paradigm of like variable compute products. Yeah. How do you specify the amount of compute? Like what happens when you. Run out? Do you just like, ah, do you, can you like keep going with it? Like, do you just put in more credits to get more, um, for some, like this can get complex at like the really large compute queries. And like, one thing we do is we give you a preview of what you're going to get, and then you could then spin up like a much larger job, uh, to get like way more results. But yes, there is some compute limit, um, at, at least right now. Yeah. People think of searches as like, oh, it takes 500 milliseconds because we've been conditioned, uh, to have search that takes 500 milliseconds. But like search engines like Google, right. No matter how complex your query to Google, it will take like, you know, roughly 400 milliseconds. But what if searches can take like a minute or 10 minutes or a whole day, what can you then do? And you can do very powerful things. Um, you know, you can imagine, you know, writing a search, going and get a cup of coffee, coming back and you have a perfect list. Like that's okay for a lot of use cases. Yeah.Alessio [00:14:43]: Yeah. I mean, the use case closest to me is venture capital, right? So, uh, no, I mean, eight years ago, I built one of the first like data driven sourcing platforms. So we were. You look at GitHub, Twitter, Product Hunt, all these things, look at interesting things, evaluate them. If you think about some jobs that people have, it's like literally just make a list. If you're like an analyst at a venture firm, your job is to make a list of interesting companies. And then you reach out to them. How do you think about being infrastructure versus like a product you could say, Hey, this is like a product to find companies. This is a product to find things versus like offering more as a blank canvas that people can build on top of. Oh, right. Right.Will [00:15:20]: Uh, we are. We are a search infrastructure company. So we want people to build, uh, on top of us, uh, build amazing products on top of us. But with this one, we try to build something that makes it really easy for users to just log in, put a few, you know, put some credits in and just get like amazing results right away and not have to wait to build some API integration. So we're kind of doing both. Uh, we, we want, we want people to integrate this into all their applications at the same time. We want to just make it really easy to use very similar again to open AI. Like they'll have, they have an API, but they also have. Like a ChatGPT interface so that you could, it's really easy to use, but you could also build it in your applications. Yeah.Alessio [00:15:56]: I'm still trying to wrap my head around a lot of the implications. 
So, so many businesses run on like information arbitrage, you know, like I know this thing that you don't, especially in investment and financial services. So yeah, now all of a sudden you have these tools for like, oh, actually everybody can get the same information at the same time, the same quality level as an API call. You know, it just kind of changes a lot of things. Yeah.Will [00:16:19]: I think, I think what we're grappling with here. What, what you're just thinking about is like, what is the world like if knowledge is kind of solved, if like any knowledge request you want is just like right there on your computer, it's kind of different from when intelligence is solved. There's like a good, I've written before about like a different super intelligence, super knowledge. Yeah. Like I think that the, the distinction between intelligence and knowledge is actually a pretty good one. They're definitely connected and related in all sorts of ways, but there is a distinction. You could have a world and we are going to have this world where you have like GP five level systems and beyond that could like answer any complex request. Um, unless it requires some. Like, if you say like, uh, you know, give me a list of all the PhDs in New York city who, I don't know, have thought about search before. And even though this, this super intelligence is going to be like, I can't find it on Google, right. Which is kind of crazy. Like we're literally going to have like super intelligences that are using Google. And so if Google can't find them information, there's nothing they could do. They can't find it. So, but if you also have a super knowledge system where it's like, you know, I'm calling this term super knowledge where you just get whatever knowledge you want, then you can pair with a super intelligence system. And then the super intelligence can, we'll never. Be blocked by lack of knowledge.Alessio [00:17:23]: Yeah. You told me this, uh, when we had lunch, I forget how it came out, but we were talking about AGI and whatnot. And you were like, even AGI is going to need search. Yeah.Swyx [00:17:32]: Yeah. Right. Yeah. Um, so we're actually referencing a blog post that you wrote super intelligence and super knowledge. Uh, so I would refer people to that. And this is actually a discussion we've had on the podcast a couple of times. Um, there's so much of model weights that are just memorizing facts. Some of the, some of those might be outdated. Some of them are incomplete or not. Yeah. So like you just need search. So I do wonder, like, is there a maximum language model size that will be the intelligence layer and then the rest is just search, right? Like maybe we should just always use search. And then that sort of workhorse model is just like, and it like, like, like one B or three B parameter model that just drives everything. Yes.Will [00:18:13]: I believe this is a much more optimal system to have a smaller LM. That's really just like an intelligence module. And it makes a call to a search. Tool that's way more efficient because if, okay, I mean the, the opposite of that would be like the LM is so big that can memorize the whole web. That would be like way, but you know, it's not practical at all. I don't, it's not possible to train that at least right now. And Carpathy has actually written about this, how like he could, he could see models moving more and more towards like intelligence modules using various tools. 
Yeah.Swyx [00:18:39]: So for listeners, that's the, that was him on the no priors podcast. And for us, we talked about this and the, on the Shin Yu and Harrison chase podcasts. I'm doing search in my head. I told you 30 million results. I forgot about our neural link integration. Self-hosted exit.Will [00:18:54]: Yeah. Yeah. No, I do see that that is a much more, much more efficient world. Yeah. I mean, you could also have GB four level systems calling search, but it's just because of the cost of inference. It's just better to have a very efficient search tool and a very efficient LM and they're built for different things. Yeah.Swyx [00:19:09]: I'm just kind of curious. Like it is still something so audacious that I don't want to elide, which is you're, you're, you're building a search engine. Where do you start? How do you, like, are there any reference papers or implementation? That would really influence your thinking, anything like that? Because I don't even know where to start apart from just crawl a bunch of s**t, but there's gotta be more insight than that.Will [00:19:28]: I mean, yeah, there's more insight, but I'm always surprised by like, if you have a group of people who are really focused on solving a problem, um, with the tools today, like there's some in, in software, like there are all sorts of creative solutions that just haven't been thought of before, particularly in the information retrieval field. Yeah. I think a lot of the techniques are just very old, frankly. Like I know how Google and Bing work and. They're just not using new methods. There are all sorts of reasons for that. Like one, like Google has to be comprehensive over the web. So they're, and they have to return in 400 milliseconds. And those two things combined means they are kind of limit and it can't cost too much. They're kind of limited in, uh, what kinds of algorithms they could even deploy at scale. So they end up using like a limited keyword based algorithm. Also like Google was built in a time where like in, you know, in 1998, where we didn't have LMS, we didn't have embeddings. And so they never thought to build those things. And so now they have this like gigantic system that is built on old technology. Yeah. And so a lot of the information retrieval field we found just like thinks in terms of that framework. Yeah. Whereas we came in as like newcomers just thinking like, okay, there here's GB three. It's magical. Obviously we're going to build search that is using that technology. And we never even thought about using keywords really ever. Uh, like we were neural all the way we're building an end to end neural search engine. And just that whole framing just makes us ask different questions, like pursue different lines of work. And there's just a lot of low hanging fruit because no one else is thinking about it. We're just on the frontier of neural search. We just are, um, for, for at web scale, um, because there's just not a lot of people thinking that way about it.Swyx [00:20:57]: Yeah. Maybe let's spell this out since, uh, we're already on this topic, elephants in the room are Perplexity and SearchGPT. That's the, I think that it's all, it's no longer called SearchGPT. I think they call it ChatGPT Search. How would you contrast your approaches to them based on what we know of how they work and yeah, just any, anything in that, in that area? 
Yeah.
Will [00:21:15]: So these systems, there are a few of them now, uh, they basically rely on like traditional search engines like Google or Bing, and then they combine them with like LLMs at the end to, you know, output some paragraphs, uh, answering your question. So they, like SearchGPT, Perplexity. I think they have their own crawlers. No. So there's this important distinction between like having your own search system and like having your own cache of the web. Like for example, so you could create, you could crawl a bunch of the web. Imagine you crawl a hundred billion URLs, and then you create a key value store of like mapping from URL to the document that is technically called an index, but it's not a search algorithm. So then to actually like, when you make a query to SearchGPT, for example, what is it actually doing? Let's say it's, it's, it could, it's using the Bing API, uh, getting a list of results and then it could go, it has this cache of like all the contents of those results and then could like bring in the cache, like the index cache, but it's not actually like, it's not like they've built a search engine from scratch over, you know, hundreds of billions of pages. It's like, does that distinction clear? It's like, yeah, you could have like a mapping from URL to documents, but then rely on traditional search engines to actually get the list of results because it's a very hard problem to take. It's not hard. It's not hard to use DynamoDB and, and, and map URLs to documents. It's a very hard problem to take a hundred billion or more documents and given a query, like instantly get the list of results that match. That's a much harder problem that very few entities on, in, on the planet have done. Like there's Google, there's Bing, uh, you know, there's Yandex, but you know, there are not that many companies that are, that are crazy enough to actually build their search engine from scratch when you could just use traditional search APIs.
Alessio [00:22:43]: So Google had PageRank as like the big thing. Is there an LLM equivalent or like any stuff that you're working on that you want to highlight?
Will [00:22:51]: The link prediction objective can be seen as like a neural PageRank because what you're doing is you're predicting the links people share. And so if everyone is sharing some Paul Graham essay about fundraising, then like our model is more likely to predict it. So like inherent in our training objective is this, uh, a sense of like high canonicity and like high quality, but it's more powerful than PageRank. It's strictly more powerful because people might refer to that Paul Graham fundraising essay in like a thousand different ways. And so our model learns all the different ways that someone refers to that Paul Graham essay, while also learning how important that Paul Graham essay is. Um, so it's like, it's like PageRank on steroids kind of thing. Yeah.
Alessio [00:23:26]: I think to me, that's the most interesting thing about search today, like with Google and whatnot, it's like, it's mostly like domain authority. So like, if you search any AI term, you get these like SEO slop websites with like a bunch of things in them. So this is interesting, but then how do you think about more timeless maybe content? So if you think about, yeah. You know, maybe the founder mode essay, right. It gets shared by like a lot of people, but then you might have a lot of other essays that are also good, but they just don't really get a lot of traction. 
Even though maybe the people that share them are high quality. How do you kind of solve that thing when you don't have the people authority, so to speak, of who's sharing, whether or not they're worth kind of like bumping up? Yeah.
Will [00:24:10]: I mean, you do have a lot of control over the training data, so you could like make sure that the training data contains like high quality sources so that, okay. Like if you, if your training data, I mean, it's very similar to like language, language model training. Like if you train on like a bunch of crap, your prediction will be crap. Our model will match the training distribution it's trained on. And so we could like, there are lots of ways to tweak the training data to refer to high quality content that we want. Yeah. I would say also this, like this slop that is returned by, by traditional search engines, like Google and Bing, you have the slop is then, uh, transferred into the, these LLMs in like a SearchGPT or, you know, other systems like that. Like if slop comes in, slop will go out. And so, yeah, that's another answer to how we're different is like, we're not like traditional search engines. We want to give like the highest quality results and like have full control over whatever you want. If you don't want slop, you get that. And then if you put an LLM on top of that, which our customers do, then you just get higher quality results or high quality output.
Alessio [00:25:06]: And I use Exa search very often and it's very good. Especially.
Swyx [00:25:09]: Wave uses it too.
Alessio [00:25:10]: Yeah. Yeah. Yeah. Yeah. Yeah. Like the slop is everywhere, especially when it comes to AI, when it comes to investment. When it comes to all of these things for like, it's valuable to be at the top. And this problem is only going to get worse because. Yeah, no, it's totally. What else is in the toolkit? So you have search API, you have ExaSearch, kind of like the web version. Now you have the list builder. I think you also have web scraping. Maybe just touch on that. Like, I guess maybe people, they want to search and then they want to scrape. Right. So is that kind of the use case that people have? Yeah.
Will [00:25:41]: A lot of our customers, they don't just want, because they're building AI applications on top of Exa, they don't just want a list of URLs. They actually want, like, the full content, like cleaned, parsed. Markdown. Markdown, maybe chunked, whatever they want, we'll give it to them. And so that's been like huge for customers. Just like getting the URLs and instantly getting the content for each URL is like, and you can do this for 10 or 100 or 1,000 URLs, whatever you want. That's very powerful.
Swyx [00:26:05]: Yeah. I think this is the first thing I asked you for when I tried using Exa.
Will [00:26:09]: Funny story is like when I built the first version of Exa, it's like, we just happened to store the content. Yes. Like the first 1,024 tokens. Because I just kind of like kept it because I thought of, you know, I don't know why. Really for debugging purposes. And so then when people started asking for content, it was actually pretty easy to serve it. But then, and then we did that, like Exa took off. So the stored content was so useful. So that was kind of cool.
Swyx [00:26:30]: It is. I would say there are other players like Jina, I think, is in this space. Firecrawl is in this space. There's a bunch of scraper companies. 
And obviously scraping is just one part of your stack, but you might as well offer it since you already do it.
Will [00:26:43]: Yeah, it makes sense. It's just easy to have an all-in-one solution. And like, we are, you know, building the best scraper in the world. So scraping is a hard problem and it's easy to get like, you know, a good scraper. It's very hard to get a great scraper and it's super hard to get a perfect scraper. So like, and, and scraping really matters to people. Do you have a perfect scraper? Not yet. Okay.
Swyx [00:27:05]: The web is increasingly closing to the bots and the scrapers, Twitter, Reddit, Quora, Stack Overflow. I don't know what else. How are you dealing with that? How are you navigating those things? Like, you know, OpenAI is, like, just paying them money.
Will [00:27:19]: Yeah, no, I mean, I think it definitely makes it harder for search engines. One response is just that there's so much value in the long tail of sites that are open. Okay. Um, and just like, even just searching over those well gets you most of the value. But I mean, there, there is definitely a lot of content that is increasingly unavailable. And so you could get through that through data partnerships. The bigger we get as a company, the more, the easier it is to just like, uh, make partnerships. But I, I mean, I do see the world as like the future where the data, the, the data producers, the content creators will make partnerships with the entities that find that data.
Alessio [00:27:53]: Any other fun use case that maybe people are not thinking about? Yeah.
Will [00:27:58]: Oh, I mean, uh, there are so many customers. Yeah. What are people doing on Exa? Well, I think dating is a really interesting, uh, application of search that is completely underserved because there's a lot of profiles on the web and a lot of people who want to find love, and they'll use it. They give, like, you know, age boundaries, you know, education level, location. Yeah. I mean, you want to, what, what do you want to do with data? You want to find like a partner who matches this education level, who like, you know, maybe has written about these types of topics before. Like if you could get a list of all the people like that, like, I think you will unblock a lot of people. I mean, there, I mean, I think this is a very Silicon Valley view of dating for sure. And I'm, I'm well aware of that, but it's just an interesting application of like, you know, I would love to meet like an intellectual partner, um, who like shares a lot of ideas. Yeah. Like if you could do that through better search and yeah.
Swyx [00:28:48]: But what is it with Jeff? Jeff has already set me up with a few people. So like Jeff, I think, is my personal Exa.
Will [00:28:55]: My mom's actually a matchmaker and has gotten a lot of people married. Yeah. No kidding. Yeah. Yeah. Search is built into the book. It's in your genes. Yeah. Yeah.
Swyx [00:29:02]: Yeah. Other than dating, like I know you're having quite some success in colleges. I would just love to map out some more use cases so that our listeners can just use those examples to think about use cases for Exa, right? Because it's such a general technology that it's hard to, uh, really pin down, like, what should I use it for and what kind of products can I build with it?
Will [00:29:20]: Yeah, sure. So, I mean, there are so many applications of Exa and we have, you know, many, many companies using us for a very diverse range of use cases, but I'll just highlight some interesting ones. 
Like one customer, a big customer is using us to, um, basically build like a, a writing assistant for students who want to write, uh, research papers. And basically like Exa will search for, uh, like a list of research papers related to what the student is writing. And then this product has, has like an LLM that like summarizes the papers to basically, it's like a next word prediction, but in, uh, you know, prompted by like, you know, 20 research papers that Exa has returned. It's like literally just doing their homework for them. Yeah. Yeah. The key point is like, it's, it's, uh, you know, it's, it's, you know, research is, is a really hard thing to do and you need like high quality content as input.
Swyx [00:30:08]: Oh, so we've had Elicit on the podcast. I think it's pretty similar. Uh, they, they do focus pretty much on just, just research papers and, and that research. Basically, I think dating, uh, research, like I just wanted to like spell out more things, like just the big verticals.
Will [00:30:23]: Yeah, yeah, no, I mean, there, there are so many use cases. So finance we talked about, yeah. I mean, one big vertical is just finding a list of companies, uh, so it's useful for VCs, like you said, who want to find like a list of competitors to a specific company they're investigating or just a list of companies in some field. Like, uh, there was one VC that told me that him and his team, like, were using Exa for like eight hours straight. Like, like that. For many days on end, just like, like, uh, doing like lots of different queries of different types, like, oh, like all the companies in AI for law or, uh, all the companies for AI for, uh, construction and just like getting lists of things because you just can't find this information with, with traditional search engines. And then, you know, finding companies is also useful for, for selling. If you want to find, you know, like if we want to find a list of, uh, writing assistants to sell to, then we can just, we just use Exa ourselves to find that is actually how we found a lot of our customers. Ooh, you can find your own customers using Exa. Oh my God. I, in the spirit of, uh, using Exa to bolster Exa, like recruiting is really helpful. It is a really great use case of Exa, um, because we can just get like a list of, you know, people who thought about search and just get like a long list and then, you know, reach out to those people.
Swyx [00:31:29]: When you say thought about, are you, are you thinking LinkedIn, Twitter, or are you thinking just blogs?
Will [00:31:33]: Or they've written, I mean, it's pretty general. So in that case, like ideally Exa would return like the, the really blogs written by people who have just. So if I don't blog, I don't show up in Exa, right? Like I have to blog. Well, I mean, you could show up. That's like an incentive for people to blog.
Swyx [00:31:47]: Well, if you've written about, uh, search on Twitter and we, we do, we do index a bunch of tweets and then we, we should be able to surface that. Yeah. Um, I mean, this is something I tell people, like you have to make yourself discoverable to the web, uh, you know, it's called learning in public, but like, it's even more imperative now because otherwise you don't exist at all.
Will [00:32:07]: Yeah, no, no, this is a huge, uh, thing, which is like search engines completely influence. They have downstream effects. They influence the internet itself. They influence what people choose to create. 
And so Google, because they're a keyword based search engine, people like kind of like keyword stuff. Yeah. They're, they're, they're incentivized to create things that just match a lot of keywords, which is not very high quality. Uh, whereas Exa is a search algorithm that, uh, optimizes for like high quality and actually like matching what you mean. And so people are incentivized to create content that is high quality, that like, they create content that they know will be found by the right person. So like, you know, if I am a search researcher and I want to be found by Exa, I should blog about search and all the things I'm building because, because now we have a search engine like Exa that's powerful enough to find them. And so the search engine will influence like the downstream internet in all sorts of amazing ways. Yeah. Uh, whatever the search engine optimizes for is what the internet looks like. Yeah.
Swyx [00:33:01]: Are you familiar with the term McLuhanism? No? Uh, it's this concept that, uh, like first we shape tools and then the tools shape us. Okay. Yeah. Uh, so there's like this reflexive connection between the things we search for and the things that get searched. Yes. So like once you change the tool, the tool that searches, the, the, the things that get searched also change. Yes.
Will [00:33:18]: I mean, there was a clear example of that with 30 years of Google. Yeah, exactly. Google has basically trained us to think of search as Google. Google is search, like, in people's heads. Right. One, uh, hard part about Exa is like, uh, ripping people away from that notion of search and expanding their sense of what search could be. Because like when people think search, they think like a few keywords, or at least they used to, they think of a few keywords and that's it. They don't think to make these like really complex paragraph long requests for information and get a perfect list. ChatGPT was an interesting like thing that expanded people's understanding of search because you start using ChatGPT for a few hours and you go back to Google and you like paste in your code and Google just doesn't work and you're like, oh, wait, Google doesn't work that way. So like ChatGPT expanded our understanding of what search can be. And I think Exa is, uh, is part of that. We want to expand people's notion, like, Hey, you could actually get whatever you want. Yeah.
Alessio [00:34:06]: I searched on Exa right now: people writing about learning in public. I was like, is it gonna come out with Alessio? Am I, am I there? You're not, because, bro, it's, so, no, it's, it's so much about, because it thinks about learning, like in public, like public schools, and like focuses more on that. You know, it's like how, when there are like these highly overlapping things, like this is like a good result based on the query, you know, but like, how do I get to Alessio? Right. So if you're like in these subcultures, I don't think this would work in Google well either, you know, but I, I don't know if you have any learnings.
Swyx [00:34:40]: No, I'm the first result on Google.
Alessio [00:34:42]: People writing about learning in public, you're not first result anymore, I guess.
Swyx [00:34:48]: Just type learning public in Google.
Alessio [00:34:49]: Well, yeah, yeah, yeah, yeah. But this is also like, this is in Google, it doesn't work either. That's what I'm saying. 
It's like how, when you have a movement...Will [00:34:56]: There's confusion about what you mean; your intention is a little... Yeah.Alessio [00:35:00]: Yeah, I'm using a term that I didn't invent but am kind of taking over, and there's just so much about that term already that it's hard to overcome. If that makes sense, because public schools is... well, it's hard to overcome.Will [00:35:14]: Public schools, you know. So there's the right solution to this, which is to specify more clearly what you mean. And I'm not expecting you to do that, but the right interface to search is actually an LLM.Swyx [00:35:25]: Like you should be talking to an LLM about what you want, and the LLM translates its knowledge of you, or knowledge of what people usually mean, into a query that Exa uses, which you have called autoprompts, right?Will [00:35:35]: Yeah, but that's a very light version of it. Really, the right answer is that very soon the interface to search, and really to everything, will be an LLM. And the LLM just has full knowledge of you, right? So we're kind of building for that world. We're skating to where the puck is going to be. And since we're moving to a world where LLMs are the interface to everything, you should build a search engine that can handle complex LLM queries, queries that come from LLMs. Because you're probably too lazy, I'm too lazy too, to write a whole paragraph explaining, okay, this is what I mean by this word. But an LLM is not lazy. The LLM will spit out a paragraph or more explaining exactly what it wants. You need a search engine that can handle that. Traditional search engines like Google or Bing are actually designed for humans typing keywords. If you give a paragraph to Google or Bing, they just completely fail. Exa can handle paragraphs, and we want to be able to handle them more and more until it's perfect.Alessio [00:36:24]: What about opinions? Do you have lists? When you think about the list product, do you think about just finding entries? Do you think about ranking entries? I'll give you a dumb example. On Lindy, I've been building a bot that every week gives me the top fantasy football waiver pickups. But every website has different opinions: you should pick up these five players, these five players. When you're making lists, do you want to also be ranking and telling people what's best? Or are you mostly focused on just surfacing information?Will [00:36:56]: There's a really good distinction between filtering to things that match your query and then ranking based on your preferences. Filtering is objective: does this document match what you asked for? Whereas ranking is more subjective: what is the best? Well, it depends what you mean by best, right? So first, table stakes is to get the filtering into a perfect place where every document matches what you asked for. No search engine can do that today. And then ranking, you know, there are all sorts of interesting ways to do that, where maybe you have the user specify more clearly what they mean by best. And if the user doesn't specify, you do your best based on what people typically mean by best.
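A rough sketch of the "LLM as the interface to search" idea discussed here: a model expands a terse, ambiguous request into a paragraph that spells out intent before it ever hits the search engine, and ranking is then applied as an explicit, user-chosen metric on top of the filtered results. The prompt, the model name, and the ranking placeholder are assumptions for illustration; this is not Exa's actual autoprompt implementation.

```python
# Sketch: an LLM rewrites a vague query into a detailed paragraph, the search
# engine filters to documents that match that meaning, and ranking is applied
# separately. Assumes the openai and exa-py packages; details are illustrative.
from openai import OpenAI
from exa_py import Exa

llm = OpenAI()                          # assumes OPENAI_API_KEY is set
exa = Exa(api_key="YOUR_EXA_API_KEY")   # placeholder key

vague_query = "people writing about learning in public"

# 1) Expand the terse request into a paragraph that spells out what is meant.
expansion = llm.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": (
            "Rewrite the user's search request as one detailed paragraph that "
            "spells out exactly what kind of pages they want. Disambiguate terms "
            "(here, 'learning in public' means sharing your work and notes online, "
            "not public schooling).")},
        {"role": "user", "content": vague_query},
    ],
)
detailed_query = expansion.choices[0].message.content

# 2) Filtering: every returned document should match the stated meaning.
results = exa.search(detailed_query, num_results=25).results

# 3) Ranking: keep the engine's order here; swap in an explicit key function
#    (e.g. recency for blog posts, employee count for companies) once you have it.
for r in results[:10]:
    print(r.title, r.url)
```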
But ideally the user can specify: oh, when I say best, I actually mean ranked by the number of people who visited that site, let's say, as one example ranking. Or, what I mean by best, let's say you're listing companies, is the ones that have the most employees, or something like that. There are all sorts of ways to rank a list of results that are not captured by something as subjective as best. Yeah.Alessio [00:38:00]: I mean, it's like, who are the best NBA players in history? Everybody has their own answer. Right.Will [00:38:06]: Right. But I mean, the search engine should definitely, even if you don't specify, do as good a job as possible. Yeah, totally. It's a new topic to people, because we're not used to a search engine that can handle a very complex ranking system. You think to type in best basketball players and not something more specific, because you know that's the only thing Google could handle. But if Google could handle, oh, basketball players ranked by number of shots scored on average per game, then you would do that. But you know they can't do that.Swyx [00:38:32]: Yeah. That's fascinating. So you haven't used the word agents, but you're kind of building a search agent. Do you believe that it is agentic in nature? Do you think that term is distracting?Will [00:38:42]: I think it's a good term. I do think everything will eventually become agentic, and so then the term will lose power. But yes, what we're building is agentic in the sense that it takes actions. It decides when to go deeper into something. It has a loop, right? It feels different from traditional search, which is an algorithm, not an agent. Ours is a combination of an algorithm and an agent.Swyx [00:39:05]: My reflection from seeing this in the coding space, where there's basically a classic framework for thinking about this stuff, is the self-driving levels of autonomy, right? Level one to five. Typically the level-five ones all failed because there's full autonomy and we're not there yet, and people like control. People like to be in the loop. So the level ones were Copilot first and now it's Cursor and whatever. So I feel like if it's too agentic, too magical, like a one-shot where I stick a paragraph into the text box and it spits the answer back to me, it might feel like I'm too disconnected from the process and I don't trust it. As opposed to something where I'm more intimately involved with the research product. I see. So, wait, sticking to the basketball example, like best basketball player, but instead of best you actually get to customize it with whatever the metric is that you care about. Yeah. I'm still not a basketballer, but, you know, people like to be involved. My thesis is that level-five agents failed because people like to kind of have drive assist rather than full self-driving.Will [00:40:15]: I mean, a lot of this has to do with how good agents are.
Like at some point, if agents for coding are better than humans at all tasks, then having humans in the loop just blocks things. But yeah, we're not there yet.Swyx [00:40:25]: So in a world where we're not there yet, what you're pitching us is that you're kind of going all the way there. I think o1 is also very full self-driving: you don't get to see the plan, you don't get to affect the plan yet. You just fire off a query and then it goes away for a couple of minutes and comes back. Right. Which is effectively what you're saying you're going to do too. And you think there's...Will [00:40:42]: There's an in-between. Okay. So in building this product, we're exploring new interfaces, because what does it mean to kick off a search that goes and takes 10 minutes? Is that a good interface? Because what if the search is actually wrong, or it's not exactly specified to what you mean? Which is why you get previews. Yeah. You get previews. So it is iterative, but ultimately, once you've specified exactly what you mean, then you kind of do just want to kick off a batch job. Right. So perhaps what you're getting at is that there's this barrier with agents where you have to explain the full context of what you mean, and a lot of failure modes happen when you don't. Yeah. There are failure modes from the agent just not being smart enough, and then there are failure modes from the agent not understanding exactly what you mean. And there's a lot of context that is shared between humans that is lost between humans and this new creature.Alessio [00:41:32]: Yeah. Because people don't know what's going on. I mean, to me, the best example is system prompts: why are you writing "You're a helpful assistant"? Of course it should be helpful. But people don't yet know, like, can I assume that you know that? And now people write, oh, you're a very smart software engineer, but you never make mistakes. Like, were you going to try to make mistakes before? So I think people don't yet have an understanding. With driving, people know what good driving is: don't crash, stay within a certain speed range, follow the directions. I don't really have to explain all of those things, I hope. But with AI and models and search, people are like, okay, what do you actually know? What are your assumptions about how you're going to do search? Can I trust it? Can I influence it? So I think that's kind of the middle ground: before you go ahead and do all the search, can I see how you're doing it, and then maybe help steer you, kind of show your work. Yeah.Will [00:42:32]: No, I mean, yeah. Sure. I'm saying, even if you've crafted a great system prompt, you want to be part of the process itself, because the system prompt doesn't capture everything. Right. A system prompt is like getting to choose the person you work with: oh, I want a software engineer who thinks this way about code. But then even once you've chosen that person, you can't just give them a high-level command and have them go do it perfectly. You have to be part of that process.
So yeah, I agree.Swyx [00:42:58]: Just a side note: my favorite system prompt programming anecdote now is the Apple Intelligence system prompt that someone prompt-injected and got to see. The Apple Intelligence prompt has words like, please don't hallucinate. And it's like, of course we don't want you to hallucinate, right? So it's exactly what you're talking about: we should train this behavior into the model, but somehow we still feel the need to inject it into the prompt. And I still don't think we're very scientific about it. I think it's almost like cargo culting. We have this magical, turn-around-three-times, throw-salt-over-your-shoulder thing before we do something, and it worked the last time, so let's just do it the same way now. There's no science to this.Will [00:43:35]: I do think a lot of these problems might be ironed out in future versions. Right. And they might hide the details from you. They actually all have a system prompt that's like, you are a helpful assistant. You don't actually have to include it, even though it might actually be the way they've implemented it in the backend. It should be done in RLHF.Swyx [00:43:52]: Okay. One question I was just kind of curious about for this episode, and I'm going to try to frame it in terms of the general AI search wars: you're one player in that, there's Perplexity, ChatGPT Search, and Google, but there's also the B2B side. We had Drew Houston from Dropbox on, and he's competing with Glean, and we've also had DD from Glean on. Is there an appetite for Exa for my company's documents?Will [00:44:19]: There is appetite, but I think we have to be disciplined, focused. I mean, we're already taking on perfect web search, which is a lot. But ultimately we want to build a perfect search engine, which for a lot of queries definitely involves your personal information, your company's information. So yeah, the grandest vision of Exa is perfect search really over everything, every domain. You know, we're going to have an Exa satellite, because satellites can gather information that is not available publicly. Gotcha. Yeah.Alessio [00:44:51]: Can we talk about AGI? We never talk about AGI, but you had this whole tweet about o1 being the biggest kind of AI step function towards it. Why does it feel so important to you? I know there's always criticism saying, hey, it's not the smartest, Sonnet is better, blah, blah, blah. What do you see? You say this is what Ilya sees or Sam sees; what do they see?Will [00:45:13]: I've just been connecting the dots. I mean, this was the key thing that a bunch of labs were working on, which is: can you create a reward signal? Can you teach yourself based on a reward signal? Whether you're trying to learn coding or math, if you could have one model be a grading system that says you have successfully solved this programming assessment, and another model be the generative system that says here are a bunch of programming assessments, you could train on that. Basically, whenever you can create a reward signal for some task, you can just generate a bunch of tasks for yourself.
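A toy illustration of the loop Will is describing: define a verifiable reward, generate tasks for yourself, keep the attempts the grader accepts, and train on them. Everything here is a stand-in, a random guesser on arithmetic rather than a real model or OpenAI's o1 recipe, but the shape of the loop is the point.

```python
# Toy self-improvement loop: generate tasks, grade attempts with a verifiable
# reward, and "train" only on the successful traces. Purely illustrative.
import random

def generate_task():
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a}+{b}", a + b                # (prompt, ground truth the grader knows)

def model_attempt(prompt, knowledge):
    # A stand-in "model": answer from what it has already learned, else guess.
    return knowledge.get(prompt, random.randint(2, 18))

def reward(prediction, truth):
    return 1 if prediction == truth else 0  # the verifiable reward signal

knowledge = {}                              # stands in for model weights
for step in range(5):
    wins = []
    for _ in range(1000):                   # generate a batch of tasks
        prompt, truth = generate_task()
        pred = model_attempt(prompt, knowledge)
        if reward(pred, truth):             # keep only the successful attempts
            wins.append((prompt, pred))
    for prompt, pred in wins:               # "train" on your own correct data
        knowledge[prompt] = pred
    print(f"step {step}: solved {len(wins)} of 1000")
```

Each pass, the stand-in model solves more of its own generated tasks, which is the scaling intuition in the surrounding discussion: once the reward is checkable, the data generates itself.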
And see that, oh, on two of these thousand you did well, and then you just train on that data. It's basically creating your own data for yourself, and all the labs were working on that; OpenAI built the most impressive product doing it. And it's very easy now to see how that could scale to just solving programming or solving mathematics, which sounds crazy, but everything about our world right now is crazy.Alessio [00:46:07]: And so I think if you remove that whole "oh, that's impossible" and you just think really clearly about what's now possible with what they've done with o1, it's easy to see how that scales. How do you think about older GPT models then? Should people still work on them? Obviously they just had the new Haiku. Is it even worth spending time making these models better, versus, you know, Sam talked about o2 that day, so obviously they're spending a lot of time on it? But then you have maybe the GPU-poor, which are still working on making Llama good, and then you have the follower labs that do not have an o1-like model out yet.Will [00:46:47]: This kind of gets into what the ecosystem of models will be like in the future, and is there room, or is everything just going to be o1-like models? Well, there's definitely a question of inference speed, and certain things like o1 take a long time, because that's the thing. Well, o1 is two things. One, it's bootstrapping itself, it's teaching itself, and so the base model is smarter. But then it also has this inference-time compute where it can spend many minutes or many hours thinking. So even the base model, which is also fast (it doesn't have to take minutes), is better, smarter. I believe all models will be trained with this paradigm. You'll want to train on the best data, but there will be many different sizes of models from many different companies, I believe. Because, I mean, it's hard to predict, but I don't think OpenAI is going to dominate every possible LLM for every possible use case. I think for a lot of things you just want the fastest model, and that might not involve o1 methods at all.Swyx [00:47:42]: I would say, if you were to take Exa being o1 for search literally, you really need to prioritize search trajectories, almost maybe paying a bunch of grad students to go research things, and then you track what they search and what the sequence of searching is, because it seems like that is the gold mine here, the chain of thought or the thinking trajectory. Yeah.Will [00:48:05]: When it comes to search, I've always been skeptical of human-labeled data. Okay. Yeah, please. We tried something at Exa recently where me and a bunch of engineers on the team labeled a bunch of queries, and it was really hard. You have all these niche queries, and you're looking at a bunch of results and trying to identify which ones match the query. It's talking about the intricacies of some biological experiment or something. I have no idea. I don't know what matches, and what labelers like me tend to do is just match by keyword. I'm like, oh, I don't know.
Oh, this document matches a bunch of keywords, so it must be good. But then you're actually completely missing the meaning of the document. Whereas an LLM like GPT-4 is really good at labeling. And so I actually think you can get by, which we are right now doing, using LLM...
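A sketch of the LLM-based relevance labeling Will describes, judging whether a document matches the meaning of a query rather than its keywords. The prompt, the model name, and the example pair are assumptions for illustration, not Exa's actual labeling pipeline.

```python
# Sketch: use an LLM as a relevance judge for (query, document) pairs.
# Assumes the openai package; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def label_relevance(query: str, document: str) -> int:
    """Return 1 if the document genuinely answers the query's meaning, else 0."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": (
                "You judge search relevance. Reply with a single digit: 1 if the "
                "document genuinely matches the meaning of the query (not just "
                "shared keywords), 0 otherwise.")},
            {"role": "user", "content": f"Query: {query}\n\nDocument: {document}"},
        ],
    )
    answer = (resp.choices[0].message.content or "").strip()
    return 1 if answer.startswith("1") else 0

# Hypothetical example pair
print(label_relevance(
    "startups building AI writing assistants for students",
    "Acme Essay Co. builds an AI tool that drafts and edits student research papers.",
))
```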
Jason Fried is the co-founder and CEO of 37signals, makers of the popular Basecamp project management software, which is still growing and very profitable after 20 years. He is going long and still having fun as an engaged CEO, building great products with great marketing that stands out. Jason has long advocated for software founders to avoid VC funding and build sustainable businesses that are great for customers and generate healthy profits for the owners. His best-selling book, Rework, shared his practical approach for entrepreneurs. In this wide-ranging interview, Jason discusses these important topics: How the core principles of Basecamp remain focused on simplicity and essential tools for project management after 20 years. Why Basecamp targets small businesses, avoiding the enterprise market that many competitors chase. Why software should fit the needs of the user, rather than forcing users to adapt to complex tools for big companies. How profitability, not growth, provides the freedom to innovate and explore new ideas. Why competing against your costs is more important than competing against other companies. How small teams have the agility to win against big companies. Quote from Jason Fried, co-founder and CEO of 37signals “My sense of independence has always been important to me. That's why I became an entrepreneur: to do things the way I wanted to do them. Otherwise, why be an entrepreneur? It's true when you work, you're working for your customers. That's always going to be true. But you still have a sense of independence. You get to make your own decisions. “What people don't realize is when you raise money, you don't really work for yourself anymore. You really don't. You work for someone else's schedule, for someone else's fulfillment, for someone else's return. That never appealed to me. “I want our products to explain themselves. I want our success to explain ourselves. I don't want to have to explain myself on a quarterly basis to somebody who's trying to get a return out of me. I'm not interested. So for all those reasons, it just wasn't right to raise big funding.” Links Jason Fried on LinkedIn Jason Fried on Twitter 37Signals on LinkedIn 37Signals website Basecamp website HEY website Ruby on Rails website Podcast Sponsor – Full Scale This week's podcast is sponsored by Full Scale, one of the fastest-growing software development companies in any region. Full Scale vets, employs, and supports over 300 professional developers, designers, and testers in the Philippines who can augment and extend your core dev team. Learn more at fullscale.io. The Practical Founders Podcast Tune into the Practical Founders Podcast for weekly in-depth interviews with founders who have built valuable software companies without big funding. Subscribe to the Practical Founders Podcast using your favorite podcast app or view on our YouTube channel. Get the weekly Practical Founders newsletter and podcast updates at practicalfounders.com/newsletter.
Gm! In this episode, we're joined by Dan Smith to discuss avoiding noise in crypto data, the issue with FDV, and stablecoin supply across chains. Finally, we end the podcast by diving into the concept of conditional liquidity, MEV, and Solana's sandwich attack problem. Enjoy! Resources Solana MEV Landscape From Helius Labs: https://x.com/__lostin__/status/1877444749298381156 Dan's Post on the Wash Trading Bot: https://x.com/smyyguy/status/1873461466541486096 Top 10 Chains by Stablecoins: https://x.com/smyyguy/status/1875613474774249724 -- Follow Dan: https://x.com/smyyguy Follow Mert: https://x.com/0xMert_ Follow Jack: https://x.com/whosknave Follow Lightspeed: https://twitter.com/Lightspeedpodhq Subscribe to the Lightspeed Newsletter: https://blockworks.co/newsletter/lightspeed Utilize the Solana Dashboard by Blockworks Research: http://solana.blockworksresearch.com/ -- Ledger, the global leader in digital asset security, proudly sponsors the Lightspeed podcast. As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your Solana assets. Buy a LEDGER™ device now and build confidently, knowing your SOL are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Renaud Partners has built the most elite network of native crypto marketers globally. They create custom, expert teams to support founders with transformative strategy work. Trusted by some of the best founders, VC firms, and ecosystem leaders in the business, helping their teams expedite their marketing success and catalyze their growth. If you're a founder or a VC looking for support for your teams, I highly recommend connecting with them at RenaudPartners.com -- Subscribe on YouTube: https://bit.ly/43o3Syk Subscribe on Apple: https://apple.co/3OhiXgV Subscribe on Spotify: https://spoti.fi/3OkF7PD Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- (00:00) Introduction (03:01) Analyzing a Wash Trading Bot (09:15) Avoiding Noise in Crypto Data (16:18) Ledger Ad (17:18) Renaud Partners Ad (18:06) The Issue with FDV (30:43) Stablecoin Supply Across Chains (42:29) Thoughts on Conditional Liquidity and MEV (50:21) Solana's Sandwiching Problem -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Mert, Jack, and our guests may hold positions in the companies, funds, or projects discussed.
Mistakes in investing, lost money, and failed decisions are rarely discussed publicly. Yet to reduce risk and make better decisions, it is important to talk through failed cases and the lessons learned from them. We are sharing part of a conversation about personal investing in which community members discuss their own investment missteps. Full episode: https://open.spotify.com/episode/2GJOMevP8TUP5xMKQFRwVz?si=2idc8HzGQLadiDA6nl4YRw Members of CEO Club shared their own experience of failed investments: — Taras Kozak, founder of UNIVER Investment Group; — Oleksandr Pitenko, CEO of TOTALCHEQ and board member at Ukrainian companies; — Taras Kyrychenko, Chairman of the Supervisory Board of Nova Poshta and co-founder of TOLOKA.VC; — Serhii Zhuikov, private wealth manager, private investor, and active-duty serviceman. At CEO Club, the investors shared their thinking on questions such as: — the lack of mechanisms to protect investor rights in Ukraine; — assessing investment risk; — protocols for making investment decisions; — the risks of investing too early and too much. Timestamps 00:00 "Everyone has failed investments" 01:13 "Screw-ups always hurt" 04:24 Assets devalued by raider attacks 07:13 "Screw-ups are an opportunity to draw conclusions" 12:42 "You have to invest constantly" 17:07 Learn from professionals CEO Club is a club for business leaders. More about the club: https://ceoclub.com.ua Facebook https://www.facebook.com/CEOClubUkraine Instagram https://www.instagram.com/ceoclubukraine Telegram https://t.me/CEOnotes
What does it take to redefine venture capital while empowering founders to disrupt industries and drive meaningful change? Meet Dan Bowyer, a bold investor from Superseed VC reshaping the game with a no-nonsense approach and a passion for impactful innovation. In this episode, Dan shares his journey from entrepreneur to VC and dives deep into what's wrong with traditional venture capital. He reveals the personality traits that define world-changing founders, the importance of customer obsession, and the stark realities of boardroom dynamics. Dan's candid insights and thought-provoking lessons are as entertaining as they are eye-opening. Whether you're an aspiring founder, an experienced investor, or someone intrigued by the startup world, this conversation will teach you: Why venture capital doesn't work for 99.9% of founders—and what needs to change. The three personality traits that separate great founders from the rest. How misaligned boards and outdated practices can derail even the best startups. At the heart of it all, Dan's passion for connecting people, products, and processes shines through. As he says, “The next wave of commercial activity is about making money and doing good—it's not just about the bottom line anymore.” Curious for more? Listen to the full episode here and dive deeper into Dan's bold vision and unfiltered perspective. Quotes: (3:44) "Most VCs have never worked in a startup, so how can they advise one?" (9:42) "The greatest founders are not just charming—they're determined, disagreeable, and unrelenting in disrupting the status quo." (22:02) "We've moved on from 'don't be a dick' to 'do some good'—and that's how we transform the future." Timestamps (01:37) Bridging the Gap: Why Venture Capital Fails 99% of Founders (03:44) Startup Struggles: The Hard Truths Founders Need to Hear (09:42) Decoding Greatness: Traits of Exceptional Founders (12:58) From Prototype to Profit: Why Customer Obsession Wins (17:01) Boardroom Battles: Misaligned Boards and Startup Success (21:02) A New Era: Making Money While Doing Good Don't miss this chance to learn from Dan Bowyer's dynamic journey, sharp insights, and inspiring approach to venture capital. Press play, and let's spark some bold ideas together! Send us a text Support the show Join the Podcast Newsletter: Link
Jeremy Au discussed the nuanced challenges faced by venture capitalists in assessing startups, emphasizing the importance of local expertise and first-principles thinking. For example, he shared how his understanding of Singaporean founders—gained through years of personal interactions—provided him with an edge over Silicon Valley VCs. He highlighted the risks of fraud in Southeast Asia, citing cases like Zilingo and comparing them to global examples such as Theranos and FTX, which underscore the need for robust due diligence. He explained how VCs use strategies like hiring fraud analysts or leveraging local networks to address these risks. Drawing parallels to the 19th-century whaling industry, Jeremy illustrated how power law dynamics dominate VC returns, with only about 6% of investments producing 60% of total returns, as seen in analysis from Horsley Bridge. This perspective frames VCs as high-performance scouts navigating a market where a single unicorn, like Grab or Gojek, can make or break a fund's success. Watch, listen or read the full insight at https://www.bravesea.com/blog/vc-edge-vs-fraud Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea English: Spotify | YouTube | Apple Podcasts Bahasa Indonesia: Spotify | YouTube | Apple Podcasts Chinese: Spotify | YouTube | Apple Podcasts Vietnamese: Spotify | YouTube | Apple Podcasts
Good morning from Pharma and Biotech daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. Verdiva has entered the competitive obesity market with a $410 million debut, focusing on next-generation therapies. The company's lead asset is an oral GLP-1 receptor agonist that can be dosed weekly, aiming to improve accessibility and affordability. In 2024, the biopharma industry saw a significant increase in first-time VC funding, totaling $7.7 billion over 137 deals. J.P. Morgan predicts a strong year for biopharma in 2025, with an uptick in M&A activity and FDA decisions to watch. The FDA has proposed setting a bar for weight-loss therapies as the obesity space heats up. Lilly has won Medicare coverage for Zepbound in sleep apnea, while Vanda criticizes the FDA's conduct after a drug rejection. Layoffs have affected the gene therapy space, with companies like Resilience and Scribe cutting staff. Overall, the outlook for biopharma in 2025 is cautiously optimistic, with opportunities for innovation and growth.
Jason Fried aside, Soleio Cuervo is more than a chin on the internet. He's more than an early design leader at Facebook and Dropbox. And he's more than an epic angel investor. He's a truth seeker and truth teller. Or at least the truth as he sees it. Soleio was one of Facebook's earliest designers, joining in 2005 when the company was still finding its identity. During his six-year tenure, he helped shape Facebook's culture of “move fast and break things,” where mistakes were acceptable but slowness was not. This philosophy was crystallized during the controversial launch of News Feed, which showed that transformative products could succeed despite initial user resistance if the team moved quickly to address concerns. After Facebook, he joined Dropbox, first as an angel investor, then full-time in 2012 to lead design when they had just a handful of designers. The transition revealed stark cultural differences - while Facebook was a paranoid, competitive insurgent focused on speed, Dropbox prioritized reliability and stability, aiming to build “space shuttle quality software.” This experience taught Soleio how company culture gets set early and is deeply influenced by the business model and founders' personalities. Following his operational roles, Soleio had great success as an angel investor before building Combine, a hybrid venture firm and design studio. However, the “Happy Meal” approach of bundling investment and services proved too complex, creating role confusion when advising founders. He ultimately returned to angel investing, finding it provided more authenticity and freedom to give direct advice without the constraints of managing outside capital. His investments include Figma, Vanta, Vercel, Replit, Perplexity, and many more. The parts of this conversation I find myself continuing to reflect on are just how formative company cultures are and how often that's taken for granted by otherwise overwhelmed entrepreneurs. In our first post about indie we wrote: Like cement, the cultural foundation for new projects and companies sets early. Those who focus on raising outside capital and achieving fundable milestones have a very difficult time getting off that VC treadmill. Those who focus on creating value for customers and generating positive cash flow from the very beginning are able to make their own decisions independent of competing outside interests. This is true of fundraising, product, and execution. As Soleio says so eloquently in this conversation - culture is the deepest moat one can create. The “culture of Soleio” seems to contain a bunch of contradictions — care for craft with an obsession for speed. A clearly massive ambition coupled with a desire to be a “trim boat” that can be lean and focused. With Soleio, you can't spell culture without CULT, and in this conversation, he mentions a piece of writing that had a profound impact on him that we would be remiss not to share here: The Cult of Done Manifesto by Bre Pettis and Kio Stark. There are three states of being. Not knowing, action and completion. Accept that everything is a draft. It helps to get it done. There is no editing stage. Pretending you know what you're doing is almost the same as knowing what you are doing, so just accept that you know what you're doing even if you don't and do it. Banish procrastination. If you wait more than a week to get an idea done, abandon it. The point of being done is not to finish but to get other things done. Once you're done you can throw it away. Laugh at perfection.
It's boring and keeps you from being done. People without dirty hands are wrong. Doing something makes you right. Failure counts as done. So do mistakes. Destruction is a variant of done. If you have an idea and publish it on the internet, that counts as a ghost of done. Done is the engine of more. You may be piecing together a theme here that I hope comes through loud and clear. Accelerating your time to done is the ultimate informant for what's next. If you can master that loop, you're well on your way to having the impact in life and culture that you aspire to. This conversation with Soleio is a good reminder of that, and I hope you enjoy listening as much as we enjoyed recording it. Happy New Year!
EV Trucks for Commercial Use w/ George Gebhart of Voltu Motor - AZ TRT S05 EP45 (261) 12-30-2024 What We Learned This Week Voltu Motor provides the EV battery and drivetrain for commercial trucks Creating new EV tech that is lighter, with an extended charge for longer range, and an inverter to charge on site Deals in Class 3 trucks for business – utility, fleet, delivery, repair Furthers the cause of EV adoption to help the environment and be more efficient Guest: George Gebhart, CEO and founder of Voltu Motor https://www.voltumotor.com/ A trailblazer in the electric vehicle (EV) industry, leveraging nearly two decades of expertise in engineering, innovation, and leadership to redefine sustainable transportation. A bioengineer by training, George began his career conducting research in robotics and brain-computer interfaces (BCI), focusing on semi-autonomous systems for handicapped individuals. This groundbreaking work laid the foundation for his passion for electrification and technology-driven solutions. With over 15 years of experience in electric drives and EVs, George founded Voltu to revolutionize the EV market with an integrated, proprietary powertrain solution. Voltu's innovations—such as the patented immersive cooling battery technology and the bidirectional inverter that enables the industry-first onboard fast charging—are a testament to his technical ingenuity and determination to eliminate barriers to EV adoption. George's journey as a leader is marked by a rare combination of intellectual curiosity, mathematical acumen, and unwavering resilience. These traits not only underpin his own achievements but also inspire his team. “I haven't personally seen or heard about a group more tough or resilient, with an unmatched work ethic,” George says of Voltu's team, who have worked closely with him over the past years to prepare for scaling the company. This collaboration and shared vision fuel his conviction that Voltu can make a lasting impact in the EV industry. A seasoned speaker, including as a TEDx presenter, George emphasizes perseverance, teamwork, and the transformative power of innovation in addressing global challenges. Today, as Voltu secures multi-million-dollar contracts and advances its vision of energy freedom, George continues to lead the charge toward a cleaner, smarter, and more connected world. Leading a Greener Future in Class 3 Trucking—Meet Voltu Motors Voltu Motor Redefines Sustainability with the Future of Class 3 Electric Trucks As the electric vehicle (EV) market hurtles toward a projected $255 billion valuation by 2030, Voltu Motor Inc. is emerging as a transformative force in Class 3 trucking. Founded in Argentina and now headquartered in Riverside, California, Voltu Motors is setting a new industry standard with its innovative technologies and groundbreaking approach to electrification. The company's flagship Voltu 3 Pickup Work Truck addresses the critical needs of businesses in the commercial sector. With a 350-mile range, patented Immerse Cooling Technology, and Vehicle-to-Everything (V2X) capabilities, it's more than a truck—it's a mobile energy solution poised to revolutionize urban freight and fleet operations. Notes: Seg 1 Truck chassis with an embedded electric battery Voltu deals with Class 3 commercial trucks They plan to launch a midsize pickup truck in 2025 The benefits are: extended charge, lightweight, charging inverter, cloud-based monitoring in the future In the future they could have a setup for car conversion, from fuel to EV.
They are a technology company, and already delivering vehicles. EV industry challenges: adoption, cost, battery range, charging infrastructure. Electric powertrain and drivetrain to integrate into vehicles. Class 3 commercial vehicles are used for fleets, delivery, and pickup trucks: utility, service, maintenance, equipment trucks, supply chain. New electric vehicle chassis, battery plus drivetrain. Supplied to legacy automakers, who do the final assembly of the vehicle. Range of 350 miles; the battery is smaller and lighter with lithium cells, with better immersion cooling and more efficiency. Over the long term EV batteries will get smaller. Patent on charging technology: an inverter for a fast DC charge. Can plug the vehicle into any industrial outlet and charge right at your business. Seg 2 Fast-charge tech, on-vehicle charging – can charge vehicle to vehicle on the road. More adoption of EV cars, and market share is growing. Legacy automakers struggle to transition to EV fleets. Voltu started about 10 years ago as a technology development company, with 20 people stationed in California and future plans to hire 400 more. George's background is in bioengineering: semi-autonomous vehicle and motor tech, then motor drivetrain and battery, with vertical integration of the powertrain. Investment in the Voltu Motor company so far has been startup, family, and some VC; raising money in VC and capital markets. Goal of 1 million commercial vehicles on the road in 10 years. Solving the tech issues for Class 3: power, towing capacity, range, charging. Seg 3 Government fleets, utility, airport trucks, ports, any type of delivery. B2B-type businesses, or businesses that want to do a conversion to EV. This is a common vehicle in commercial use, pickup trucks, with more EVs coming. Compatible with fast-charging stations. Typically a fleet truck would charge on site first thing in the morning; a truck on the road needs energy to do tasks, and it has the capacity to charge vehicle to vehicle when someone is in the field. Warranty on the EV batteries is 10 years, plus a second life in energy storage afterwards. Manufacturing in the USA and certified to US and California standards (CARB plus EPA standards, a federal baseline of standards), good all across the USA. Battery for light and medium commercial vehicles; other vehicle types are Class 8, for example long-haul and big vehicles. Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council *Includes Best of AZ Tech Council show from 2/12/2023 Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech ‘Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT Thanks for Listening. Please Subscribe to the BRT Podcast. AZ Tech Roundtable 2.0 with Matt Battaglia The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving.
Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more… AZ TRT Podcast Home Page: http://aztrtshow.com/ ‘Best Of' AZ TRT Podcast: Click Here Podcast on Google: Click Here Podcast on Spotify: Click Here More Info: https://www.economicknight.com/azpodcast/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/ Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
Some people don't dare make wishes, while others seem to have every wish come true; some are "intense wishers," others are "casual wishers." After a (not so) scientific discussion, we found that wishes are still worth making: behind them is a process of discovering your passions, of continually getting to know yourself and growing, and it also fits the psychological idea of the "self-fulfilling prophecy." At the start of 2025, we made this episode in the hope that, together with you, our wishes will come true. Also, at the end of the episode there is not only an original wishing song carefully prepared by our post-production colleague, but also a montage of New Year wishes from other teammates, including members of 声动早咖啡. Don't miss it! In this episode: Xu Tao, host of 声东击西, who wishes for just a little each time and has an extremely high wish-fulfillment rate; Babs, marketing at 声动活泼, who writes prompts for the gods and believes wishing calls for a sense of ritual; Dika, head of post-production at 声动活泼, who used to make plans every year only to find her worries piling up before her wishes came true; Saide, post-production for 声东击西, who doesn't dare make wishes for fear of troubling the gods; Kexuan, producer of 声东击西, who doesn't want to make wishes for fear of burdening herself. Main topics: [06:01] A daydreaming master wisher shares her secrets [13:55] How to cultivate a healthy wishing mindset [21:06] What gets in the way of wishes coming true? [27:41] The 声动活泼 team's fully "buffed" New Year wishes. TORRAS New Year jiama gift box: The Bai people use jiama woodblock prints to carry their hopes for a good life, not waiting for luck to fall from the sky but "making gods" with their own hands and blessing themselves. In the new year, our old friend TORRAS (图拉斯) hopes to pass on that positive energy with a delightfully playful jiama gift box. The box contains 12 "cyber deities" exclusively designed by TORRAS, such as the "God of Unlimited Parking" for car owners and the "God of Extended Holidays" that no office worker can refuse. Besides these adorable jiama prints, the box also includes ink, a roller, and a paper kit, so you can print your favorite jiama yourself. Finished prints can be tucked into a phone case or displayed in an acrylic frame. Of course, the box also includes TORRAS's flagship products: an ultra-thin power bank with a built-in cable, an O2L kickstand case, and a phone lanyard, all in limited-edition Chinese New Year red. Standing at the start of 2025, join the 声东击西 team and open the new year with a different way of making wishes! If you are also interested in jiama as an intangible-cultural-heritage art form, follow TORRAS's official Xiaohongshu account (https://sourl.cn/QaAiKr) and take part in the jiama DIY activity TORRAS has posted there for a chance to win jiama-themed gifts! Purchase links: - TORRAS O2L kickstand case: https://sourl.cn/ZtKfDq - TORRAS ultra-thin power bank with built-in cable: https://sourl.cn/m86Zk6 - TORRAS Q Pro kickstand case: https://sourl.cn/H7AYxU - TORRAS jiama gift box: https://sourl.cn/9SNc27 The team's New Year jiama: Xu Tao's ChatGPT drawings of "world peace" and "a prosperous podcast"; the team's hand-drawn jiama; a teammate used the tools in the gift box to make their own jiama, "Commanding the Gods"; and a jiama version of "If I had one wish, I'd wish for three more wishes." Submit to 声东击西: 声东击西 is now open for submissions. If everyday life has given you any observation or reflection you'd like to share, whether a social phenomenon that caught your attention or a book, film, or TV series that inspired you, we welcome you to write it down and share it with us. We look forward to your letters. Submission entry: (https://eg76rdcl6g.feishu.cn/share/base/form/shrcne1CGVaSeJwtBriW6yNT2dg) Join the "2025 声动胡同会员计划" (Shengdong Hutong membership plan) and become a supporter of good content! The membership plan is how listeners pay to support 声动活泼 in continuing to make good content and try more creative experiments. As a thank-you, we send you a "podcast notes" article every week, you can listen for free to all of our paid albums and episodes, including the paid shows launching in 2025, and you get priority access to our offline events. We hope you'll become a supporter of good content!
(https://sourl.cn/hSdzkY) Behind the scenes: Producer: Kexuan; Post-production: Saide; Operations: George; Design: Fantuan. Join us: 声动活泼 is hiring. We are looking for full-time teammates as a Commercial Partnerships Manager and a Podcast Producer, plus a content research intern for 声动早咖啡. See the openings overview (https://eg76rdcl6g.feishu.cn/docx/XO6bd12aGoI4j0xmAMoc4vS7nBh?from=from_copylink) and click the corresponding link for job details and application guides. Location: Dongcheng District, Beijing. We look forward to having you join us. Business cooperation: 声动活泼 business inquiries (https://sourl.cn/6vdmQT) About 声动活泼: "Colliding with the world through sound," 声动活泼 is dedicated to providing people with a steady stream of food for thought. Our other podcasts: 不止金钱 (new in 2024) (https://www.xiaoyuzhoufm.com/podcast/65a625966d045a7f5e0b5640), 跳进兔子洞第三季 (new in 2024) (https://www.xiaoyuzhoufm.com/podcast/666c0ad1c26e396a36c6ee2a), 声东击西 (https://etw.fm/episodes), 声动早咖啡 (https://sheng-espresso.fireside.fm/), What's Next|科技早知道 (https://guiguzaozhidao.fireside.fm/episodes), 反潮流俱乐部 (https://fanchaoliuclub.fireside.fm/), 泡腾 VC (https://popvc.fireside.fm/), 商业WHY酱 (https://msbussinesswhy.fireside.fm/). You're welcome to interact with us on Jike (https://okjk.co/Qd43ia), Weibo, and other social media; search for 声动活泼 to find us. You can also write to us at ting@sheng.fm. For more news about 声动活泼, scan the QR code to add 声小音 and stay in touch with us outside the show! 声小音 https://files.fireside.fm/file/fireside-uploads/images/8/8dd8a56f-9636-415a-8c00-f9ca6778e511/hdvzQQ2r.png Special Guests: 赛德 and 迪卡.
In this podcast episode, Amir Bormand sits down with Jason Luce, CTO of Paperless Parts, to discuss the evolution from DevOps to platform engineering. They dive deep into the reasons behind this shift, the cultural and technical changes required, and how organizations can implement platform engineering to enhance developer experience and business outcomes. Jason shares insights into the challenges and opportunities that come with adopting platform engineering, drawing from his experience leading the transformation at Paperless Parts. Whether you're leading a DevOps team or considering platform engineering for your organization, this episode is packed with practical advice and thought-provoking takeaways. Key Takeaways: Platform Engineering vs. DevOps: Platform engineering builds on DevOps principles but focuses on productizing systems to streamline developer workflows and reduce cognitive load. The Role of Developer Experience: Improving developer experience is key to increasing productivity and morale while reducing inefficiencies. Adopting Big Tech Practices: Lessons from companies like Netflix and Spotify can inform smaller organizations, but they must adapt these practices to their own scale and needs. The Importance of Consistency: Standardized systems and processes are critical for ensuring security, observability, and operational efficiency. Vision and Communication: Clear goals, a detailed roadmap, and open communication are essential for successful transitions to platform engineering. Timestamped Highlights: [00:01:00] Introduction to Jason Luce and Paperless Parts [00:03:00] What is Platform Engineering, and how does it differ from DevOps? [00:05:49] The skills gap: Moving from DevOps to platform engineering [00:07:58] The role of developer experience in platform engineering [00:10:17] Lessons from large-scale tech companies on platform adoption [00:14:42] Transitioning from a startup mindset to scalable engineering practices [00:18:09] What prompted Paperless Parts to embrace platform engineering? [00:20:47] Best practices and learnings from a year into the transition [00:23:14] The end goal: Building a scalable, efficient platform for the future Jason also discusses the importance of balancing short-term sacrifices for long-term gains and why fostering a culture of collaboration and clarity is critical for success. Don't miss this episode if you're looking to understand how platform engineering can transform your organization! Guest: Jason Luce has a proven track record as a customer-centric technology executive with over twenty-five years of experience building products that drive business success. He spent the first half of his career as a hands-on engineer and grew into leadership roles as he discovered a love for developing talent. Jason is currently CTO at Paperless Parts. Jason has driven results through many phases of scale journeys from pre-revenue startups to high-growth VC-backed companies to global public companies. At the core of each of these journeys is his ability to build global teams, develop strong leaders, and lead highly efficient operations that enable the rapid delivery of large-scale SaaS products. https://www.linkedin.com/in/jasonrluce/
Sheel Mohnot is the Co-founder of Better Tomorrow Ventures, an early stage fintech focused fund leading rounds in pre-seed and seed-stage companies.Our conversation weaves through Sheel's two decades of building and investing in fintech, starting BTV, and why they started a fintech-focused accelerator, The Mint.Fun facts on Sheel, he was a contestant on the Zoom Bachelor during COVID lockdowns, in a Justin Bieber music video, got married in the Taco Bell Metaverse, and was once banned from Uber.We talk lessons competing against Stripe before selling his first company, common fintech startup pitfalls, and the trick every VC should use when fundraising.Timestamps:(00:00) Intro(03:13) Why fintech makes disproportionate positive change(07:49) Most interesting opportunities in fintech today(09:09) The accountant shortage(12:59) Common early fintech startup pitfalls(16:09) Building Fee Fighters to cut payment processing fees(21:15) Lessons competing with Stripe(21:57) Getting acquired by Groupon and adding $600m in market cap(25:11) Biggest first-time startup mistakes(29:21) Getting $10k in Uber credit via paid Google ad(32:13) Investing in Flexport(36:19) Navigating hot vs underhyped rounds(43:39) Sheel's domain auction company(53:18) How he started angel investing(54:57) Spotify acquiring his podcast “The Pitch”(59:55) Why accelerators succeed and fail(01:06:07) The Mint, BTV's fintech-focused accelerator(01:09:56) Camp BTV in the Santa Cruz Mountains(01:11:41) Early days of NerdWallet(01:14:55) Raising $75m BTV Fund 1 with Jake to fill a gap in the market(01:18:36) Understanding Fund of Funds incentives(01:22:15) Importance of references in VC fundraising(01:24:02) $150m BTV Fund 2(01:27:13) Importance of following-on when leading roundsReferenced:BTV: https://www.btv.vc/ The Mint: https://www.themint.vc/ Fee Fighters: https://techcrunch.com/2011/09/23/feefighters-launches-payment-gateway-samurai/ Follow Sheel:Twitter: https://x.com/pitdesi LinkedIn: https://www.linkedin.com/in/smohnot/ Follow Turner:Twitter: https://twitter.com/TurnerNovak LinkedIn: https://www.linkedin.com/in/turnernovak Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/
09 Jan 2025. We speak to Knight Frank's global data centre head on the back of Hussain Sajwani's multi-billion investment pledge to Donald Trump. Plus, tech funding guru Philip Bahoshy reveals why VC firms slashed their regional investments last year. And, is Dubai's off-plan pipeline finally about to outpace demand? That's one of the questions posed in Espace Real Estate's H2 report... we put it to boss John Lyons. See omnystudio.com/listener for privacy information.
The latest episode of Skin in the Game VC Podcast featured an inspiring conversation with Tom Wallace, Saxon Baum, and the dynamic sister duo, Amy and Shannon Wu. As founders backed by Florida Funders, Amy and Shannon shared their unique entrepreneurial journeys, highlighting the resilience and innovation shaping their paths in the tech world. Their story is a testament to the power of collaboration, as they support and inspire one another while carving out success in competitive markets.Amy Wu, founder of Manifest, and Shannon Wu, founder of Open Home, have both built groundbreaking companies leveraging the transformative potential of AI. Manifest addresses Gen Z's mental health challenges through personalized, AI-driven tools, creating bite-sized wellness interactions to combat loneliness and anxiety. Meanwhile, Open Home is revolutionizing smart speakers, enabling more intuitive and seamless voice interactions across a wide range of devices. Both sisters emphasized how AI allows lean teams to achieve significant impact, unlocking personalized solutions that enhance human connection and creativity.Their journey reflects a shared commitment to innovation and a deep bond that has guided them through challenges and triumphs. They credit their early experiences at Stanford and the personalized support from Florida Funders as key drivers of their success. Florida Funders' hands-on approach contrasts with the broader yet less personal resources of larger VC firms, offering a uniquely curated experience that has helped Amy and Shannon thrive.As these visionary founders continue to push boundaries, their work underscores the exciting potential of AI and the importance of fostering meaningful human connections in an increasingly digital world. Startups like Manifest and Open Home showcase how technology can be a tool for empowerment, solving real-world challenges with creativity and purpose. Tune in to this latest episode of Skin in the Game. Hosted on Acast. See acast.com/privacy for more information.
Driving innovation and VC synergy at Carrefour
Hélène Labaume leads Carrefour's unique approach to innovation by merging its venture capital and innovation teams. She emphasizes that while corporate venture capital (CVC) is common, Carrefour's partnership with the French VC firm daphni sets them apart. "We bring expertise in retail while daphni excels in VC investments," Hélène explains, highlighting the seamless collaboration. This model allows Carrefour to align startup investments with operational goals, ensuring mutual growth and innovation.
Tackling sustainability with startups
One standout initiative discussed by Hélène focuses on reducing energy consumption in Carrefour stores. A recent pilot with Axiom Cloud optimizes energy use for refrigeration systems. "This pilot could scale to over 10,000 stores globally, becoming one of our biggest achievements in innovation," she shares. Sustainability extends to tackling food waste, where Carrefour collaborates with startups like Ida to enhance forecasting and reduce waste in fruit and vegetable supply chains.
Leveraging AI for retail transformation
Carrefour's adoption of AI has been a game-changer, especially in personalization and operational efficiency. Hélène outlines the use of AI for tasks like analyzing procurement data and enabling dynamic customer feedback through generative AI. "With over 10 billion transactions in our data lake, we're creating ultra-personalized customer experiences," she says. This approach positions Carrefour as a leader in digital retail, integrating advanced technologies to meet evolving customer needs.
Bridging startups and retail giants
Carrefour doesn't limit its startup collaborations to investments alone. Hélène emphasizes the flexibility of working with startups through pilots, even when they don't secure funding. She highlights the importance of cross-fertilization, stating, "We connect startups with the right internal and external networks, creating opportunities beyond Carrefour." This ecosystem-driven approach fosters agility and innovation for both parties.
Hélène Labaume's dual role at Carrefour exemplifies how large corporations can integrate innovation with venture capital to drive impactful change. From sustainability pilots to AI-powered customer experiences, Carrefour's strategies offer valuable lessons for startups and industry leaders alike.
Find Hélène on: LinkedIn: Hélène Labaume | Twitter/X: @CarrefourGroup
Find Ben on: LinkedIn: Ben Costantini | Twitter/X: @bencostantini
--
Be sure to follow Sesamers on Instagram, LinkedIn, and X for more cool stories from the people we catch during the best Tech events!
On episode 394 of Animal Spirits, Michael Batnick and Ben Carlson discuss: all sorts of predictions for the upcoming year, the AI bubble, the roaring 2020s of stock market returns, financial regrets at age 80, housing is the business cycle, when to sell a big winner, the VC drop-off, what to do when you get super rich, 50 things a man should not know, and much more! Join the thousands of parents who trust Fabric to protect their family. Apply today in just minutes at https://meetfabric.com/spirits. Sign up for The Compound newsletter and never miss out: thecompoundnews.com/subscribe Subscribe to The Unlock newsletter: https://www.advisorunlock.com/subscribe Find complete show notes on our blogs: Ben Carlson's A Wealth of Common Sense Michael Batnick's The Irrelevant Investor Feel free to shoot us an email at animalspirits@thecompoundnews.com with any feedback, questions, recommendations, or ideas for future topics of conversation. Investing involves the risk of loss. This podcast is for informational purposes only and should not be regarded as personalized investment advice or relied upon for investment decisions. Michael Batnick and Ben Carlson are employees of Ritholtz Wealth Management and may maintain positions in the securities discussed in this video. All opinions expressed by them are solely their own opinion and do not reflect the opinion of Ritholtz Wealth Management. The Compound Media, Incorporated, an affiliate of Ritholtz Wealth Management, receives payment from various entities for advertisements in affiliated podcasts, blogs and emails. Inclusion of such advertisements does not constitute or imply endorsement, sponsorship or recommendation thereof, or any affiliation therewith, by the Content Creator or by Ritholtz Wealth Management or any of its employees. For additional advertisement disclaimers see here https://ritholtzwealth.com/advertising-disclaimers. Investments in securities involve the risk of loss. Any mention of a particular security and related performance data is not a recommendation to buy or sell that security. The information provided on this website (including any information that may be accessed through this website) is not directed at any investor or category of investors and is provided solely as general information. Obviously nothing on this channel should be considered as personalized financial advice or a solicitation to buy or sell any securities. See our disclosures here: https://ritholtzwealth.com/podcast-youtube-disclosures/ Learn more about your ad choices. Visit megaphone.fm/adchoices
On December 10, Google unveiled its quantum computing chip Willow, a chip with 105 physical qubits whose headline features are its astonishing computing speed and error-correction capability. According to reports, Willow can complete a standard benchmark computation in under five minutes, a task that would take the world's fastest supercomputer roughly 10^25 years — a figure that exceeds the age of the universe! Willow's release has set off a wave of discussion online. What is quantum computing? What breakthroughs does Google's quantum chip actually deliver? What bottlenecks remain? In today's episode we dig into these questions together.

Guests in this episode
Diane 丁教, co-founder of 声动活泼 and host of 「科技早知道」
Yaxian, producer of 「科技早知道」
赵智沉, PhD in high-energy theoretical physics, software engineer, and author of 《什么是物理?用物理学的视角看世界》

About the 界环 AI audio glasses
Stuck choosing a Lunar New Year gift? This episode's recommendation, the 界环 AI audio glasses, is a novel and fun option. They combine glasses and earphones in one product: put them on and you can listen to audio and take calls. The open-ear sound design means nothing sits in the ear canal, so even older family members can wear them for long stretches without discomfort. The 界环 AI audio glasses are made by 蜂巢科技, a Xiaomi ecosystem company. They take the form of ordinary glasses, weigh an ultra-light 30.7 g for comfortable wear, and come with 16 interchangeable fashion frames. Bring a pair home for your parents this New Year and let them enjoy a bit of tech fun. Order via the dedicated link (https://sourl.cn/Hrc7bu) and give customer service the code "声动活泼" to receive a free pair of replacement temples worth 99 yuan. Listeners in Hubei can stack the national subsidy for 20% off, up to 300 yuan!

Main topics
[04:24] What exactly is computation? Thinking about its essence beyond the Turing-machine framework
[07:41] Wave-particle duality in the quantum world: a spinning coin and a cat that is half dead, half alive
[11:07] A huge jump in information density: classical bits are like switches, qubits are like dials
[15:07] Willow launches and Bitcoin drops — is quantum computing about to break encryption?
[20:24] From 100 qubits to 1,000,000: quantum error correction comes first
[26:00] If quantum computing reaches scale, how will humanity's future change?

Further reading
Quantum Computers Cross Critical Error Threshold (https://www.quantamagazine.org/quantum-computers-cross-critical-error-threshold-20241209/)
Our quantum computing roadmap (https://quantumai.google/roadmap)
Explore quantum applications (https://quantumai.google/applications)

Production credits
Producer: Yaxian
Post-production: Jack, 迪卡
Operations: George
Design: 饭团

2024 Sound Time Capsule
Our annual "Sound Time Capsule" episode is now live on 「声东击西」. It collects the voices from the past year, from individuals to the era, that deserve to be remembered and treasured. Search for "声音胶囊" within your podcast platform to listen.

Business cooperation
Business inquiries for 声动活泼 (https://sourl.cn/6vdmQT)

Join the 2025 声动胡同会员计划 and support good content!
In 2021 we launched the 声动胡同会员计划, a pure support program that backs 「科技早知道」 and helps 声动活泼 keep making good content for you. 1,714 people have already joined. Become a supporter of good content here! (https://sourl.cn/MW3MmU)
As a thank-you, we send a weekly email newsletter, 「播客手记」, sharing our creative thinking and behind-the-scenes stories — for example, host Diane's ongoing first-hand observations of living and working in the US, 「声东击西」 host 徐涛's notes from Apple's Worldwide Developers Conference (WWDC), and sound designer 迪卡's piece on how Notre-Dame de Paris is reconstructing its former soundscape after the fire (https://mp.weixin.qq.com/s/6--XOn24Hze6WlEOYT8tBw). Through the newsletter you can talk with us in depth and see more of our thinking beyond the show.
Members can also listen to all of our paid albums and single episodes for free, including 「不止金钱」 and 「跳进兔子洞第三季」 as well as paid shows launching in 2025, and get priority access to our offline events.
We welcome every witness to and beneficiary of good content to support our ongoing work:
- 声动胡同会员计划: 365 yuan/year
- Returning members renew at 30% off: 255 yuan/year
- Students get an education discount: 120 yuan/year
Become a supporter of good content here! (https://sourl.cn/NLvVVW)

Join us
声动活泼 is hiring a full-time business partnerships manager and podcast producer, as well as a content intern for 声动早咖啡; see the recruitment page for details.

About 声动活泼
"Using sound to collide with the world" — 声动活泼 is dedicated to providing a steady stream of food for thought. Our other podcasts include 声动早咖啡 (https://www.xiaoyuzhoufm.com/podcast/60de7c003dd577b40d5a40f3), 声东击西 (https://etw.fm/episodes), 吃喝玩乐了不起 (https://www.xiaoyuzhoufm.com/podcast/644b94c494d78eb3f7ae8640), 反潮流俱乐部 (https://www.xiaoyuzhoufm.com/podcast/5e284c37418a84a0462634a4), 泡腾 VC (https://www.xiaoyuzhoufm.com/podcast/5f445cdb9504bbdb77f092e9), 商业WHY酱 (https://www.xiaoyuzhoufm.com/podcast/61315abc73105e8f15080b8a), 跳进兔子洞 (https://therabbithole.fireside.fm/), and 不止金钱 (https://www.xiaoyuzhoufm.com/podcast/65a625966d045a7f5e0b5640).
Find us on 即刻 (https://okjk.co/Qd43ia), Weibo, and other social platforms by searching 声动活泼. We'd also love to hear from you by email: ting@sheng.fm
Scan the QR code to add 声小音 (https://files.fireside.fm/file/fireside-uploads/images/4/4931937e-0184-4c61-a658-6b03c254754d/gK0pledC.png) and stay in touch with us beyond the show.

Special Guests: 赵智沉 and 雅娴.
Highlights from their conversation include:
CCR's Background and Journey to FleetPulse (0:41)
FleetPulse Overview (1:19)
Trailer Telematics Industry Primer (4:41)
Technological Advancements in Trailers (5:10)
The Rise of Cargo Theft (7:50)
Understanding Trailer Neglect (8:18)
Impact of Cargo Theft Trends (10:07)
Value of Smart Trailers (12:31)
Case Study on FleetPulse Benefits (14:48)
Safety Concerns in US-Mexico Trade (18:07)
Establishing a Startup Culture (23:15)
Investor Expectations in Industrial Tech (26:07)
Hiring at FleetPulse (29:13)
Supply Chain Prediction for 2025 (30:11)
Coaching and Mentoring Approach and Final Thoughts (30:55)

Dynamo is a VC firm led by supply chain and mobility specialists that focuses on seed-stage enterprise startups.
Find out more at: https://www.dynamo.vc/
The annual New Year's Comic Spree continues, as Zach attempts to lower his large stack of comics that he didn't get to this past year!
Come along on the ride and hit us up on social media with your thoughts on the books he's reading, and maybe give him some comics to check out for the Spree as well!

On the Docket this week:

FML #1 and #2 (Dark Horse Comics)
Written by Kelly Sue DeConnick
Art by David Lopez
Colors by Cris Peter
Letters by Clayton Cowles

Warm Fusion #1 (DSTLRY)
Written by Scott Hoffman
Art by Alberto Ponticelli
Colors by Lee Loughridge
Letters by Steve Wands

Knights vs Samurai #1-4 (Image Comics)
Written by David Dastmalchian
Art by Fede Mele
Colors by Ulises Arreola
Letters by Andworld Design

Black Canary: Best of the Best #1-2 (DC Comics)
Written by Tom King
Art by Ryan Sook
Colors by Dave Stewart
Letters by Clayton Cowles

In Bloom #1 (BOOM! Studios)
Written by Michael W. Conrad
Art by John J. Pearson (with Lola Bonato)
Letters by Pat Brosseau

Ultimate Spider-Man #1-12 (Marvel Comics)
Written by Jonathan Hickman
Art by Marco Checchetto, with guest David Messina on issue #5
Colors by Matthew Wilson
Letters by VC's Cory Petit and Joe Sabino

---------------------------------------------------
Check out Dreampass and all their killer tracks on Spotify!
---------------------------------------------------
Join the Patreon to help us keep the lights on, and internet connected! https://www.patreon.com/tctwl
---------------------------------------------------
Listen to my other podcast!
TFD: Nerdcast
And I am also part of the team over at...
I Read Comic Books!
---------------------------------------------------
Want to try out all the sweet gigs over on Fiverr.com? Click on the link below and sign up!
https://go.fiverr.com/visit/?bta=323533&brand=fiverrcpa
---------------------------------------------------
Follow on Instagram!
The Comics That We Love
Follow on TikTok!
The Comics That We Love
Follow on Bluesky!
@comicsthatwelove.bsky.social
Julia Krieger is an experienced venture investor and startup founder who recently co-founded PariPassu, a private investor app that gives fellow founders, operators, and accredited investors access to angel-invest in the most highly competitive deals in the venture ecosystem and to be part of the village that helps those founders succeed. As Managing Partner at Pari Passu Venture Partners (PPVP), a founder-led, founder-backed early-stage venture firm investing in the future of commerce, SaaS, and consumer tech, Julia has backed 19 companies to date. Previously, Julia founded and ran VillageLuxe, a venture-backed peer-to-peer luxury rental marketplace, for which she was named to the Forbes 30 Under 30 list. She was also an investor at Insight Partners, where she invested over $100M in SaaS and marketplace companies globally. Julia serves on the board of YPO Metro New York and is active in Harvard alumni communities.
Felipe Camposano, Managing Partner at Taram Capital, shares his journey as a pioneer in Chile's venture capital ecosystem and his efforts to support startups across Latin America. He discusses his transition from entrepreneur to investor, emphasizing his focus on mission-driven founders and leveraging networks for deal sourcing and evaluation. Felipe highlights Taram Capital's unique approach to scaling startups through corporate partnerships, the importance of founder obsession with solving problems, and the critical role of localization in the region. He also shares insights from portfolio companies like YAPP and Webdox, showcasing the value of strong teams and founder-led growth.

In this episode, you'll learn:
[02:05] Felipe's Path to VC: From entrepreneur to investor, Felipe reflects on early lessons in founder selection at Fundación Chile.
[07:36] Evolving the Chilean VC Landscape: Launching one of Chile's first VC funds to support local entrepreneurs and regional expansion.
[10:31] The Genesis of Taram Capital: Leveraging corporate partnerships to help startups scale their go-to-market strategies.
[13:15] Focus Areas for Investment: B2B SaaS, fintech, e-commerce, and data-driven solutions with a localization focus.
[17:40] Investing with Fewer but Deeper Relationships: Building a network-driven approach to founder evaluation and support.
[26:24] Spotlight on Portfolio Companies: YAPP simplifies health-tech connections; Webdox fosters business growth through collaborative contract management.

The non-profit organization Felipe is passionate about: ChileFlorida.org

About Felipe Camposano
Felipe Camposano is the Managing Partner at Taram Capital and Strategic Planning Chair at ASEM-BIO (The Chilean Biotechnology Association). He co-founded Lucien Biotech, holds board seats at DICTUC's Agro Biotech Center, and previously directed New Business Development at Fundación Chile. Felipe founded INTRA, an enterprise knowledge management firm, and co-founded NEXION, which offers tech development services in China. He has advised CORFO and ProChile, taught technology marketing at Universidad Católica, written for BioNexa, and holds patents recognized with innovation awards. His career spans venture creation, investment, and international business development across Latin America and beyond.

About Taram Capital
Taram Capital is a Santiago-based venture capital firm specializing in early-stage investments in software, information technology, and healthcare. With a primary focus on Chile and the United States, Taram has invested in companies such as Andes STR, Keirón, Regcheq, Dentalink, TaskHuman, Radar (Financial Services), SimpliRoute, Lirmi, LAP Global, and Webdox.

Subscribe to our podcast and stay tuned for our next episode.
In this episode of the Build Tech Stack Equity Podcast, Eduardo Kupper, the founder of Airborne Ventures, shares his unique professional journey from an Air Force pilot in Brazil to a Wharton MBA graduate and successful venture capitalist. The conversation delves into his diverse experiences in management consulting at McKinsey, tech entrepreneurship through Rocket Internet, and leading the venture capital arm of one of Brazil's biggest banks. Eduardo outlines the challenges and strategies in raising a $5 million fund in Brazil, emphasizing the importance of adding value to others. He discusses the focus of his first fund and plans for raising a $10 million second fund, highlighting the importance of investing in fintech, healthcare, and B2B companies in Brazil. Eduardo also elaborates on the VC landscape in Brazil, the significance of productivity and efficiency, and the potential for Brazilian startups to compete on a global scale. Finally, Eduardo provides valuable insights for potential LPs and founders on navigating the Brazilian venture capital market.

If your company is looking to scale its AI initiatives, head over to Tesoro AI (www.tesoroai.com). We are experts in AI strategy, staff augmentation, and AI product development.

Founder Bio:
Eduardo is a seasoned VC with 17+ years of experience across various industries and sectors, including entrepreneurship, venture capital, management consulting, private equity, and investment banking. Previously, he served as Head of VC at Bradesco's venture capital arm, where he managed two funds totaling R$800 million of invested capital and R$1.7 billion in AUM. As an experienced VC investor, he has made 70+ deals across several stages and industries over the last decade, with successful exits including a sale of more than R$1 billion and a Nasdaq IPO, and has sat on the boards of several companies in different sectors and stages. As an entrepreneur, he founded and co-founded several companies, including Remessa Online, Finpass, MAR Ventures, and the Wharton Alumni Angels. As an investor with a track record of successful deals, he has made marquee investments in companies including Beep Saúde, D1, Conexa, Jeitto, Asaas, and Skyone. He is highly connected and well networked within the venture capital industry, with close relationships and co-investments across several VC funds, angel investors, and internet and tech startups. He brings deep expertise, strategic guidance, and an eye for promising opportunities, helping identify and invest in promising startups and drive returns for investors in ESG-aligned companies, reflecting his commitment to responsible and sustainable investing.

Time Stamps:
00:49 Eduardo's Unconventional Path to Venture Capital
05:44 Insights on Venture Building and Finding Founders
08:21 Corporate Venture Capital Experience
10:00 Launching Airborne Ventures
14:06 Investment Strategies and Fund Management
17:34 Reflections on Fund Performance
24:08 Navigating Investment Strategies
26:04 Investment Stages and Requirements
28:21 Leading vs. Following in Investments
30:07 Investment Structures in Brazil
34:18 B2B vs. B2C in Brazil
41:08 Challenges and Strategies for B2B Startups
47:15 Looking Ahead to 2025

Resources
Follow Darius Gant
LinkedIn - https://www.linkedin.com/in/m-darius-gant-cpa-44650aa/
Company Website - www.tesoroai.com
Subscribe on Spotify: https://open.spotify.com/show/4uDVNgsK3iNeu7yU4Inu2n
Subscribe on Apple Podcast: https://podcasts.apple.com/ae/podcast/the-darius-gant-show/id1527996104
Company website: https://airborne.ventures/
LinkedIn: https://www.linkedin.com/company/airborne-ventures