The epigenome is essentially the control software for plants and animals that dictates when, where, and to what level different genes in the genome are expressed. Modulating the epigenome has the potential to upgrade crops in real time, during the season, to adjust photosynthesis and warn crops of upcoming droughts, diseases, or other threats. While traditional plant breeding and genetics require trait selections to be made prior to planting and establishing the crops, epigenetics enables these traits to be managed after the plant is already growing. Today we are joined by Travis Bayer, who recently founded Decibel Bio to develop spray-on epigenetic instructions that enable a new level of control over crop traits. Travis and Decibel are leading the development of highly targeted epigenetic innovations for crops, while other startups are looking at epigenetic reprogramming to develop human therapeutics. Send us a text
Prime Minister Mark Carney met with U.S. President Donald Trump face-to-face in Washington, D.C. for the first time on Tuesday. Tensions between the two leaders' nations are at a historic high: a trade war, escalating tariffs and threats against Canada's sovereignty have all been major issues since Trump's re-election. For many Canadians, the central question in the recent federal election was how the next prime minister would handle U.S. aggression. Carney is now facing that reality. Doug Saunders, The Globe's international affairs columnist, joins The Decibel to analyze the Carney-Trump meeting and what it signals about the Canada–U.S. relationship now. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
Now that the election is over, we're bringing you another edition of Campaign Call, The Decibel's weekly election panel that makes sense of the major issues. Where does Pierre Poilievre go from here without a seat in the House of Commons? What kind of Prime Minister will Mark Carney be, and how will he actually handle Trump? How do the NDP rebuild? Globe columnists Robyn Urback, Andrew Coyne and Gary Mason are on the show to discuss the path ahead for the leaders and their parties. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirty-fifth episode of the first season.
NASA researchers are working on reducing landing gear noise to make flying quieter.
Episode 43 of The Flight Deck as aired on www.afterhours.fm 1. Bochan - Endlessness (Extended Mix) 2. Marc Dawn - Expander [2K25] (Wavetraxx Extended Mix) 3. Robin McIlmoyle & G Summers - The Story of You (Extended Mix) 4. Pinkque & Roxanne Emery - Better Than This (ILYIN & Darix Extended Remix) 5. Better Than This (ILYIN & Darix Extended Remix) - Gargantua (Extended Mix) 6. Monzilla feat. Lucy Faye - Aether (Extended Mix) 7. U-Mount - So Why (Extended Mix) 8. Brian Murphy - Save Me From Myself (Extended Mix) 9. Tom Exo - No Rainbow Without Rain (Extended Mix) 10. Kenny McAuley - Digital Frontier (Extended Mix) 11. Laucco - Tor Lara 12. Mike Bound - here R U Now (Extended Mix) 13. Alternate High - One Life (Extended Mix) 14. Cold Face & Henry Moe - Twilight (Extended Mix) 15. Allan Berndtz - Recognition (Extended Mix) 16. Dalmoori & Liquid Dream - Lighthouse (Extended Mix) 17. Steve Allen - Constant Motion (Extended Mix) 18. Photographer - Airport (London & Niko 2025 Extended Remix) 19. Time Geometry & Kezia dt - Be My Parachute (Extended Mix) 20. Lyterra - Momento (Extended Mix) 21. Rixson - Vision (2025 Rework) (Extended Version) 22. Angelus & Ren Faye - With Me Now (Extended Mix) 23. Sequence Six - Silver Lining (Extended Mix) 24. Mark Sherry - Heroes (Extended Mix) Piston Pounder of the Month 25. SASH! - Ecuador (12" Mix)
This week, host Jorden Guth joins Martijn Mensink of Dutch & Dutch to discuss what aspects of the company's beloved 8c speakers are made where, how the decision is made to craft some things in-house and outsource other elements, why driver manufacturing and DSP development are very different enterprises, and what the future holds for the design and manufacture of products of its type. Sources: “Martijn Mensink On the Magic of Dutch & Dutch”: https://www.soundstage.life/e/martijn-mensink-on-the-magic-of-dutch-dutch-active-speakers-cardioid-dispersion-acoustics/ "Dutch & Dutch 8c Active Loudspeakers Reviewed" by Diego Estan: https://www.soundstagehifi.com/index.php/equipment-reviews/1270-dutch-dutch-8c-active-loudspeakers "Intro to Dutch & Dutch and the 8c Active Loudspeaker - SoundStage! InSight": https://www.youtube.com/watch?v=n2R9lEJH4Og&ab_channel=soundstagenetwork "Dutch & Dutch 8c Active Loudspeaker in Detail - SoundStage! InSight": https://www.youtube.com/watch?v=aW-6R5HP8SE&ab_channel=soundstagenetwork Chapters: 00:00:00 Announcement 00:00:31 Introductions 00:21:43 Unexpected lessons 00:26:24 Outro music: “Get it Right” by Water on Mars
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirty-fourth episode of the first season.
Ben and Markisan had a great time talking to Southern New Jersey's superb post-black metal band Deadyellow. The band's devastating record "What Was Left of Them" was a feature album stream on Decibel Magazine's website on 1/29/24 (see Decibel's more extended album review below). The album was Jeff's #1 record of 2024 and Ben's #11. What ensues is a free-flowing and candid discussion with the super cool band members Paul, James, and Jon. We begin with Deadyellow's short but eventful pandemic-era origin story. Along the way we deep dive into the band's creative process and lyrical and conceptual themes. We talked to the guys the day after they parted ways with their longtime bass player, Tim. In the course of the discussion, we learn about the band in a time of great introspection and also creative change and expansion. We learn about the many trials and tribulations the band faced during the creation of "What Was Left of Them." In a word, this is one of the most human and, for us, best dialogues we have ever had with a band. Accordingly, we also decided to include our post-interview reactions. Some good laughs as well! We finish the episode with a fun discussion of our Top 3 Black Metal Albums of All Time! So sit back and enjoy a pint of Golden Monkey, a Belgian-Style Tripel brewed by Philly's own great Victory Brewing Company. “What Was Left of Them is the kind of album best enjoyed when one is fully immersed… The album's first five tracks build to album centerpiece “Fallen Trees,” a stirring 16-minute epic that soars through melodic and emotional highs and lows… melodic and thoughtful in its composition; songs aren't long just for the sake of length and melody is prioritized over ambience. “What Was Left of Them is an album about change,” Deadyellow tell Decibel. “It's about embracing darkness and finding the light within to persevere. ‘Birth from death, the light will find a way.'” - Emily Bellino, Decibel Magazine
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirty-third episode of the first season.
With less than two weeks until the federal election, The Decibel is bringing you another edition of Campaign Call, The Globe's weekly election panel. This week, ahead of the French and English leaders' debates, feature writer Shannon Proudfoot and chief political writer Campbell Clark will explain why debates still matter and what each leader needs to accomplish during them. In the second half, we're joined by Nik Nanos, the chief data scientist of Nanos Research, to get a behind-the-scenes look at the polls – in terms of how the data is gathered and how reliable polls are. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
In June of 2024, I talked with Peter Holsapple about the reissues of two classic albums by the dB's – “Stands for Decibels” and “Repercussion”. I had the pleasure of talking to Peter again recently, this time about his soon-to-be-released solo album, The Face of 68. The song you're hearing, “Larger than Life”, is the first single from that album. It was great to catch up with Peter to hear about his very busy 2024, the recording of the album, touring, his continued work as a member of The Paranoid Style, and some candid, honest talk reflecting on his lifestyle in the past and his outlook on life and love these days. Stay tuned for a second talk with the one and only Peter Holsapple. Photo by Bill Reeves. My first talk with Peter from June 2024 is also available here. Save on Certified Pre-Owned Electronics: Plug has great prices on refurbished electronics. Up to 70% off with a 30-day money-back guarantee! Find or Sell Guitars and Gear at Reverb: Find great deals on guitars, amps, audio and recording gear. Or sell yours! Check out Reverb.com. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Thanks for listening to Frets with DJ Fey. You can follow or subscribe for FREE at most podcast platforms. And now, Frets is available on YouTube. There are a lot of fun extras like videos and shorts and audio of all episodes. Subscribing for FREE at YouTube helps support the show tremendously, so hit that subscribe button! https://www.youtube.com/@DJFey39 You can also find information about guitarists, bands and more at the Frets with DJ Fey Facebook page. Give it a like! And – stay tuned…
The federal election is in two weeks, on April 28 – so The Decibel has invited the leaders from Canada's major parties onto the show to share their vision for the country. And while environmental concerns haven't been top-of-mind in this election … Green Party co-leader Jonathan Pedneault says he isn't just concerned about climate change. Pedneault – who previously served as the party's deputy leader from 2022 to 2024 – is proposing bold policies on a range of issues Canadians are facing, from U.S. President Donald Trump's tariff threats to the high cost of living. The former journalist and human rights investigator, who has spent the better part of the last decade and a half working and living abroad, believes more progressive ideas are needed in this election. But the Greens are lagging in the polls, and Pedneault is running in a Liberal stronghold … So how will they be effective if they don't make it to the House of Commons? Today, Green Party co-leader Jonathan Pedneault joins us from Montreal. Ahead of the leader debates this Thursday, we ask him about his party's daring proposals, what the Greens are offering Canadians, and if he's returning to Canadian politics for good. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
We're halfway through the election period and it's time for Campaign Call, The Decibel's weekly election panel making sense of the major issues. This week, the embers of western separatism were stoked by an opinion piece by Preston Manning published in The Globe, arguing that national unity is on the ballot. We explore the threats of regionalism amidst the surge of pro-Canadian sentiment across the country. Plus, we'll look into how the major parties are making their pitch to win over a crucial voting demographic – seniors. Feature writer Shannon Proudfoot, Alberta politics reporter Carrie Tait, columnist Konrad Yakabuski based in Montreal and Meera Raman, retirement and financial planning reporter, discuss the big stories with host Menaka Raman-Wilms. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirty-second episode of the first season.
We're a few weeks into a federal election that is currently too close to call. And while most Canadians are wondering who our next Prime Minister will be, my guests today are preoccupied with a different question: will this election be free and fair? In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy”. Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election. To try and get a sense of what we're up against, I wanted to get two different perspectives on this. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation. Mentioned: “Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions,” by the Honourable Marie-Josée Hogue; "A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops,” by the American Sunlight Project. Further Reading: “Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate,” by Victor Goury-Laffont (Politico); “2025 Federal Election Monitoring and Response,” by the Canadian Digital Media Research Network; “Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney,” by Steven Chase (Globe & Mail); “The revelations and events that led to the foreign-interference inquiry,” by Steven Chase and Robert Fife (Globe & Mail); “Foreign interference inquiry finds ‘problematic' conduct,” by The Decibel
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirty-first episode of the first season.
On April 1 at 19:00 in the JVLMA Great Hall, as part of the 11th JVLMA contemporary music festival ''deciBels'', the concert "Melancholy" will feature pianist Herta Hansena, cellist Ēriks Kiršfelds and violinist Sandis Šteinbergs. The programme includes works for cello and piano by the great masters of the European and American avant-garde, Iannis Xenakis, Jonathan Harvey and Salvatore Sciarrino, as well as our own modern classics, Maija Einfelde and Pauls Dambis. A bridge will also be thrown to the youngest generation of composers: a JVLMA composition student has written a new work for the trio. A special colour will be added to the concert by the electroacoustic compositions of Ēriks Ešenvalds and Dimitris Maronidis, contemplating matter in the process of crystal division and the turning points of a human life. Herta Hansena and Ēriks Kiršfelds tell "Klasika" more about this intriguing concert. They share their views on the importance of the "deciBels" festival in the development of contemporary music and its role in music education, recount the history of the trio's collaboration, and outline the concept of the "Melancholy" concert, its musical programme, and what is special about each piece in it.
To unpack some of the most topical questions in AI, I'm joined by two fellow AI podcasters: Swyx and Alessio Fanelli, co-hosts of the Latent Space podcast. We've been wanting to do a cross-over episode for a while and finally made it happen. Swyx brings deep experience from his time at AWS, Temporal, and Airbyte, and is now focused on AI agents and dev tools. Alessio is an investor at Decibel, where he's been backing early technical teams pushing the boundaries of infrastructure and applied AI. Together they run Latent Space, a technical newsletter and podcast by and for AI engineers. To subscribe or learn more about Latent Space, click here: https://www.latent.space/ [0:00] Intro [1:08] Reflecting on AI Surprises of the Past Year [2:24] Open Source Models and Their Adoption [6:48] The Rise of GPT Wrappers [7:49] Challenges in AI Model Training [10:33] Over-hyped and Under-hyped AI Trends [24:00] The Future of AI Product Market Fit [30:27] Google's Momentum and Customer Support Insights [33:16] Emerging AI Applications and Market Trends [35:13] Challenges and Opportunities in AI Development [39:02] Defensibility in AI Applications [42:42] Infrastructure and Security in AI [50:04] Future of AI and Unanswered Questions [55:34] Quickfire With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
Welcome to The Decibel's inaugural election panel! Each week, we're going to focus on a major theme from the week's campaign, and provide some analysis about what's happening. Then, we're going to unpack specific policy promises from the big parties to help you decide how to vote. We'll end by answering your questions. So here's a reminder to send us an e-mail or voice note with your questions about the campaign. This week we look at how all of the candidates are trying to campaign on the idea that they are the change Canada needs, and then we'll break down the duelling tax cuts from the Conservatives, the Liberals and the NDP. For our first panel today, we've got Ottawa-based feature writer Shannon Proudfoot, columnist Robyn Urback and economics reporter Nojoud Al Mallees. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
The combination of contemporary music and contemporary circus is at the heart of the collaboration between the Music Academy and Riga Circus. On Kultūras rondo we question circus director Māra Pāvula, the artistic director of the contemporary music festival "deciBels", composer Anna Fišere, and festival representative Pauls Prauliņš about the plans. The contemporary music festival "deciBels" of the Jāzeps Vītols Latvian Academy of Music (JVLMA), running from March 31 to April 4, will premiere thirteen new works by Latvian composers, commissioned especially for "deciBels", the festival's organizers report. The organizers note that the biggest challenge for the young composers may be the concert "New Sound Worlds: DIY Percussion Ensemble", taking place on April 3 at 19:00 in the renovated arena of Riga Circus. For this concert, students of the JVLMA Composition Department have not only written new works but also built unique, unconventional musical instruments out of various household objects and materials.
Presented by Vivy Horny, Alex, Ace Ace Baby, Papy Regis & Docteur Geek. The show Rock'n'm'Hell is dead and buried. But like the phoenix rising from its ashes, its former hosts have created Dark Decibels, where they talk about extreme music while also covering the 'goth' scene, with darkwave and post-punk, for example. Vivi Horny and Alex have recruited several sidekicks: a history teacher who will talk about metal in History and vice versa, a 'Professor Geek' who will cover metal in pop culture, another specialist focused more on black metal culture, and other contributors from various backgrounds for a few reviews... A beefed-up team for an explosive new format! Here is the thirtieth episode of the first season.
While everyone is now repeating that 2025 is the “Year of the Agent”, OpenAI is heads down building towards it. In the first 2 months of the year they released Operator and Deep Research (arguably the most successful agent archetype so far), and today they are bringing a lot of those capabilities to the API:

* Responses API
* Web Search Tool
* Computer Use Tool
* File Search Tool
* A new open source Agents SDK with integrated observability tools

We cover all this and more in today's lightning pod on YouTube! More details here:

Responses API

In our Michelle Pokrass episode we talked about the Assistants API needing a redesign. Today OpenAI is launching the Responses API, “a more flexible foundation for developers building agentic applications”. It's a superset of the chat completions API, and the suggested starting point for developers working with OpenAI models. One of the big upgrades is the new set of built-in tools for the Responses API: Web Search, Computer Use, and File Search.

Web Search Tool

We previously had Exa AI on the podcast to talk about web search for AI. OpenAI is now also joining the race: the Web Search API is actually a new “model” that exposes two 4o fine-tunes, gpt-4o-search-preview and gpt-4o-mini-search-preview. These are the same models that power ChatGPT Search, and are priced at $30/1,000 queries and $25/1,000 queries respectively. The killer feature is inline citations: you not only get a link to a page, but also a deep link to exactly where your query was answered on the result page.

Computer Use Tool

The model that powers Operator, called Computer-Using Agent (CUA), is also now available in the API. The computer-use-preview model is SOTA on most benchmarks, achieving 38.1% success on OSWorld for full computer use tasks, 58.1% on WebArena, and 87% on WebVoyager for web-based interactions. As you will notice in the docs, `computer-use-preview` is both a model and a tool through which you can specify the environment.
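Since `computer-use-preview` is exposed both as a model and as a tool, a request has to name it in both places. Below is a minimal sketch of what such a Responses API request body might look like; the tool parameter names (`display_width`, `display_height`, `environment`) and `truncation` setting are assumptions to verify against the official API reference, not confirmed by this post.

```python
def build_computer_use_request(task: str) -> dict:
    """Sketch of a Responses API request enabling the computer use tool.

    Field names follow the launch description above; check them against
    the official API reference before relying on them.
    """
    return {
        # computer-use-preview is the model...
        "model": "computer-use-preview",
        # ...and also a tool, through which the environment is specified.
        "tools": [{
            "type": "computer_use_preview",
            "display_width": 1024,    # assumed screen dimensions
            "display_height": 768,
            "environment": "browser",  # assumed environment name
        }],
        "input": task,
        "truncation": "auto",
    }

request = build_computer_use_request("Open example.com and read the headline")
```

The payload is built as a plain dict here rather than sent, so the shape can be inspected without an API key.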
Usage is priced at $3/1M input tokens and $12/1M output tokens, and it's currently only available to users in tiers 3-5.

File Search Tool

File Search was also available in the Assistants API, and it's now coming to Responses too. OpenAI is bringing search + RAG all under one umbrella, and we'll definitely see more people trying to find new ways to build all-in-one apps on OpenAI. Usage is priced at $2.50 per thousand queries and file storage at $0.10/GB/day, with the first GB free.

Agents SDK: Swarms++!

https://github.com/openai/openai-agents-python

To bring it all together, after the viral reception to Swarm, OpenAI is releasing an officially supported agents framework (which was previewed at our AI Engineer Summit) with 4 core pieces:

* Agents: easily configurable LLMs with clear instructions and built-in tools.
* Handoffs: intelligently transfer control between agents.
* Guardrails: configurable safety checks for input and output validation.
* Tracing & Observability: visualize agent execution traces to debug and optimize performance.

Multi-agent workflows are here to stay! OpenAI now explicitly designs for a set of common agentic patterns: Workflows, Handoffs, Agents-as-Tools, LLM-as-a-Judge, Parallelization, and Guardrails. OpenAI previewed this in part 2 of their talk at NYC.

Further coverage of the launch from Kevin Weil, WSJ, and OpenAI Devs; AMA here.

Show Notes

* Assistants API
* Swarm (OpenAI)
* Fine-Tuning in AI
* 2024 OpenAI DevDay Recap with Romain
* Michelle Pokrass episode (API lead)

Timestamps

* 00:00 Intros
* 02:31 Responses API
* 08:34 Web Search API
* 17:14 File Search API
* 18:46 Files API vs RAG
* 20:06 Computer Use / Operator API
* 22:30 Agents SDK

And of course you can catch up with the full livestream here.

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome back to another Latent Space Lightning episode.
This is Alessio, partner and CTO at Decibel, and I'm joined by Swyx, founder of Smol AI.

swyx [00:00:11]: Hi, and today we have a super special episode because we're talking with our old friend Romain. Hi, welcome.

Romain [00:00:19]: Thank you. Thank you for having me.

swyx [00:00:20]: And Nikunj, who is most famously, if anyone has ever tried to get any access to anything on the API, Nikunj is the guy. So I know your emails because I look forward to them.

Nikunj [00:00:30]: Yeah, nice to meet all of you.

swyx [00:00:32]: I think that we're basically convening today to talk about the new API. So perhaps you guys want to just kick off. What is OpenAI launching today?

Nikunj [00:00:40]: Yeah, so I can kick it off. We're launching a bunch of new things today. We're going to do three new built-in tools. So we're launching the web search tool. This is basically ChatGPT search, but available in the API. We're launching an improved file search tool. So this is you bringing your data to OpenAI. You upload it. We, you know, take care of parsing it, chunking it, embedding it, making it searchable, giving you this like ready vector store that you can use. So that's the file search tool. And then we're also launching our computer use tool. So this is the tool behind the Operator product in ChatGPT. So that's coming to developers today. And to support all of these tools, we're going to have a new API. So, you know, we launched chat completions, like I think March 2023 or so. It's been a while. So we're looking for an update over here to support all the new things that the models can do. And so we're launching this new API. It works with tools, and we think it'll be like a great option for all the future agentic products that we build. And so that is also launching today. Actually, the last thing we're launching is the agents SDK.
We launched this thing called Swarm last year where, you know, it was an experimental SDK for people to do multi-agent orchestration and stuff like that. It was supposed to be like educational, experimental, but people really loved it. They like ate it up. And so we were like, all right, let's upgrade this thing. Let's give it a new name. And so we're calling it the agents SDK. It's going to have built-in tracing in the OpenAI dashboard. So lots of cool stuff going out. So, yeah.

Romain [00:02:14]: That's a lot, but we said 2025 was the year of agents. So there you have it, like a lot of new tools to build these agents for developers.

swyx [00:02:20]: Okay. I guess we'll just kind of go one by one and we'll leave the agents SDK towards the end. So, responses API. I think the sort of primary concern that people have, and something I think I voiced to you guys when I was talking with you in the planning process, was: is chat completions going away? So I just wanted to let you guys respond to the concerns that people might have.

Romain [00:02:41]: Chat completions is definitely like here to stay, you know, it's a bare metal API we've had for quite some time. Lots of tools built around it. So we want to make sure that it's maintained and people can confidently keep on building on it. At the same time, it was kind of optimized for a different world, right? It was optimized for a pre-multi-modality world. We also optimized for kind of single-turn interactions: it takes a prompt in, it gives a response out. And now with these agentic workflows, we noticed that like developers and companies want to build longer-horizon tasks, you know, like things that require multiple turns to get the task accomplished. And computer use is one of those, for instance. And so that's why the responses API came to life, to kind of support these new agentic workflows.
But chat completions is definitely here to stay.

swyx [00:03:27]: And the assistants API has a target sunset date of the first half of 2026. So this is kind of like, in my mind, a very poetic mirroring of the API with the models. I kind of view this as the merging of the assistants API and chat completions, right, into one unified responses API. So it's kind of like how GPT and the o-series models are also unifying.

Romain [00:03:48]: Yeah, that's exactly the right framing, right? Like, I think we took the best of what we learned from the assistants API, especially like being able to access tools very conveniently, but at the same time, like, simplifying the way you have to integrate: you no longer have to think about six different objects to kind of get access to these tools with the responses API. You just get one API request and suddenly you can weave in those tools, right?

Nikunj [00:04:12]: Yeah, absolutely. And I think we're going to make it really easy and straightforward for assistants API users to migrate over to responses, right, to the API, without any loss of functionality or data. So our plan is absolutely to add, you know, assistant-like objects and thread-like objects that work really well with the responses API. We'll also add like the code interpreter tool, which is not launching today, but it'll come soon. And we'll add async mode to the responses API, because that's another difference with assistants. It will have webhooks and stuff like that, but I think it's going to be like a pretty smooth transition once we have all of that in place. And we'll give like a full year to migrate, and help them through any issues they face. So overall, I feel like assistants users are really going to benefit from this longer term, with this more flexible primitive.

Alessio [00:05:01]: How should people think about when to use each type of API?
So I know that in the past, the assistants API was maybe more stateful, kind of like long-running, many-tool-use, kind of like file-based things. And chat completions is more stateless, you know, kind of like a traditional completion API. Is that still the mental model that people should have?

Nikunj [00:05:20]: So the responses API is, at launch, going to support everything that chat completions supports, and then over time, it's going to support everything that assistants supports. So it's going to be a pretty good fit for anyone starting out with OpenAI. They should be able to like go to responses. Responses, by the way, also has a stateless mode, so you can pass in store: false and that'll make the whole API stateless, just like chat completions. We're really trying to like get this unification story in, so that people don't have to juggle multiple endpoints. That being said, like, chat completions is just the most widely adopted API, it's so popular. So we're still going to like support it for years with like new models and features. But if you're a new user, or if you're an existing user who wants to tap into some of these like built-in tools or something, you should feel totally fine migrating to responses, and you'll have more capabilities and performance than chat completions.

swyx [00:06:16]: I think the messaging that resonated the most when I talked to you was that it is a strict superset, right? Like you should be able to do everything that you could do in chat completions and with assistants. And the thing that I just assumed, because you're now, you know, by default stateful, you're actually storing the chat logs or the chat state, I thought you'd be charging me for it. So, you know, to me, it was very surprising that you figured out how to make it free.

Nikunj [00:06:43]: Yeah, it's free. We store your state for 30 days.
You can turn it off. But yeah, it's free. And the interesting thing on state is that it just makes, particularly for me, debugging things and building things so much simpler. I can create a responses object that's pretty complicated, as part of this more complex application that I've built, and I can just go into my dashboard and see exactly what happened: did I mess up my prompt, did it not call one of these tools, did I misconfigure one of the tools? The visual observability of everything that you're doing is so, so helpful. So I'm excited about people trying that out and getting benefits from it, too.swyx [00:07:19]: Yeah, it's really, I think, a really nice thing to have. But all I'll say is that my friend Corey Quinn says that anything that can be used as a database will be used as a database. So be prepared for some abuse.Romain [00:07:34]: All right. Yeah, that's a good one. I've seen some of that with the metadata. Some people are very, very creative at stuffing data into an object. Yeah.Nikunj [00:07:44]: And we do have metadata with responses. Exactly. Yeah.Alessio [00:07:48]: Let's get through all of these. So, web search. When I first saw web search, I thought you were going to just expose an API that then returns kind of like a nice list of things. But the way it's named is like GPT-4o-search-preview. So I'm guessing you're using basically the same model that is in ChatGPT search, which is fine-tuned for search. I'm guessing it's a different model than the base one. And it's impressive, the jump in performance. So just to give an example, on SimpleQA, GPT-4o is 38% accuracy and 4o-search is 90%. But we always talk about how models are not everything you need; the tools around them are just as important. 
So, yeah, maybe give people a quick overview of the work that went into making this special.Nikunj [00:08:29]: Should I take that?Alessio [00:08:29]: Yeah, go for it.Nikunj [00:08:30]: So firstly, we're launching web search in two ways. One is in the responses API, which is our API for tools: it's going to be available as a web search tool itself. So you'll be able to go tools, turn on web search, and you're ready to go. We still wanted to give chat completions people access to real-time information. So in the chat completions API, which does not support built-in tools, we're launching direct access to the fine-tuned model that ChatGPT search uses, and we call it GPT-4o-search-preview. And how is this model built? Basically, our search research team has been working on this for a while. Their main goal is to get a bunch of information from all of the data sources that we use to gather information for search, then pick the right things, and then cite them as accurately as possible. And that's what the search team has really focused on. They've done some pretty cool stuff. They use synthetic data techniques. They've done o-series model distillation to make these 4o fine-tunes really good. But yeah, the main thing is: can it remain factual? Can it answer questions based on what it retrieves, and cite accurately? That's what this fine-tuned model really excels at. And so, yeah, we're excited that it's going to be directly available in chat completions along with being available as a tool. Yeah.Alessio [00:09:49]: Just to clarify, if I'm using the responses API, this is a tool. But if I'm using chat completions, I have to switch model. I cannot use o1 and call search as a tool. Yeah, that's right. 
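The two launch paths for web search that Nikunj describes can be sketched side by side. Tool and model names follow the launch announcement; treat them as assumptions if the API has since changed:

```python
# 1) Responses API: web search is a built-in tool on an ordinary model.
responses_request = {
    "model": "gpt-4o",
    "input": "What changed in the latest OpenAI developer launch?",
    "tools": [{"type": "web_search_preview"}],
}

# 2) Chat Completions API: no built-in tools, so you switch to the
#    search fine-tune directly (the same model ChatGPT search uses).
chat_request = {
    "model": "gpt-4o-search-preview",
    "messages": [
        {"role": "user", "content": "What changed in the latest OpenAI developer launch?"}
    ],
}
```

This is the distinction Alessio confirms above: in responses it's a tool you toggle on, while in chat completions it's a model swap, so you can't pair it with o1.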
Exactly.Romain [00:09:58]: I think what's really compelling, at least for me and my own uses of it so far, is that when you use web search as a tool, it combines nicely with every other tool and every other feature of the platform. So think about this for a second. For instance, imagine you have a responses API call with the web search tool, but then you turn on function calling. You also turn on, let's say, structured outputs. So you have the ability to structure any data from the web, in real time, in the JSON schema that you need for your application. It's quite powerful when you start combining those features and tools together. It's kind of like an API for the Internet, almost; you get access to the precise schema you need for your app. Yeah.Alessio [00:10:39]: And then just to wrap up on the infrastructure side of it, I read in the post that publishers can choose to appear in the web search. So are people in it by default? Like, how can we get Latent Space into the web search API?Nikunj [00:10:53]: Yeah. Yeah. I think we have some documentation around how websites and publishers can control what shows up in the web search tool, and you should be able to read that. I think we should be able to get Latent Space in for sure. Yeah.swyx [00:11:10]: You know, I think so. I compare this to a broader trend that I started covering last year of online LLMs. Perplexity, I think, was the first to offer an API that is connected to search, and then Gemini had the sort of search grounding API. And I actually missed this in the original reading of the docs, but you even give citations with the exact sub-paragraph that is matching, which I think is the standard nowadays. I think my question is, how do we think about what a knowledge cutoff is for something like this, right? 
Because now, basically, there's no knowledge cutoff; it's always live. But then there's a difference between what the model has sort of internalized in its backpropagation and what it's searching up in its RAG.Romain [00:11:53]: I think it kind of depends on the use case, right? And what you want to showcase as the source. For instance, take a company like Hebbia that has used this web search tool. For credit firms or law firms, they can combine public information from the internet with the live sources and citations that sometimes you do want to have access to, as opposed to the internal knowledge. But if you're building something different, well, you just want to have the information. If you want an assistant that relies on the deep knowledge the model has, you may not need these direct citations. So I think it kind of depends on the use case a little bit, but there are many companies like Hebbia that will need access to these citations to precisely know where the information comes from.swyx [00:12:34]: Yeah, yeah, for sure. And then one thing on the breadth: a lot of the open deep research implementations have this sort of hyperparameter about how deep they're searching and how wide they're searching. I don't see that in the docs. Is that something that we can tune? Is that something you recommend thinking about?Nikunj [00:12:53]: Super interesting. It's definitely not a parameter today, but we should explore that. It's very interesting. I imagine how you would do it with the web search tool and responses API is you would have some form of agent orchestration over here, where you have a planning step, and then each web search call that you do explicitly goes a layer deeper and deeper and deeper. But it's not a parameter that's available out of the box. 
It's a cool thing to think about. Yeah.swyx [00:13:19]: The only guidance I'll offer there is that a lot of these implementations offer top-k, which is like, you know, top 10, top 20, but you actually don't really want that. You want sort of some kind of similarity cutoff, right? Like some matching-score cutoff, because if there are only five documents that match, fine; if there are 500 that match, maybe that's what I want. Right. Yeah. But also that might make my costs very unpredictable, because the costs are something like $30 per thousand queries, right? So yeah. Yeah.Nikunj [00:13:49]: I guess you could have some form of a context budget, and then you're like, go as deep as you can, pick the best stuff, and put it into X number of tokens. There could be some creative ways of managing cost, but yeah, that's a super interesting thing to explore.Alessio [00:14:05]: Do you see people using the files and the search API together, where you can kind of search and then store everything in the files, so the next time I'm not paying for the search again? And, like, how should people balance that?Nikunj [00:14:17]: That's actually a very interesting question. A really cool way I've seen people use files and search together is they put their user preferences or memories in the vector store. So a query comes in, you use the file search tool to get someone's reading preferences or fashion preferences and stuff like that, then you search the web for information or products that they can buy related to those preferences, and you then render something beautiful to show them: like, here are five things that you might be interested in. So that's how I've seen file search and web search work together. And by the way, that's a single responses API call, which is really cool. So you just configure these things, go boom, and everything just happens. 
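The single combined call Nikunj describes might look something like this. The vector store ID is a placeholder, and the tool type names are assumptions based on the launch-era docs:

```python
# One hypothetical Responses API call combining file search (stored user
# preferences/memories) with web search, as in Nikunj's example above.
request = {
    "model": "gpt-4o",
    "input": "Recommend five products this user might like, with links.",
    "tools": [
        # Pull the user's stored preferences out of a vector store...
        {"type": "file_search", "vector_store_ids": ["vs_user_prefs_123"]},
        # ...then search the open web for current, matching products.
        {"type": "web_search_preview"},
    ],
}
```

The point is that both retrieval steps happen inside a single `responses.create` call; the model decides when to consult the vector store and when to go to the web.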
But yeah, that's how I've seen files and web work together.Romain [00:15:01]: But I think what you're pointing out is interesting, and I'm sure developers will surprise us, as they always do, in terms of how they combine these tools and how they might use file search as a way to have memory and preferences, like Nikunj says. But zooming out, what I find very compelling and powerful here is when you have these neural networks that have all of the knowledge that they have today, plus real-time access to the Internet for any kind of real-time information you might need for your app, plus file search, where you can have a lot of company-private documents and private details. You combine those three, and you have very, very compelling and precise answers for any kind of use case that your company or your product might want to enable.swyx [00:15:41]: It's the difference between sort of internal documents versus the open web, right? Like, you're going to need both. Exactly, exactly. I never thought about it doing memory as well. I guess, again, anything that's a database, you will use as a database. That sounds awesome. But I think also you've been expanding file search. You have more file types. You have query optimization, custom re-ranking. So it really seems like it's been fleshed out. Obviously, I haven't been paying a ton of attention to the file search capability, but it sounds like your team has added a lot of features.Nikunj [00:16:14]: Yeah, metadata filtering was the main thing people were asking us for for a while, and I'm super excited about it. I mean, it's just so critical once your vector store size goes over, you know, more than like 5,000, 10,000 records; you kind of need that. 
So, yeah, metadata filtering is coming, too.Romain [00:16:31]: And for most companies, it's also not a competency that you want to rebuild in-house necessarily, you know? Thinking about embeddings and chunking and all of that sounds very complex for something very obvious to ship for your users. Take companies like Navan, for instance. They were able to build with file search: you take all of the FAQs and travel policies, for instance, that you have, you put that in the file search tool, and then you don't have to think about anything. Now your assistant becomes naturally much more aware of all of these policies from the files.swyx [00:17:03]: The question is, there's a very, very vibrant RAG industry already, as you well know. So there are many other vector databases, many other frameworks. Probably, if it's an open-source stack, I would say a lot of the AI engineers that I talk to want to own this part of the stack. And it feels like, you know, when should we DIY and when should we just use whatever OpenAI offers?Nikunj [00:17:24]: Yeah. I mean, if you're doing something completely from scratch, you're going to have more control, right? So I'm super supportive of people trying to roll up their sleeves and build their super-custom chunking strategy and super-custom retrieval strategy and all of that. And those are things that will be harder to do with OpenAI tools. With the OpenAI tool, we have an out-of-the-box solution. We give you the tools. We give you some knobs to customize things, but it's more of a managed RAG service. So my recommendation would be: start with the OpenAI thing, see if it meets your needs. And over time, we're going to be adding more and more knobs to make it even more customizable. 
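One of those knobs, the metadata filtering Nikunj mentions, might be configured like this. The filter syntax here is an assumption based on the file search docs at the time (and the vector store ID is a placeholder), so verify against the current API reference before using it:

```python
# Sketch of the metadata-filtering knob on the file search tool.
request = {
    "model": "gpt-4o",
    "input": "What is the travel policy for contractors?",
    "tools": [{
        "type": "file_search",
        "vector_store_ids": ["vs_policies_123"],  # placeholder ID
        # Restrict retrieval to records whose metadata matches both conditions,
        # which matters once a store grows past a few thousand records.
        "filters": {
            "type": "and",
            "filters": [
                {"type": "eq", "key": "doc_type", "value": "policy"},
                {"type": "eq", "key": "region", "value": "us"},
            ],
        },
    }],
}
```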
But, you know, if you want the completely custom thing, if you want control over every single thing, then you'd probably want to go and hand-roll it using other solutions. So we're supportive of both; engineers should pick. Yeah.Alessio [00:18:16]: And then we got computer use, which, with Operator, was obviously one of the hot releases of the year. And we're only two months in. Let's talk about that. And that's also, it seems, a separate model that has been fine-tuned for Operator, that has browser access.Nikunj [00:18:31]: Yeah, absolutely. I mean, the computer use models are exciting. The cool thing about computer use is that we're just so, so early. It's like the GPT-2 of computer use, or maybe the GPT-1 of computer use, right now. But it is a separate model that the computer use team has been working on. You send it screenshots and it tells you what action to take. So the outputs of it are almost always tool calls, and you're inputting screenshots based on whatever computer you're trying to operate.Romain [00:19:01]: Maybe zooming out for a second, because I'm sure your audience is super, super AI-native, obviously. But what is computer use as a tool, right? And what's Operator? So the idea for computer use is: how do we let developers also build agents that can complete tasks for the users, but using a computer or a browser instead? And so how do you get that done? That's why we have this custom model, optimized for computer use, that we use for Operator ourselves. But the idea behind putting it out as an API is that, imagine now you want to automate some tasks for your product or your own customers. Now you can have the ability to spin up one of these agents that will look at the screen and act on the screen. So that means the ability to click, the ability to scroll, the ability to type, and to report back on the action. 
So that's what we mean by computer use, and wrapping it as a tool also in the responses API. So now that gives a hint at the multi-turn thing that we were hinting at earlier: the idea that, yeah, maybe one of these actions can take a couple of minutes to complete, because there are maybe 20 steps to complete that task. But now you can.swyx [00:20:08]: Do you think computer use can play Pokemon?Romain [00:20:11]: Oh, interesting. I guess we should try it. You know?swyx [00:20:17]: Yeah. There's a lot of interest. I think Pokemon really is a good agent benchmark, to be honest. It seems like Claude is running into a lot of trouble.Romain [00:20:25]: Sounds like we should make that a new eval, it looks like.swyx [00:20:28]: Yeah. Yeah. Oh, and then one more thing before we move on to the Agents SDK. I know you have a hard stop. There are all these blah-blah-dash-preview models, right? Like search-preview, computer-use-preview. And you see them all as fine-tunes of 4o. I think the question is, are they all going to be merged into the main branch, or are we basically always going to have subsets of these models?Nikunj [00:20:49]: Yeah, I think in the early days, research teams at OpenAI operate with fine-tuned models. And then, once the thing gets more stable, we sort of merge it into the mainline. So that's definitely the vision: going out of preview as we get more comfortable with and learn about all the developer use cases, and once we're doing a good job at them, we'll make them part of the core models so that you don't have to deal with the bifurcation.Romain [00:21:12]: You should think of it exactly the way it happened last year when we introduced vision capabilities, you know. Yes. Vision capabilities were in a vision-preview model based off of GPT-4, and then vision capabilities are now obviously built into GPT-4o. 
You can think about it the same way for the other modalities, like audio, and for those kinds of models optimized for search and computer use.swyx [00:21:34]: Agents SDK. We have a few minutes left, so let's just assume that everyone has looked at Swarm. Sure. I think that Swarm really popularized the handoff technique, which I thought was really, really interesting for sort of multi-agent work. What is new with the SDK?Nikunj [00:21:50]: Yeah. Do you want to start? Yeah, for sure. So we've basically added support for types. We've added support for guardrails, which is a very common pattern. So in the guardrail example, you basically have two things happen in parallel, and the guardrail can sort of block the execution. It's a type of optimistic generation that happens. And we've added support for tracing, which I think is really cool. So you can basically look at the traces that the Agents SDK creates in the OpenAI dashboard. We also made this pretty flexible. So you can pick any API from any provider that supports the chat completions API format. It supports responses by default, but you can easily plug it in to anyone that uses the chat completions API. And similarly, on the tracing side, you can support multiple tracing providers. By default, it points to the OpenAI dashboard, but there are so many tracing providers and tracing companies out there, and we'll announce some partnerships on that front, too. So just, you know, adding lots of core features and making it more usable, but still centered around handoffs as the main, main concept.Romain [00:22:59]: And by the way, it's interesting, right? Because Swarm just came to life out of learning from customers directly that orchestrating agents in production was pretty hard. You know, simple ideas could quickly turn very complex. 
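The guardrail-plus-handoff control flow being described can be sketched in plain Python, just to make the shape visible. This is not the Agents SDK's actual API: the real SDK runs the guardrail in parallel with generation (the "optimistic generation" Nikunj mentions) and manages handoffs between real model-backed agents; this sequential toy version only illustrates the pattern:

```python
def guardrail(user_input: str) -> bool:
    """Return True if the request is allowed to proceed (toy check)."""
    return "password" not in user_input.lower()

def triage(user_input: str) -> str:
    """Triage agent: route to a specialist based on a crude intent check."""
    if "refund" in user_input.lower():
        return "refunds_agent"
    return "support_agent"

def run(user_input: str) -> str:
    if not guardrail(user_input):   # the guardrail can block execution
        return "blocked"
    return triage(user_input)       # handoff: triage picks the specialist

print(run("I want a refund for my order"))  # refunds_agent
```

The SDK's value is exactly that you don't hand-write this plumbing: you declare agents, guardrails, and handoffs, and the runner orchestrates them and emits traces.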
Like, what are those guardrails? What are those handoffs, et cetera? So that came out of learning from customers. And it was initially shipped as a, like, low-key experiment, I'd say. But we were kind of taken by surprise at how much momentum there was around this concept. And so we decided to learn from that and embrace it, to be like, okay, maybe we should just embrace that as a core primitive of the OpenAI platform. And that's kind of what led to the Agents SDK. And I think now, as Nikunj mentioned, it's adding all of these new capabilities to it, leveraging the handoffs that we had, but tracing also. And I think what's very compelling for developers is that instead of having one agent to rule them all, where you stuff a lot of tool calls in there that can be hard to monitor, now you have the tools you need to separate the logic, right? You can have a triage agent that, based on an intent, goes to different kinds of agents. And then, on the OpenAI dashboard, we're releasing a lot of new user interfaces as well, so you can see all of the tracing UIs. Essentially, you'll be able to troubleshoot exactly what happened in that workflow: when the triage agent did a handoff to a secondary agent, and a third, and see the tool calls, et cetera. So we think that the Agents SDK combined with the tracing UIs will definitely help users and developers build better agentic workflows.Alessio [00:24:28]: And just before we wrap, are you thinking of connecting this with the RFT API as well? Because I know you already kind of store my completions, and then I can do fine-tuning off of that. Is that going to be similar for agents, where you're storing kind of like my traces and then helping me improve the agents?Nikunj [00:24:43]: Yeah, absolutely. Like, you've got to tie the traces to the evals product so that you can generate good evals. 
Once you have good evals and graders and tasks, you can use that to do reinforcement fine-tuning. And, you know, there are lots of details to be figured out over here, but that's the vision. And I think we're going to go after it pretty hard, and hopefully we can make this whole workflow a lot easier for developers.Alessio [00:25:05]: Awesome. Thank you so much for the time. I'm sure you'll be busy on Twitter tomorrow with all the developer feedback. Yeah.Romain [00:25:12]: Thank you so much for having us. And as always, we can't wait to see what developers will build with these tools, and how we can learn as quickly as we can from them to make them even better over time.Nikunj [00:25:21]: Yeah.Romain [00:25:22]: Thank you, guys.Nikunj [00:25:23]: Thank you.Romain [00:25:23]: Thank you both. Awesome. Get full access to Latent.Space at www.latent.space/subscribe
After a little more than two days, U.S. President Donald Trump paused the 25 per cent tariffs on Canada and Mexico until April 2. It's not just the tariff whiplash that's causing anxiety – since Trump took office, he's alienated allies, moved closer to traditional rivals, and hinted at a new age of U.S. imperialism.Doug Saunders is the international affairs columnist for the Globe. He joins the Decibel to talk about how the world as we know it has changed since Trump took office in January, and how countries are adapting to the constantly shifting global order.Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
"Would you be mad at us if we raised more money?"That was the text I got late one night from Will Pearce, co-founder and CEO of Dreadnode, a company we'd funded about nine months prior. I'd gotten to know Will through a cold DM he sent me. The DM had nothing to do with tech or investing, just starting a conversation. At the time, Will was working at NVIDIA as an AI red team lead, putting him at the cutting edge of AI and ML security. As we got to know each other, it was clear he had a large vision for where the puck was going for the broader market, and where his expertise and insights could create the foundation for building the leading company for offensive AI security. I offered to put together a small round to put him and his co-founder, Nick Landers, in business. We filled the round out with several value-added angels and were off to the races to build product and company with indie ideals at the core. And it worked!Within months, they were signing contracts with the top names at leading research labs and hyper-scalers. On several occasions, they were even profitable! And then the VCs noticed. Shortly after our round closed, word started to spread about what the team at Dreadnode was building. Then the emails started. The early answers were nos and not-interesteds, but as demand from customers grew and incredible early potential team members started coming out of the woodwork, these high-class problems were becoming very real problems. The team found themselves turning away customers and potential hires as their demand outstripped their resources. Thus, the late night text. The text led to a conversation that led to a formal fundraising process and ended in the announcement last week of Dreadnode's $14M Series A led by Decibel, a top tier firm focused exclusively on security. In today's video, we unpack the conversation and process that unfolded after that late night text. 
We dig into why Will may have had concerns that I, and indie, would be upset or not supportive of their decision, and learnings from two very green and very technical first-time founders getting sucked into their first proper fundraising process. I hope what comes through in the video is something I've written often — we are not anti-fundraising. But we believe that the best way to improve your odds of building a generational company that can attract world-class customers and investors is to focus on the former; the latter will come. And when they do, you'll be in a position of ultimate optionality that empowers founding teams to pick exactly the partner they want, on the terms they want, and get back to building the company they want. It seems obvious, but it runs so counter to the advice and examples that get celebrated in the startup world. In the case of Dreadnode, they were able to do just that — work with the partner they wanted on the terms they wanted. And, as an added benefit, they were effectively able to skip 2 or 3 interim rounds of funding and go straight to a fully baked Series A. Our goal with today's video was to put some personalities and experiences to that narrative, and I think it comes through here. As always, I hope you enjoy watching it as much as we enjoyed recording it. PS — if you see any of yourself in the Dreadnode founders, don't hesitate to reach out to discuss what you're building and whether indie could be a fit for you too.
The Islamic Republic of Iran is as isolated from the western world as ever. It has no diplomatic relations with Canada, and President Trump recently recommitted to exerting “maximum economic pressure” on the country to force it to abandon its nuclear weapons program and support for terrorism. Western sanctions have contributed to its 32 per cent inflation rate.And yet, as The Globe's Africa Bureau Chief Geoffrey York found on a recent — and rare — reporting trip to the country, ordinary Iranians are pushing for change. More women are defying the strict dress code laws and don't cover their hair in public, despite the violent crackdown on their protests in 2022. Iranian filmmakers are also defying morality laws, screening their films in Iran and submitting them to the international film festival in Cannes.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Scammers are using generative AI technology to create deepfakes, compelling their targets to send large sums of money. And it is not just individuals getting scammed any more – businesses are increasingly being targeted by these look-alikes too.While there are positive applications for generative AI, these digital replicas may mean the need for better regulation.Alexandra Posadski is the Globe's financial and cybersecurity reporter. Alexandra will explain how these scams usually work, how deepfakes are increasingly being used, and what can be done to help protect ourselves against them.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
March 1 marks the official end of the first phase of the ceasefire in Gaza. Phase two remains in doubt, unless all parties can start negotiations or extend the deadline for phase one.Hamida Ghafour is The Globe's Deputy Foreign Editor. She explains what has happened during the last six weeks, how hostage handovers have caused outrage in Israel and what could happen next.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
This Thursday, Ontario voters head to the polls for a provincial election that Premier Doug Ford called more than a year early. The threat of tariffs looms large, overshadowing traditional election issues like health care and affordability.Jeff Gray is The Globe's Ontario politics reporter. He's on the show to talk about how tariffs shaped this election, how the parties are approaching the challenges facing Ontario and how Ford's opponents are dealing with the uphill battle against him.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
Josh Kamdjou is CEO and Founder of Sublime Security. Josh started Sublime after realizing just how easy it was for him to break into companies with phishing emails. He wanted to build a solution that better addressed the tailored environment of each organization such as historical data. Now the company has raised over $80 million from leading VCs such as IVP, Index Ventures, and Decibel. Before Sublime, Josh worked as a DoD hacker for 9 years.In the episode we discuss his emphasis on leveraging the attacker perspective, the fundamental difficulties of email security, his conviction in product-led growth, and more.Website: https://sublime.security/Sponsor: VulnCheck
Since returning to office, the Trump administration has taken aim at diversity, equity, and inclusion (DEI), with major American corporations scrapping their policies and programs in response. But the backlash goes beyond DEI — corporate climate commitments are under attack, too.The progressive policies being rolled back fall under ESG (environmental, social and governance). ESG factors help businesses evaluate their practices related to sustainability and ethics, and help investors decide who to support. But with major political shifts taking place in the U.S., could Canada's ESG boom go bust too?Jeffrey Jones is the Globe's ESG and sustainable finance reporter. He'll explain the rise of ESG, the growing backlash, and whether we could see Canadian companies roll back their own environmental commitments in the coming months.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? Email us at thedecibel@globeandmail.com
As the new deadline for U.S. tariffs approaches, Canadian businesses are trying to suss out whether it's possible for them to diversify their trading partners to help soften the blow if American demand dries up.Chris Wilson-Smith – who writes The Globe's daily Business Brief newsletter – recently looked into how feasible diversification is and found there are some significant barriers. But not all hope is lost.Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cardsQuestions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
In this special weekend edition of The Decibel, two Canadian authors discuss their new memoirs on divorce. Scaachi Koul is a senior writer at Slate, and co-hosts the podcast Scamfluencers, as well as the Netflix show Follow This. Her second book is called Sucker Punch: Essays, and is a collection of essays about her divorce, among many other life-changing events.Haley Mlotek is a writer, editor, and organizer and has been published in the New York Times Magazine, The New Yorker and many other places. Her first book is called No Fault: a Memoir of Romance and Divorce.Want more weekend editions of The Decibel? Email us at thedecibel@globeandmail.comEnter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cards.
CareerCast by the University of Chicago Booth School of Business
Tune in to CareerCast with host Anita Brick as she welcomes Mike Schiller, former corporate executive and author of "High Impact at Low Decibels." In this compelling episode, Mike reveals strategies for making a significant impact in your career without pushy self-promotion or office drama. Discover how to: amplify your influence quietly yet effectively; navigate workplace challenges with grace; and leverage your unique strengths for career advancement. Whether you are an introvert looking to shine or an extrovert seeking to refine your approach, this episode offers invaluable insights for professionals across industries. Learn to create waves without making noise, and transform your career trajectory with subtle yet powerful techniques.
Last week, U.S. President Donald Trump had a 90-minute phone call with Russian President Vladimir Putin about the end of the war in Ukraine. That call ended three years of U.S. isolation of Russia – former President Joe Biden hadn't spoken to Putin since before Russia invaded Ukraine – and caught Ukrainian President Volodymyr Zelensky by surprise. Days later, U.S. and Russian officials met in Saudi Arabia to discuss a plan for the end of the war, once again without Ukraine. Mark MacKinnon is a senior international correspondent for The Globe. Today, he's on the show to talk about how the relationship between the U.S. and Ukraine is deteriorating as the three-year anniversary of the war approaches, and what that signals for Ukraine's future.
Alberta Premier Danielle Smith's government is facing scrutiny after serious allegations were made in a wrongful dismissal lawsuit launched by Alberta Health Services' former CEO, Athana Mentzelopoulos. The lawsuit alleges that government officials interfered with the health system on behalf of private firms. It also claims that Mentzelopoulos was fired because of an internal investigation she launched into how Alberta Health Services' contracts are procured. Carrie Tait, one of The Globe's reporters covering Alberta, broke this story. She explains the allegations made against the government, their ties to for-profit medical companies and what Smith's government has said publicly so far.
Rates of cancer diagnosis and death are climbing worldwide in people under 50, according to the World Health Organization. A report drawing on data from 204 countries between 1990 and 2019 showed that early-onset cancer diagnoses grew 79 per cent, while deaths rose 28 per cent over the same period. We follow the stories of two cancer survivors along with Kelly Grant, The Globe's national health reporter. She'll detail what we know about why younger people are being diagnosed with cancer, the symptoms to look for and why fighting cancer at a young age carries new challenges for Millennials and Gen Xers.
Less than a month into his second term, U.S. President Donald Trump has already threatened to impose tariffs on half a dozen allies and adversaries. This week, he announced incoming universal tariffs on steel and aluminum, along with reciprocal tariffs on a range of foreign imports at "different levels." But when and why did Trump decide that tariffs would be the centrepiece of his plan to redefine America's role in the global trading system? Mark Rendell is The Globe's economics reporter. He'll explain how Trump is using tariffs, their role in achieving his administration's vision for U.S. trade, and whether all of this could actually backfire.
In this podcast episode, Dr. Jonathan H. Westover talks with Mike Schiller about his book, High Impact at Low Decibels: How Anxiety-Filled Introverts (And Others) Can Thrive In The Workplace. Mike Schiller has more than 30 years of experience at Fortune 500 companies, both as a technical expert and as an executive-level leader, and most recently was vice president and chief information security officer at Texas Instruments. He is currently president of Onward Consulting, specializing in information security and audit consulting. Schiller is a graduate of Texas A&M University and has spent more than 20 years working in the IT audit and information security fields. He is an anxiety-filled introvert and enjoys helping others with similar challenges succeed. For more information, please visit https://www.businessexpertpress.com/books/high-impact-at-low-decibels-how-anxiety-filled-introverts-and-others-can-thrive-in-the-workplace Check out all of the podcasts in the HCI Podcast Network!
One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - If you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny, incredibly cracked engineering team — Chai Research. In short order they have:

* Started a chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.
* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m.
* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million dollar salaries, you can tell they're doing pretty well for an 11 person startup:

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMSYS) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners? At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized on retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot). Chai publishes occasional research on how they think about this, including talks at their Palo Alto office. William expands upon this in today's podcast (34 mins in): Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it.
And through evaluation, you can iterate. We can look at benchmarks, and we can talk about the issues with benchmarks, why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it's at this point now where we evaluate between 20 and 50 models, LLMs, every single day. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs. So our team ships, let's just say, minimum 100 LLMs a week that we're able to iterate through. Now, before that moment in time, we might iterate through three a week; there was a time when even doing five a month was a challenge. We changed the feedback loops so it's no longer: let's launch these three models, let's do an A/B test, let's assign different cohorts, let's wait 30 days to see what the day-30 retention is. If you're doing an app, that's A/B testing 101: do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. That's insanely slow. It's just too slow.
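The crowdsourced evaluation loop William describes (submit a model, put it in front of users, rank by ratings) can be sketched in a few lines. This is an illustrative toy, not Chai's actual system; the model names, session counts, and simulated ratings below are all made up:

```python
import random
from collections import defaultdict

def serve_and_rate(models, n_sessions, rate_fn, seed=0):
    """Randomly assign user sessions to candidate models and collect ratings."""
    rng = random.Random(seed)
    ratings = defaultdict(list)
    for _ in range(n_sessions):
        model = rng.choice(models)
        ratings[model].append(rate_fn(model, rng))
    return ratings

def leaderboard(ratings):
    """Rank models by mean user rating, best first."""
    return sorted(
        ((sum(r) / len(r), m) for m, r in ratings.items()),
        reverse=True,
    )

# Simulated user behaviour: "model-b" tends to be rated higher.
def fake_rating(model, rng):
    base = 4.0 if model == "model-b" else 3.0
    return base + rng.uniform(-1, 1)

ratings = serve_and_rate(["model-a", "model-b"], 5000, fake_rating)
for mean, model in leaderboard(ratings):
    print(f"{model}: {mean:.2f}")
```

A real system would want confidence intervals (or an Elo-style pairwise scheme) before trusting a ranking built from noisy ratings; the sketch only shows the shape of the loop.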
And so we were able to get that 30-day feedback loop all the way down to something like three hours.

In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups. William is notably counter-consensus in a lot of his AI product principles:

* No streaming: Chats appear all at once to allow rejection sampling.
* No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature.
* Blending: "Something that we love to do at Chai is blending. The simplest way to think about it is you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, for 50% of the requests you can serve them the smart model, and for 50% of the requests you serve them the funny model." (That's it!)

But chief above all is the recommender system. We also referenced Exa CEO Will Bryk's concept of SuperKnowledge.

Full Video version on YouTube.
Please like and subscribe!

Timestamps

* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.

swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're the founder of Chai AI, but previously, I think you were concurrently also running your fund?

William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.

swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the...
...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. I guess we can just kind of start it off with the origin story of Chai.

William [00:01:19]: Why decide to work on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, for everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So, you know, I dabbled as a professional poker player. And I was able to accumulate, say, $100,000 through playing poker. And at the time, as my friends would go work at companies like Jane Street or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at Jane Street.

swyx [00:02:20]: With 100k base as capital?

William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So I started off, taught myself Python, and machine learning was like the big thing as well.
Machine learning had really... it was the first time, you know, big-time machine learning was being used for image recognition; neural networks come out, you get dropout. And, you know, so this was the big thing that was going on at the time. So I probably spent my first three years out of Cambridge just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you start something and it goes well, you try and hire more people. And the first people that came to mind were the talented people I went to college with. And so I hired some friends. And that went well, and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom, and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15 Oxford- and Cambridge-educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...

swyx [00:04:40]: Your own, all your own money?

William [00:04:41]: Yeah, exactly. It was all the team's own money. We had no customers complaining to us about issues. There were no investors, you know, saying they don't like the risk that we're taking.
We could really run the thing exactly as we wanted it. It's like Susquehanna or like RenTec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time at getting into crypto, and I had a really strong view on crypto: as a gambling device, this is the most fun form of gambling ever invented, super fun. And as a way to evade monetary regulations and banking restrictions, I think it's also absolutely amazing. So it has those two killer use cases. But not so much banking the unbanked; everything else to do with the blockchain and, you know, web 3.0, that didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble, I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were a thing. I think OpenAI hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful, we can't release it to the world, or something. Was it GPT-2? And then I started interacting with, I think, Google had open-sourced some language models.
They weren't necessarily LLMs, but they were... But yeah, exactly. So I was able to play around with them. Nowadays so many people have interacted with ChatGPT, they get it. But the first time you can just talk to a computer and it talks back, it's kind of a special moment, and, you know, everyone who's done that goes, wow, this is how it should be. Right? Rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that, I read the literature, I kind of came across the scaling laws, and I think even four years ago, all the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. OpenAI was still open. And so they'd published a lot of their research. And so you really could be fully informed on the state of AI and where it was going. And so at that point I was confident enough that it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought, what's the most impactful product I can possibly build? And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right? So if you think of a platform like YouTube: instead of it being a Hollywood situation where, if you want to make a TV show, you have to convince Disney to give you the money to produce it, instead anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays you can look at creators like MrBeast or Joe Rogan. They would never have had that opportunity unless it was for this platform. Other ones, like Twitter's a great one, right?
But I would consider Wikipedia to be a platform: instead of the Encyclopaedia Britannica, which is this monolithic thing where you get all the researchers together, you get all the data together and you combine it in this one monolithic source, you have this distributed thing. Anyone can host their content on Wikipedia. Anyone can contribute to it. And for anyone, maybe their contribution is they delete stuff. When I was hearing the kind of Sam Altman and the Muskian perspective of AI, it was a very monolithic thing. It was all about: AI is basically a single thing, which is intelligence. Yeah. Yeah. The more compute, the more intelligent; the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of race: who can get the most data, the most compute and the most researchers, and that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S-curve. So it's not like it just goes off to infinity, right? And the S-curve kind of plateaus around human-level performance. And you can look at all the machine learning that was going on in the 2010s: everything kind of plateaued around human-level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at image recognition, speech recognition. You can look at all of these things: there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go superhuman.
So I thought the most likely thing was that it's not going to be a monolithic thing, like an Encyclopaedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem, because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company with all the data and all the quant researchers and all the algorithms and compute; instead they all specialize. So one will specialize on high-frequency trading. Another will specialize on mid-frequency. Another one will specialize on equity. Another one will specialize. And I thought, that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.

Alessio [00:11:36]: That's kind of the maybe inside-versus-contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas you discarded, that initially you thought about?

William [00:11:58]: So the first thing we built was fundamentally an API. So nowadays people would describe it as agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend, and we would then host this code and execute it. So that's like the developer side of the platform.
On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so first it would pull the popular news. Then it would prompt whatever, I just used some external API, like BERT or GPT-2 or whatever. It was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, these are the top stories. And you could chat with it. Now, four years later, that's like Perplexity or something, right? But back then the models were, first of all, really, really dumb. You know, they had the IQ of a four-year-old. And there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay, let's make another one. And I made a bot which you could talk to about a recipe. So you could say, I'm making eggs, I've got eggs in my fridge, what should I cook? And it'll say, you should make an omelet, right? There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, we have AI, we have this platform, I can create any text-in, text-out sort of agent and put it on the platform. And so we just created stuff night after night. And then to all the coders I knew, I would say, yeah, this is what we're going to do. And I would say to them, look, there's this platform. You can create any chat AI. You should put it on. And, you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm trying to build all these bots and no consumers want to talk to any of them.
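The text-in, text-out contract William describes can be sketched as a plain function the platform hosts and calls once per message. Everything below (the function names and the canned stories) is a hypothetical illustration, not Chai's real API:

```python
# A toy version of the Reddit news bot: a user-submitted function
# mapping an incoming message to a reply, text in and text out.
def news_bot(message: str) -> str:
    # Stand-in for a real news feed the bot would have fetched.
    top_stories = ["Markets rally", "New LLM released"]
    if "news" in message.lower():
        return "Top stories today: " + "; ".join(top_stories)
    return "Ask me about the news!"

# The hosting side: the platform executes the submitted function
# against each user message in a session.
def run_session(bot, messages):
    return [bot(m) for m in messages]

replies = run_session(news_bot, ["hi bot, what's the news today?"])
print(replies[0])
```

The appeal of an interface this narrow is that anything, a prompt to an LLM, a hand-written rule, a retrieval pipeline, can sit behind the same signature, which is what made the platform open to arbitrary contributors.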
And then my sister, who at the time was just finishing college or something, I said to her, if you want to learn Python, you should just submit a bot for my platform. And she built a therapist bot for me, and I was like, okay, cool. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent an average of 20 minutes on the app. I was like, oh my God, what bot were they speaking to for an average of 20 minutes? And I looked, and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for recipe help. There was no demand for news. There was no demand for dad jokes or pub quiz or fun facts. What they wanted was the therapist bot. At the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun way to consume news is like Twitter. The value of there being a back and forth wasn't that high, right? And I thought, if I need help with a recipe, I'd actually just go... like, the New York Times has a good recipe section, right? It's not actually that hard. And so I just thought, the thing that AI is 10x better at is a sort of conversation that's not intrinsically informative, but is more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to humans or human-like entities, and they want to have fun.
And that was when I started to look less at platforms like Google, and I started to look more at platforms like Instagram. And I was trying to think about why people use Instagram. And I could see that I think Chai was filling the same desire, or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like the Rock is making himself pancakes on a cheese plate, you kind of feel a little bit like you're the Rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel sad and like a lonely person. But with AI, you can talk to it and tell it stories and it tells you stories, and you can play with it for as long as you want. And you don't feel like you're a sad, lonely person. You feel like you actually have a friend.

Alessio [00:16:29]: And why is that? Do you have any insight on that from using it?

William [00:16:33]: I think it's just human psychology. I think it's the idea that, with old-school social media, you're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, I just swipe and swipe and swipe. And even though I'm getting the dopamine of watching an engaging video, there's this other thing building in my head, which is: I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just consuming. So you don't have a sense of remorse, basically. And, you know, I think on the whole, the way people talk about and interact with the AI, they speak about it in an incredibly positive sense.
Like, we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube don't quite tick. From that point on, it was about building more and more human-centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's a cool persona for teenagers to want to interact with? And I was trying to find the influencers and stuff like that, but no one cared. They didn't want to interact with them, yeah. And instead, the really special moment was when we had the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are. And rather than me trying to guess every day what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. So we took the API for, let's just say it was GPT-J, which was this 6 billion parameter open-source transformer-style LLM. We took GPT-J. We let users create the prompt. We let users select the image, and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called, like, bully in the playground, right? That was a whole category that I never would have guessed. People love to fight. They love to have a disagreement, right? And then there'd be all these romantic archetypes that I didn't know existed.
And so as the users could create the content that they wanted, that was when Chai was able to get this huge variety of content. Rather than appealing to the 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. Just as Instagram is this social media platform that lets people create images and videos and upload them, Chai was really about how we can let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.

Alessio [00:20:00]: Where did the Chai name come from? Because you started down the same path. I was like, is it Character AI shortened? You started at the same time, so I was curious. The UK origin was like the second guess, the chai.

William [00:20:15]: We started way before Character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in late 2022, was it late 2022 or maybe early 2023, Chai was the number one AI app in the App Store. So we would have something like 100,000 daily active users. And then one day we saw there was this website. And we were like, oh, this website looks just like Chai. And it was the Character AI website. And I think it's much more common knowledge nowadays that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.

swyx [00:21:03]: You found the PMF for them.

William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. And then that was when I learned a lesson, which is that if you're VC-backed... So Chai, we'd got to this point, and I was the only person who'd invested.
I'd invested maybe 2 million pounds in the business. And from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when Character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. They don't know what they're building. They're building the wrong thing. But then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6-billion-parameter model, right? How big was the model that Character AI could afford to serve, right? So we would be spending, let's say, a dollar per user, right? Over the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Like, aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than Character AI's? And then I was like, oh, okay, I get it. This is the Silicon Valley-style hyper-scale business. And so, yeah, we moved to Silicon Valley and got some funding and iterated and built the flywheels. And I'm very proud that we were able to compete with that. And I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how DeepSeek have been able to produce such a compelling model when compared to someone like an OpenAI, right? So DeepSeek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, people are like, why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up DeepSeek.
So we have to ask: you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. I have a great affinity for companies which are founder-led, customer-obsessed, and just try and build something great. And I think what DeepSeek have achieved there is quite special: they've got this amazing inference engine. They've been able to reduce the size of the KV cache significantly. And by being able to do that, they're able to significantly reduce their inference costs. And I think with AI, people get really focused on the foundation model, or the model itself, and they don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is very, very long. For comparison, let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send, say, 150 messages. That's a lot of completions, right? It's quite different from an OpenAI scenario where people might come in with a particular question in mind, and they'll ask one question and a few follow-up questions, right? So because they're consuming say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context.
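The numbers in this exchange can be checked as a quick back-of-envelope. These are the illustrative round figures from the conversation (150 messages per session, a handful of Q&A requests, a million dollars of spend for a million users), not anyone's actual economics:

```python
# Back-of-envelope using the round numbers from the conversation.
chat_messages_per_session = 150  # a 90-minute companion-chat session
qa_requests_per_session = 5      # one question plus a few follow-ups

ratio = chat_messages_per_session / qa_requests_per_session
print(f"~{ratio:.0f}x as many completions per chat session")

# "A million users, and we'd spend a million dollars on the AI":
annual_ai_spend = 1_000_000  # dollars per year, aggregated
annual_users = 1_000_000
print(f"~${annual_ai_spend / annual_users:.2f} of inference spend per user per year")
```

That 30x request multiplier is why the cost/quality balance of inference matters so much more for a conversational app than for a Q&A product.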
And now, what OpenAI is doing to great fanfare is, with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's scaling up on the inference-time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance. Whether it's the MMLU score or any of these benchmarks that people like to look at, if you just get that score, it doesn't really tell you anything. Because really, progress is made by improving the performance per dollar. And so I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like, and if they're able to match what DeepSeek have been able to achieve with this performance-per-dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of some of the numbers? So I think last I checked, you have like 1.4 million daily actives now. It's like over $22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well-funded companies. Character AI got, I think it was almost a $3 billion valuation. And 5 million DAU is the number I last heard for them. Talkie, which is a Chinese-built app owned by a company called MiniMax, they're incredibly well funded. And these companies didn't grow by a factor of three last year. Right.
And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform, I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. The first thing I would say is, I think the most important thing to know about success is that success is born out of failures. Right? It's through failures that we learn. You know, if you think something's a good idea, and you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails, there's a gap between the reality and the expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods, that's us reaping the rewards of that. So for the growth chart of 2024, I think the first thing that really put a dent in our growth was our backend. We just reached this scale. From day one, we'd built on top of GCP, which is Google's cloud platform. And they were fantastic.
We used them when we had one daily active user, and they worked pretty good all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely well. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We used Firebase. So we used Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, just the way we were using it, it wouldn't scale any further. And so we had a really, really painful, at least three-month period as we migrated between different services, figuring out what requests do we want to keep on Firebase, and what ones do we want to move onto something else? And then, you know, making mistakes and learning things the hard way. And then after about three months, we got that right. So we would then be able to scale to the 1.5 million DAU without any further issues from GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates, money spent, and the star rating that they give you. In the App Store. In the App Store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm.
And then it can take a long time to earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I'm not going to lie, I have a feeling it was when Character AI got... I was thinking. I think so. So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. The product doesn't change, right? The product just is what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue is, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's the second thing. I think a third thing is we've really built a great data flywheel.
The AI team sort of perfected their flywheel, I would say, at the end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is, when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate. We can look at benchmarks, and we can talk about the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it's at this point now where we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate through a huge quantity of LLMs. So our team ships, let's just say, minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week. There was a time when even doing five a month was a challenge, right? We were able to change the feedback loops from the old way, which is: let's launch these three models, let's do an A/B test, let's assign different treatments to different cohorts, let's wait 30 days to see what the day-30 retention is. If you're doing an app, that's like A/B testing 101: do a 30-day retention test, assign different treatments to different cohorts, and come back in 30 days. That's insanely slow. It's just too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours.
And when we did that, we could really, really, really perfect techniques like DPO, fine-tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so in Q3 and Q4, the amount of AI improvement we got was astounding. It was getting to the point where I thought, how much more edge is there to be had here? But the team just could keep going and going and going. That was number three for the inflection points.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is, if you go on our Reddit or you talk to users of the AI, there's a clear date. It's somewhere in October or something. The users, they flipped. Before October, the users would say Character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than Character AI. And that was a really clear positive signal that we'd sort of done it. And I think you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, the barriers to switching are pretty low. Like you can try Character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to Character. So the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay, and they can tell. So the fourth one was, we were fortunate enough to make this hire. We hired one really talented engineer, and then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes.
Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so, picking up options. Yeah, exactly. And so he was kind of looking at our metrics. And I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in, and we just started ramping up the user acquisition. So that looks like spending, let's say we started spending $20,000 a day, and it looked very promising. Right now we're spending $40,000 a day on user acquisition. That's still only half of what Character AI or Talkie may be spending. But from that, we were growing at a rate of maybe, say, 2x a year, and that got us growing at a rate of 3x a year. So I'm evolving more and more towards the Silicon Valley-style hypergrowth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge... You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on, like, the rocket or the jet engine or something, which is just: you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work.
Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The most surprising thing about what doesn't work is that almost everything doesn't work. That's what's surprising. And I'll give you an example. So a year and a half ago at the company, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get it in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because it's like, to get the flywheel, to get the users, you have to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when Character launched voice. They launched it, I think, at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency was a huge problem. Cost was a huge problem. Getting the right quality of the voice was a huge problem. Right? Then there's the user interface and getting the right user experience. Because you don't just want it to start blurting out. Right? You want to kind of activate it. But then you don't want to have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A/B test, there was no change in any of the numbers.
And I was like, this can't be right, there must be a bug. And we spent a week just checking everything, checking again, checking again. And it was like, the users just did not care. Something like only 10 or 15% of users even clicked the button to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if it's something that one in seven people use for one seventh of their time, you've changed like 2% of the experience. So even if that 2% of the time is insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, a lot of the stuff which I do, you can have a theory, and you test it. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. Oh, yeah, they were very good. It was just, you know, if you listen to an Audible or Kindle or something, you just hear this voice. And you don't go, wow, this is special, right? It's a convenience thing. But the idea is that if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform where you can watch a Mr. Beast video, and it's the most engaging, fun video that you want to watch, you'll go to YouTube. And so for audio, you can't just put the audio on there and people go, oh yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right?
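William's back-of-envelope here checks out in two lines; the one-in-seven figures are the approximate numbers he quotes:

```python
# If ~1 in 7 users engage the audio feature, for ~1/7 of their session
# time, the share of the total experience that changed is tiny.
adoption = 1 / 7   # fraction of users who tap the audio button
usage = 1 / 7      # fraction of their session spent using it
affected = adoption * usage
print(f"{affected:.1%} of the overall experience changed")
```

So even a dramatically better audio experience moves only about 2% of total user-time, which is invisible in retention, engagement, and monetization metrics.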
It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. Those are the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move the needle. Cool. So you killed it. I don't see it anymore. Yep. So I love this. It's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from the Steve Jobs types, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is saying no, deciding what not to work on. All of these sort of lessons, they just are painfully true. They're painfully true. So now everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break things.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtleneck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and then I want to pass it to Alessio, is on just multi-modality in general. This actually comes from Justine Moore from a16z, who's a friend of ours. And a lot of people are trying to do voice, image, video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer.
What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problem for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this. All the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. These guys are out there going, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? And so the way I speak about it is this. At Chai, we have this AI engine on which sits a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? But it's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so it's got to be the case that there exists, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, which then he shares on the YouTube platform, until there's a team that's earning 100 million a year, or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build.
And getting too caught up in the tech, I think, is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the audience rating is high. The Rotten Tomatoes score sucks, but the audience rating is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to some of what you were saying, it's like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think something that's interesting to discuss is moats. And what is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of the content creators, the users, the consumers, and then you have the algorithms. And so this creates a sort of flywheel, where the algorithms are able to be trained on the users and the users' data, and the recommender systems can then feed information back to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. And that's why it doesn't do well on Amazon. If he wants to do well on Amazon, well, how many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. He needs to get those iterations in on Amazon.
So at Chai, I think it's all about how we can get the most compelling, rich user-generated content, stick that on top of the AI engine and the recommender systems, such that we get this beautiful data flywheel: more users, better recommendations, more creators, more content, more users.Alessio [00:46:34]: You mentioned the algorithm. You have this idea of the Chaiverse on Chai, and you have your own kind of LMSYS-like Elo system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you built it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app. And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots, and it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So I don't want just people at Chai training the AI. I want people, not middle-aged men, building the AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was Chaiverse. And Chaiverse is kind of like a prototype, is the way to think about it. And it started with this observation that, well, how many models get submitted to Hugging Face a day? It's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider, what does it take to build an LLM? It takes a lot of work, actually. Someone devoted several hours of compute, several hours of their time, prepared a dataset, launched it, ran it, evaluated it, submitted it, right?
So there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? We don't want to serve users the crappy models, right? So what we would do is, I love the LMSYS style. I think it's really cool. It's really simple. It's a very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model A, this is from model B, which is better? And so if someone submits a model to Chaiverse, what we do is we spin up a GPU, we download the model, we host that model on the GPU, and we start routing traffic to it. We think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not entertaining. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get to the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do a routing thing, where you say, for a given user request, we're going to try and predict which of these N models the user will enjoy the most. That turns out to be pretty expensive and not a huge source of edge or improvement.
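The LMSYS-style pairwise ranking described here can be sketched with a standard Elo update. The K-factor of 32 and the 1,000-point starting score are conventional choices for illustration, not Chaiverse's actual parameters:

```python
# Minimal Elo ranking from pairwise "which reply is better?" votes.

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32) -> None:
    """Shift both ratings toward the observed outcome (zero-sum)."""
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e)
    ratings[loser] -= k * (1 - e)

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# Simulate user votes: model_a wins 7 of 10 head-to-head comparisons.
for outcome in ["a"] * 7 + ["b"] * 3:
    if outcome == "a":
        update(ratings, "model_a", "model_b")
    else:
        update(ratings, "model_b", "model_a")
print(ratings)  # model_a ends above model_b
```

With a few thousand such votes per model (the ~5,000-completion figure mentioned above), the rankings stabilize enough to separate the bottom 80% from the top 20%.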
Something that we love to do at Chai is blending, which is, you know, the simplest way to think about it is: you're going to pretty quickly see you've got one model that's really smart and one model that's really funny. How do you give the user an experience that is both smart and funny? Well, for 50% of the requests, you serve them the smart model, and for 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but that's the 80-20 solution. If you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's like the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing, so you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me: after you do the ranking, you get an Elo score, and you can track a user from their first join date, the first date they submit a model to Chaiverse. They almost always get a terrible Elo, right? So let's say on the first submission they get an Elo of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do they have to come up with this themselves? We do, we do. We try and strike a balance between giving them data that's very useful and being compliant with GDPR, which means you have to work very hard to preserve the privacy of the users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum.
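The 50/50 blending described a moment ago is almost trivially simple to sketch. The model names and the returned string are placeholders for real completion calls, not Chai's implementation:

```python
# The "80-20" blending trick: route each request at random between two
# models with complementary strengths.
import random

MODELS = ["smart_model", "funny_model"]  # placeholder model names

def blended_reply(user_message: str) -> str:
    model = random.choice(MODELS)  # uniform 50/50 routing per request
    return f"[{model}] reply to: {user_message}"  # stand-in for a real call
```

Over a long session the user sees an interleaving of both styles, which reads as a single persona that is both smart and funny; weighted routing (e.g. `random.choices` with weights) is the obvious "more sophisticated" step on top.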
But even a score alone people can optimize pretty well, because they're able to come up with theories: submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate.
Alessio [00:51:46]: Last year, you had this post on your blog, crowdsourcing the leap to the 10 trillion parameter AGI, and you call it a mixture of experts, recommenders. Any insights? Updated thoughts, 12 months later?
William [00:51:58]: I think the timeline for AGI has certainly been pushed out, right? Now, I'm a controversial person, I don't know, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to be far worse at reasoning than people thought. Whenever I hear people talk about LLMs as reasoning engines, I cringe a bit. I don't think that's what they are. I think of them more as a simulator. They get trained to predict the next most likely token. It's like a physics simulation engine: you get these games where you can construct a bridge, and you drop a car down, and then it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning; it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence is not a crowdsourcing problem, right? Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. It's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
And it's easy to conflate the two, because if you ask it, say, who was the seventh president of the United States, and it gives you the correct answer, well, I don't know the answer to that, and you can conflate that with intelligence. But really, that's a question of knowledge. Knowledge is really about saying, how can I store all of this information, and then how can I retrieve something that's relevant? Okay, they're fantastic at that: storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up with a new word. How does one describe it? AI should contain more knowledge than any individual human, and it should be more accessible than any individual human. That's a very powerful thing. That's super powerful.
swyx [00:54:07]: But what words do we use to describe that? We had a previous guest on, Exa AI, that does search, and he tried to coin super knowledge as the opposite of super intelligence.
William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.
swyx [00:54:24]: You can store more things than any human can.
William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. That thing will exist; that thing can be built. And I think you can start with something that's entertaining and fun. I often think it's going to be a 20-year journey, and we're in year four. It's like the web, and this is 1998 or something. You've got a long, long way to go before the Amazon.coms are these huge, multi-trillion-dollar businesses that every single person uses every day. And so AI today is very simplistic.
And it's fundamentally the way we're using it, the flywheels, and this ability for everyone to contribute to it, that will really magnify the value that it brings. Right now, I think it's a bit sad. You have big labs, and I'm going to pick on OpenAI: they go to these human labelers and say, we're going to pay you to label this subset of questions so we get a really high-quality data set, and then we're going to use our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. All the people that were interested in blockchain, well, this is what needs to be decentralized. Because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. But that's kind of the exciting thing about Wikipedia: it's this understanding of the incentives. You don't need money to incentivize people. You don't need dog coins. No. Sometimes people get the satisfaction from…
I'm joined by a former colleague (prior to my retirement), Walter Eccles, for this podcast. Although I've known him as a technologist over the past decade or so, I learned about his involvement in local theater and TV in various capacities, including acting. In this podcast, we discuss the intersection of technology and the performing arts across his decades of activity. We recorded this podcast in the Fête restaurant during lunch so I could test both a new decibel meter and a new set of wireless lavalier microphones. The lav's receiver was connected to an iPhone using a USB-C cable and recorded using the Just Press Record app. Although the lav mic system has a built-in noise reduction setting, I left it off to allow me more granular post-production control. This resulted in what I consider a very good voice recording of our conversation. However, I decided to post-process using Adobe Podcast Studio at 75% enhancement to reduce the background restaurant sounds and music.
Prime Minister Justin Trudeau's resignation earlier this week came after months of people calling for him to step down. How he will be remembered will largely depend on what comes next – how the Liberal party moves forward, what the next government does, and how Trudeau himself writes his next chapter. Campbell Clark is the Globe's chief political writer. Today, he joins The Decibel for a look back at Trudeau's career from the very beginning: the rise to the top, the long fall from grace, and what may come to define his legacy. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
After three federal election wins and just over nine years in office, Prime Minister Justin Trudeau announced his resignation on Monday. Trudeau had been facing mounting pressure within his party to step down, after many months of polls showing dwindling public support for the Liberal party and several key by-election losses. Now, the Liberal Party has to choose a new leader while Parliament is prorogued. The Globe's senior political reporter Marieke Walsh joins The Decibel to explain what led to Trudeau's exit and what comes next as political uncertainty now looms over Canada. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Point Pelee National Park juts out into Lake Erie like a finger, and every spring thousands of birds touch down on it. It's a key stop along their migratory routes from the southern U.S., Central and South America to northern Canada. But climate change has been shifting the conditions of migration, making it harder for some birds and ultimately affecting bird populations, which are already in steep decline. Decibel host Menaka Raman-Wilms, producer Rachel Levy-McLaughlin and Globe and Mail columnist Marcus Gee headed to Point Pelee to see spring migration up close. A special thanks to Matt Fuirst and Birds Canada, as well as the Cornell Lab of Ornithology, which provided some sounds from its Macaulay Library in this episode. Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
Distance running, once a relatively niche sport, has exploded in popularity. The trend has been ongoing for at least a decade, but 2024's running season may be the biggest one yet. Marathon race organizers are expecting record participation in races this year, both in Canada and in cities around the world. Today, Ben Kaplan, general manager of iRun Magazine, Allison Hill, co-founder of Hill Run Club, and members of The Decibel's own running club explain how the sport has grown more inclusive and diverse, drawing in a whole new generation of runners. This episode originally aired on May 1, 2024. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
This week we bring you the final installment of our coverage of Night Demon's summer 2024 European tour. We run through shows in Stuttgart, Switzerland and Spain, with commentary from the band and the crew as well as live clips from the gigs themselves. You will hear about oversold venues, volume restrictions, an epic jam in the crowd in Spain, and a missed connecting flight out of Lisbon. We conclude with Cirith Ungol's abbreviated set at the Maryland Doom Fest right after the European tour ended. Become a subscriber today at nightdemon.net/subscriber. This week, subscribers have access to the bonus content below:
Streaming Audio: Full show - Stuttgart, Germany - June 17, 2024
Streaming Audio: Full show - Aarau, Switzerland - June 18, 2024
Streaming Video: Full show - Rock Imperium Fest, Spain - June 20, 2024
Listen at nightdemon.net/podcast or anywhere you listen to podcasts! Follow us on Instagram Like us on Facebook
Food and family are often front and centre during the holidays. These two ingredients also help make up our identities and cultures. So today, The Decibel is sharing stories of finding family through the act of baking. This episode originally aired on December 22, 2023. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
At this time of year, there's nothing better than settling in with a good Christmas movie. When it comes to the made-for-TV variety – usually made by Hallmark or Netflix – they tend to follow a formula: girl from the big city ends up in a small town, connects with a local guy, they encounter a series of surmountable obstacles, and eventually end up together – with a heavy sprinkle of holiday magic. A lot of these movies are filmed in and around Almonte, a town about 40 minutes west of Ottawa. It's been used so many times that SNL joked they're all "filmed in a month in Ottawa" in a 2017 sketch, and the New York Times profiled the town back in 2020. According to the municipality, 24 movies have been shot there since 2015. The town has a sparkle that shines through in these movies… and our producers wondered whether that sparkle was as bright in real life. In this holiday special episode, The Decibel goes to Almonte to see if the town is as full of Christmas magic as it is on screen. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Spit Hit for Sept 19th, 2024: This episode is packed. We discuss roller coasters vs. water slides, dragon features we'd want, and being tall and skinny vs. short and ripped. Then we dive into our 20th Liar, Liar segment. Is today the day? Lastly, we polish it off with a draft of songs that we are embarrassed to like. Re-brand Mondays with some comedy! Subscribe and tell your friends about another funny episode of The Spitballers Comedy Podcast! Connect with the Spitballers Comedy Podcast: Become an Official Spitwad: SpitballersPod.com Follow us on Twitter: x.com/SpitballersPod Follow us on IG: Instagram.com/SpitballersPod Subscribe on YouTube: YouTube.com/Spitballers