The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Mikey Shulman is the Co-Founder and CEO of Suno, the leading music AI company. Suno lets everyone make and share music. Mikey has raised over $125M for the company from the likes of Lightspeed, Founder Collective, Nat Friedman, and Daniel Gross. Prior to founding Suno, Mikey was the first machine learning engineer and head of machine learning at Kensho Technologies, which was acquired by S&P Global for over $500 million.

In Today's Episode with Mikey Shulman:

1. The Future of Models: Who wins the future of models: Anthropic, OpenAI, or X? Will we live in a world of many smaller models? When does it make sense to build specialised vs. generalised models? Does Mikey believe we will continue to see the benefits of scaling laws?

2. The Future of UI and Consumer Apps: Why does Mikey believe that OpenAI did AI consumer companies a massive disservice? Why does Mikey believe consumers will not choose their model or pay for a superior model in the future? Why does Mikey believe that good taste is more important than good skills? Why does Mikey argue physicists and economists make the best ML engineers?

3. The Future of Music: What is going on with the lawsuit between Suno and some of the biggest labels in music? How does Mikey see the future of music discovery? How does Mikey see the battle between Spotify and YouTube playing out? And the battle between TikTok and Spotify?
Alan Cowen is the co-founder and CEO of Hume, a company building voice-to-voice foundation models. They recently raised their $50M Series B from Union Square Ventures, Nat Friedman, Daniel Gross, and others. Alan's favorite book: 1984 by George Orwell.

(00:01) Introduction
(00:06) Defining Voice-to-Voice Foundation Models
(01:26) Historical Context: Handling Voice and Speech Understanding
(03:54) Emotion Detection in Voice AI Models
(04:33) Training Models to Recognize Human Emotion in Speech
(07:19) Cultural Variations in Emotional Expressions
(09:00) Semantic Space Theory in Emotion Recognition
(12:11) Limitations of Basic Emotion Categories
(15:50) Recognizing Blended Emotional States
(20:15) Objectivity in Emotion Science
(24:37) Practical Aspects of Deploying Voice AI Systems
(28:17) Real-Time System Constraints and Latency
(31:30) Advancements in Voice AI Models
(32:54) Rapid-Fire Round

Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
Friends, the guest of this edition of Leaders is Daniel Gross, CEO of the Penny România retail chain, a company with more than 7,000 employees. He took over the leadership of the company at a time when food retail was facing significant challenges, including increased competition and shifts in consumer behaviour. Under his leadership, Penny has continued to expand in the Romanian market, focusing on offering affordable products and developing sustainability strategies. He was employee number 19 at the company and has worked in many of its departments, so you could say he knows how everything works. He fights to get as many 100% Romanian products onto the shelves as possible, and we learn that this is not at all easy to do; he also argues convincingly that we Romanians are both demanding and hard-working. Enjoy!

02:28 Penny, main sponsor of the national football team
10:43 The yellow wall
13:22 At the heart of the national team
20:00 We Romanians are very demanding and hard-working
28:43 Employee no. 19
35:16 Measuring product visibility
40:00 Collaborations with Banca pentru Alimente and Bonapp (an app)
42:00 Fresh meat is 100% Romanian
47:50 TripluRO products
1:00:18 A Romanian assortment as a strategic objective
1:01:20 Why Polish apples are cheaper than Romanian ones
1:04:00 Price matters more than a product's origin
1:14:04 The impact of a higher VAT from 2025
1:24:00 I've been healthier and happier since I took up running
1:37:07 Life in Romania is (fairly) good
1:49:23 Romania's public administration is... slow
1:57:43 How an autonomous store works
2:05:57 Competition
In this episode, we dive into three groundbreaking developments at the intersection of technology, AI, and healthcare that are set to shape the future:

### 1. **AI-Driven Drone Swarms Combatting Wildfires**
- Researchers from the University of Sheffield and the University of Bristol, in collaboration with Windracers, have developed **AI-driven, self-coordinating drone swarms** to fight wildfires more efficiently. These drones use advanced thermal and optical imaging to autonomously detect, assess, and monitor fires.
- Tested by Lancashire Fire and Rescue, these drones can carry up to **100 kg of fire retardant**, making them a formidable tool for early wildfire mitigation. The Windracers ULTRA drones can monitor vast areas—potentially the size of Greece—helping to address challenges in remote wildfire detection and response.
- With climate change causing more frequent and severe wildfires in the UK, this innovation represents a significant step forward in cost-effective, rapid-response firefighting.

### 2. **Transforming Healthcare with Smartphone-Based Disease Detection**
- Google has trained a powerful AI model named **HeAR (Health Acoustic Representations)** on 300 million audio samples to detect diseases using just a smartphone. The model listens to sounds like coughs and breathing patterns to detect early signs of respiratory illnesses, including tuberculosis.
- Partnering with Salcit Technologies in India, Google aims to deploy this technology in high-risk, underserved communities where access to traditional diagnostic tools is limited. The technology could potentially expand to identify other respiratory and cardiovascular conditions, revolutionizing early disease detection and healthcare accessibility worldwide.
- This breakthrough showcases the power of **bioacoustics**—the combination of biology and acoustics—in extracting crucial health information from everyday sounds, making healthcare more accessible and effective in remote areas.

### 3. **Safe Superintelligence (SSI) Raises $1 Billion to Build the Future of AI**
- In a monumental move, **Safe Superintelligence (SSI)**, a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, raised a staggering **$1 billion in funding** just three months after its inception.
- SSI's mission is to develop superintelligent AI systems that are safe and beneficial for humanity. Co-founded by Sutskever, Daniel Gross, and Daniel Levy, the startup is already valued at **$5 billion** and has attracted funding from major venture capital firms such as Andreessen Horowitz and Sequoia Capital.
- With only ten employees, SSI plans to use the funds to acquire computing power and hire top-tier talent, focusing on AI safety to ensure that superintelligent AI systems surpass human intelligence without posing risks.
- This massive seed round underscores the importance and urgency of AI safety as we venture further into an era dominated by artificial intelligence.

### **Key Takeaways:**
- **Innovation in Disaster Management:** AI-driven drone swarms represent a significant leap in wildfire mitigation, potentially saving lives and property.
- **Revolutionizing Healthcare:** Smartphone-based AI for disease detection could democratize healthcare, providing critical diagnostic capabilities to underserved regions.
- **Future of Safe AI:** SSI's unprecedented funding round reflects a growing recognition of the need for safe, superintelligent AI systems that benefit humanity.

**Don't miss out on these discussions and more as we explore the future of technology and its potential to reshape our world!**

Get in touch with Myles at mylesdhillon@gmail.com
Send us a text

PRE-IPO STOCK FUNDS CLOSING TO NEW INVESTORS ON SEP 13 (NEXT FRIDAY)
AG Dillon has seven (7) pre-IPO stock funds closing on Friday, Sep 13. Next Friday. See fund list at www.agdillon.com/product (page 3). Available for purchase at Schwab, Fidelity, or directly at AG Dillon. Email aaron.dillon@agdillon.com to invest.

Subscribe to AG Dillon Pre-IPO Stock Research at agdillon.com/subscribe:
- Wednesdays = secondary market valuations, revenue multiples, performance, index fact sheets
- Saturdays = pre-IPO news and insights, webinar replays

00:07 | Safe Superintelligence Raises $1B for New AI LLM
- AI venture focused on creating safe AI models
- Co-founded by Ilya Sutskever, Daniel Gross, and Daniel Levy
- Raised $1B in May 2024 from investors like Andreessen Horowitz and Sequoia Capital
- Offices in Palo Alto and Tel Aviv
- For-profit entity addressing AI safety

00:43 | OpenAI Hits 200M Weekly Active Users for ChatGPT
- AI large language model business
- ChatGPT now has 200M weekly active users, doubling since Nov 2023
- 1M paid corporate users, up from 600K in April 2024
- Expected to generate $2B annually from $20/month premium subscriptions
- 50% of corporate users are in the U.S.; strong presence in Germany, Japan, U.K.
- Secondary market valuation: $103.8B (+20.7% vs Apr 2024 round)

01:33 | Salesforce Acquires Own Company for $1.9B
- Data management firm specializing in data backup and recovery
- Acquired by Salesforce for $1.9B in cash
- Previously valued at $3.35B in Aug 2021
- 7,000 customers; raised $507.3M from Tiger Global and Salesforce Ventures
- Global data backup market valued at $12.9B in 2023, growing at a 10.9% CAGR

02:14 | ByteDance Raises $600M for Dongchedi, Valued at $3B
- Chinese parent company of TikTok
- Raising $600M for car trading platform Dongchedi
- Dongchedi boasts 35.7M monthly active users
- Competing with platforms like Autohome and Bitauto
- Secondary market valuation: $300B (+11.8% vs Dec 2023 round)

03:00 | Anthropic's Claude AI Powers New Amazon Alexa
- Amazon to release new Alexa powered by Anthropic's Claude AI in October
- Paid version to cost $5-$10/month; current version remains free
- Estimated $600M in annual sales if 10% of Alexa's 100M users opt for the paid version
- Anthropic has a $23.6B secondary market valuation (+31.4% vs Jan 2024 round)

03:49 | xAI's Colossus System Becomes Most Powerful AI Trainer
- AI large language model business by Elon Musk
- Colossus built with 100,000 Nvidia H100 GPUs in 122 days, doubling to 200,000 GPUs
- Phase 1 cost estimated at $2B, located in Memphis
- Colossus will consume 150 megawatts of power and 1M gallons of water daily for cooling
- Secondary market valuation: $26.1B (+8.9% vs May 2024 round)

04:45 | X Launches Beta Version of TV App for Fire TV and Google TV
- Formerly Twitter, now focusing on becoming a "video-first" platform
- Launched beta version of TV app for Amazon Fire TV and Google TV
- Initial feedback suggests bugs, but fixes are anticipated soon
- Aimed at reviving ad revenue and attracting video creators

05:26 | Fidelity Cuts X Holdings Valuation by Another 4%
- Fidelity reduced X Holdings (formerly Twitter) valuation by 4% in July
- Total decrease of 72% since Elon Musk's acquisition in Oct 2022
- New valuation implies X shares are worth $15, down from Musk's original $54.20/share
- X's total value now approximately $21B

06:03 | Pre-IPO Stock Market Weekly Performance
06:48 | Pre-IPO Stock Vintage Index Weekly Performance
Min-Kyu Jung is the CEO and co-founder at Ivo, an AI contract law assistant for legal teams, which has raised $6.2 million in total funding from investors including Uncork Capital, Fika Ventures, GD1, Phase One, and Daniel Gross. Min-Kyu got the idea for Ivo (previously Latch) while working as a corporate lawyer in New Zealand, when he saw how much time, effort and money were spent drawing up agreements. His entrepreneurial streak got the better of him. Drawn to what he saw as "low-hanging fruit", the under-optimised processes around him in the legal profession, he taught himself how to code in two months and took the leap to start a startup.

Ivo works in Microsoft Word to explain legal terms, determine if clauses are market standard, and instantly create a summary of an agreement to help speed up the process. After a cold outbound DM landed him an angel investment from Daniel Gross in San Francisco, he moved his whole team over for an initial three months, and never looked back. He thinks other Kiwi founders, at least those who aspire to be at the frontiers of AI, should do the same, and issues a challenge to other founders to reflect on where they need to locate to maximise their chances of success.

He's not afraid to roll up his sleeves and do the work to sell and get connected with people, even if that means lots of cold outbound: "Kiwis tend to be modest and avoid making impositions on others. You will need to overcome this cultural quirk and simply cold email / DM people you find interesting." We talk about how social capital flows in the Bay Area, and how it helped him build a local network, recruit his team, land hundreds of customer conversations, and more: "The SF Bay Area has a strong culture of paying it forward.
Successful people here are often willing to spend time and social capital helping founders with no network if they seem to be working on something interesting." We talk about his thesis for AI product development, how founders should think about designing user experiences, how Ivo handles issues with Large Language Model ("LLM") reliability and hallucinations, and how he's preparing to leverage ever more powerful AI models to his advantage in the coming years. This was a fun episode to record, and we look forward to your feedback!

Where to find Min-Kyu online:
* LinkedIn: https://www.linkedin.com/in/min-kyu-jung/
* Twitter/X: https://twitter.com/mkjung

Know an expat we should feature on diaspora.nz? Reach out via david@diaspora.nz

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.diaspora.nz
This week on Upstream, we're releasing a fascinating discussion with economist, professor, and bestselling author Tyler Cowen about how to find talented people. This was recorded in 2022 around the launch of his book 'Talent: How to Identify Energizers, Creatives, and Winners Around the World' co-authored with Daniel Gross. Tyler and Erik discuss strategies for assessing raw talent, recognizing late bloomers, and fostering an environment conducive to high achievers. They also cover the importance of understanding founder compatibility, building strong peer groups, and the role of mentorship in talent development.
Our 171st episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris). Feel free to leave us feedback here. Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

Timestamps + Links:
(00:00:00) Intro / Banter

Tools & Apps
(00:03:13) Apple Intelligence: every new AI feature coming to the iPhone and Mac
(00:10:03) 'We don't need Sora anymore': Luma's new AI video generator Dream Machine slammed with traffic after debut
(00:14:48) Runway unveils new hyper realistic AI video model Gen-3 Alpha, capable of 10-second-long clips
(00:18:21) Leonardo AI image generator adds new video mode — here's how it works
(00:22:31) Anthropic just dropped Claude 3.5 Sonnet with better vision and a sense of humor

Applications & Business
(00:28:23) Sam Altman might reportedly turn OpenAI into a regular for-profit company
(00:31:19) Ilya Sutskever, Daniel Gross, Daniel Levy launch Safe Superintelligence Inc.
(00:38:53) OpenAI welcomes Sarah Friar (CFO) and Kevin Weil (CPO)
(00:41:44) Report: OpenAI Doubled Annualized Revenue in 6 Months
(00:44:30) AI startup Adept is in deal talks with Microsoft
(00:48:55) Mistral closes €600m at €5.8bn valuation with new lead investor
(00:53:12) Huawei Claims Ascend 910B AI Chip Manages To Surpass NVIDIA's A100, A Crucial Alternative For China
(00:56:58) Astrocade raises $12M for AI-based social gaming platform

Projects & Open Source
(01:01:03) Announcing the Open Release of Stable Diffusion 3 Medium, Our Most Sophisticated Image Generation Model to Date
(01:05:53) Meta releases flurry of new AI models for audio, text and watermarking
(01:09:39) ElevenLabs unveils open-source creator tool for adding sound effects to videos

Research & Advancements
(01:12:02) Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
(01:22:07) Improve Mathematical Reasoning in Language Models by Automated Process Supervision
(01:28:01) Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations
(01:30:32) An Empirical Study of Mamba-based Language Models
(01:31:57) BERTs are Generative In-Context Learners
(01:33:33) SELFGOAL: Your Language Agents Already Know How to Achieve High-level Goals

Policy & Safety
(01:35:16) Sycophancy to subterfuge: Investigating reward tampering in language models
(01:42:26) Waymo issues software and mapping recall after robotaxi crashes into a telephone pole
(01:45:53) Meta pauses AI models launch in Europe
(01:46:44) Refusal in Language Models Is Mediated by a Single Direction
(01:51:38) Huawei exec concerned over China's inability to obtain 3.5nm chips, bemoans lack of advanced chipmaking tools

Synthetic Media & Art
(01:55:07) It Looked Like a Reliable News Site. It Was an A.I. Chop Shop.
(01:57:39) Adobe overhauls terms of service to say it won't train AI on customers' work
(01:59:31) Buzzy AI Search Engine Perplexity Is Directly Ripping Off Content From News Outlets
(02:02:23) Outro + AI Song
Klarity, an accounting startup based in San Francisco, raised $70 million in a Series B funding round led by Nat Friedman and Daniel Gross, with additional support from Scale Venture Partners, Tola Capital, Picus Capital, Invus Capital, and Y Combinator. The raised funds will be used to expand Klarity's workforce, tripling it to 390 employees within the year. Klarity employs AI to process data in contracts and internal records, eliminating the need for manual work. This trend of significant funding is also observed in other accounting tech firms like Ageras, FloQast, and DataSnipper, which have also secured substantial investment to automate accounting tasks using AI. AI-driven startups in other sectors, such as legal tech, are also attracting significant investment.Learn more on this news visit us at: https://greyjournal.net/news/ Hosted on Acast. See acast.com/privacy for more information.
The Surgeon General Is Wrong. Social Media Doesn't Need Warning Labels
The Stanford Internet Observatory is being dismantled
Pop Culture Has Become an Oligopoly
Tesla takes fight for Elon Musk's pay package back to court
US sues Adobe for 'deceiving' subscriptions that are too hard to cancel
Apple, Meta set to face EU charges under landmark tech rules, sources say
Mozilla buys Anonym, betting privacy is compatible with ads
Jeff on Perplexity Discover
Luma extends memes
Gutenberg animated by Dream Machine
Ilya Sutskever, Daniel Gross, Daniel Levy announce Safe Superintelligence Inc.
Paper: "ChatGPT is bullshit"
How A.I. Is Revolutionizing Drug Development
McDonald's is ending its drive-thru AI test
US bank Wells Fargo fires employees for 'simulating' being at their keyboards
BeReal acquired by mobile apps and games company Voodoo
Netflix to Open Massive Entertainment, Dining and Shopping Complexes in Two Cities in 2025
Sounds of the Forest - Soundmap
Hamburger Dad
Reuters annual news report
ChatGPT of the Reuters report
Original YouTube deal memo & pitch deck

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau

Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

Sponsors: eufy.com 1password.com/twig
The AI Breakdown: Daily Artificial Intelligence News and Discussions
After months of speculation, Ilya Sutskever, co-founder of OpenAI, has launched Safe Superintelligence Inc. (SSI) to build safe superintelligence. With a singular focus on creating revolutionary breakthroughs, SSI aims to advance AI capabilities while ensuring safety. Joined by notable figures like Daniel Levy and Daniel Gross, this new venture marks a significant development in the AI landscape. Learn about their mission, the challenges they face, and the broader implications for the future of AI. Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'youtube' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Ilya Sutskever, the co-founder who left OpenAI, has set up his own AI company: Safe Superintelligence Inc. Joe van Burik tells you what you need to know in this Tech Update. Ilya Sutskever may be a familiar name: he was among those who chose to fire CEO Sam Altman from OpenAI last autumn, only to express regret shortly afterwards. Microsoft then helped ensure Altman could return, and Sutskever left the company behind ChatGPT last month. Now he is striking out on his own with Safe Superintelligence Inc., or SSI. The name refers to the ultimate form of artificial intelligence the company wants to achieve, a goal a certain group of AI researchers and entrepreneurs has been pursuing for some time. Sutskever's SSI claims it will do this with a focus on safety, and says it will go about it differently than OpenAI, Microsoft, and also Google. Also interesting: Sutskever is setting up this venture with a former OpenAI colleague, Daniel Levy, and with Apple's former AI chief, Daniel Gross. Sutskever and Gross both grew up in Israel, and SSI will have offices in both Palo Alto and Tel Aviv. Raising money is not a problem in any case, Gross told Bloomberg, but to what extent their AI technology will actually prove safer and more responsible in practice remains to be seen. Also in this Tech Update: Citigroup, the largest American bank after JPMorgan Chase and Bank of America, expects that more than half of banking jobs could be replaced by AI; and Snapchat has demonstrated how generative AI can be used to create filters for sharing images. See omnystudio.com/listener for privacy information.
Plus: Europe Struggles for AI Relevance (subscribe below)

Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

AI-Driven Blood Test Predicts Parkinson's Disease Years Before Symptoms
Researchers from UCL and University Medical Center Goettingen have developed an AI-driven blood test capable of predicting Parkinson's disease up to seven years before symptoms appear. This breakthrough, utilizing machine learning to analyze blood biomarkers, offers a promising method for early diagnosis and potential treatment to protect dopamine-producing brain cells.

AI System Predicts Heart Attacks Up to 10 Years in Advance
Oxford University scientists have developed an AI heart attack scan that can predict heart attacks up to a decade in advance. This AI technology, expected to be assessed by NICE and the NHS, analyzes artery inflammation not visible on standard CT scans, potentially saving thousands of lives annually by providing more accurate diagnoses.

Europe Struggles for AI Relevance Amid US Dominance
European AI firms face challenges due to American dominance in AI development, with chatbots often reflecting US cultural nuances, says Peter Sarlin of Silo AI. This "AI sovereignty" issue drives Europe to invest in AI infrastructure. However, without significant tech giants, Europe's efforts may fall short. Recent deals, like Mistral AI's partnership with Microsoft, highlight the continent's dependency on US platforms, complicating Europe's bid for AI independence.

AI Poised to Transform Banking Industry, Says Citigroup
Citigroup Inc. predicts AI will displace more banking jobs than in any other sector, potentially automating 54% of roles. The technology, which could add $170 billion to the industry by 2028, is already being used to enhance productivity and cut costs. Citigroup's CEO Jane Fraser emphasized moving AI from experimentation to practical application, with uses in custom investment recommendations and cybersecurity. However, AI adoption might not reduce headcount due to the need for AI managers and compliance officers. Despite AI's potential, challenges like chatbot comprehension and risks of misinformation remain.

Former OpenAI Chief Scientist Launches Safety-Focused AI Startup
Ilya Sutskever, co-founder and former chief scientist at OpenAI, announced the launch of Safe Superintelligence Inc. (SSI), a new AI startup focused on safety. SSI aims to develop a powerful AI system while avoiding commercial pressures. Co-founded by former Apple AI lead Daniel Gross and ex-OpenAI staff Daniel Levy, SSI prioritizes safety and progress without distractions from product cycles.

---
Send in a voice message: https://podcasters.spotify.com/pod/show/aidaily/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ilya Sutskever created a new AGI startup, published by harfe on June 19, 2024 on LessWrong. [copy of the whole text of the announcement on ssi.inc, not an endorsement] Safe Superintelligence Inc. Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It's called Safe Superintelligence Inc. SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures. We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent. We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else. If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age. Now is the time. Join us. Ilya Sutskever, Daniel Gross, Daniel Levy June 19, 2024 Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Like the show? Send us a text message on what you liked. I had never heard of Edmond Safra until I read Daniel Gross's informative and inspiring biography, A Banker's Journey. For those who knew him and did business with him, he was everyone's favorite banker. His banks never had to write off loans, and many of his early deals were done on a handshake. He never needed a government bailout, nor did he ever head to DC complaining about regulations. While his professional and personal story is uplifting, the Shakespearean periods of his life include the American Express saga and how he died. During this conversation, Dan Gross gives us a dozen compelling reasons to revisit this banker's remarkable life. Make More with Matt Heslin: Explore strategies to thrive financially, build legacy, and enhance life experiences. Listen on: Apple Podcasts, Spotify
An interview with Nat Friedman, former CEO of GitHub and creator of the Vesuvius Challenge, which aims to crack the riddles of the Herculaneum Papyri. In this episode: The Genesis of the Vesuvius Challenge; Early Attempts to Open the Scrolls; Using a Particle Accelerator to Scan the Scrolls; Partnering with Daniel Gross and Brent Seales; Nat's Childhood Experience with Open-Source Communities; How to Design Prize Incentives for a Complex Contest; Doing Crazy, Strange, and Risky Projects; A Possible Resurgence of Epicureanism? This episode is sponsored by the Ancient Language Institute. If you're interested in actually reading the newly unlocked scrolls, you will need to know the languages of the ancient world. The Ancient Language Institute will help you do just that. Registration is now open (until August 10th) for their Fall term, where you can take advanced classes in Latin, Ancient Greek, Biblical Hebrew, and Old English.
Economist Tyler Cowen confirms there are good reasons to be crypto-skeptical. Cryptocurrency is truly a new idea, and it's rare for society to encounter fundamentally new ideas. Cryptocurrency is well positioned to serve a crucial financial and transactional role as a globalized internet grows to include more of our lives. Crypto enthusiasts espouse grand plans that do not sound realistic, while crypto skeptics fail to appreciate the revolutionary nature of the technology. ------------------------------------------------------------------------------------------------------ About Tyler Cowen: Tyler is the Holbert L. Harris Chair of Economics at George Mason University and serves as chairman and general director of the Mercatus Center at George Mason University. He is co-author of the popular economics blog Marginal Revolution and co-founder of the online educational platform Marginal Revolution University. Tyler also writes a column for Bloomberg View, and he has contributed to The Wall Street Journal and Money. In 2011, Bloomberg Businessweek profiled Tyler as “America's Hottest Economist” after his e-book, The Great Stagnation, appeared twice on The New York Times e-book bestseller list. He graduated from George Mason University with a bachelor's degree in economics and earned a Ph.D. in economics from Harvard University. He also runs a podcast series called Conversations with Tyler. His latest book Talent: How to Identify Energizers, Creatives and Winners Around the World is co-authored with venture capitalist Daniel Gross. ---------------------------------------------------------------------------------------------------- About Big Think | Smarter Faster™ ► Big Think The leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century. 
Go Deeper with Big Think: ►Become a Big Think Member Get exclusive access to full interviews, early access to new releases, Big Think merch and more ►Get Big Think+ for Business Guide, inspire and accelerate leaders at all levels of your company with the biggest minds in business Learn more about your ad choices. Visit megaphone.fm/adchoices
An interview with economist Tyler Cowen on why American progress has seemed to stall and how we can get it back on track. The rate of progress in American society has been uneven throughout history, argues economist Tyler Cowen. Tremendous periods of growth are followed by periods of stagnation. Periods of growth occur when there is a breakthrough, and other advances quickly follow. For example, the Industrial Revolution and the electrification of homes allowed the standard of living to grow at a fast rate, particularly in the early to mid-20th century. But starting in the 1970s, progress slowed. One reason is that the easier tasks, like electrification, had already been accomplished. Also, government regulation and a general aversion to risk have made Americans less entrepreneurial. As a result, progress has slowed, and we have not matched our earlier performance. Today, we are at a pivotal crossroads between stagnation and growth. To get back to a growth mindset, he argues, we need to stop taking our prosperity for granted. -------------------------------------------------------------------------------------------- Chapters for easier navigation: 0:00 Intro 0:05 What's wrong with America 1:53 Can America make a comeback 3:27 When are we going to get vaccines This video is part of The Progress Issue, a Big Think and Freethink special collaboration. ------------------------------------------------------------------------------------------ About Tyler Cowen Tyler is the Holbert L. Harris Chair of Economics at George Mason University and serves as chairman and general director of the Mercatus Center at George Mason University. He is co-author of the popular economics blog Marginal Revolution and co-founder of the online educational platform Marginal Revolution University. Tyler also writes a column for Bloomberg View, and he has contributed to The Wall Street Journal and Money. 
In 2011, Bloomberg Businessweek profiled Tyler as “America's Hottest Economist” after his e-book, The Great Stagnation, appeared twice on The New York Times e-book bestseller list. He graduated from George Mason University with a bachelor's degree in economics and earned a Ph.D. in economics from Harvard University. He also runs a podcast series called Conversations with Tyler. His latest book Talent: How to Identify Energizers, Creatives and Winners Around the World is co-authored with venture capitalist Daniel Gross. ------------------------------------------------------------------------------------------ About Big Think | Smarter Faster™ ► Big Think The leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century. Go Deeper with Big Think: ►Become a Big Think Member Get exclusive access to full interviews, early access to new releases, Big Think merch and more ►Get Big Think+ for Business Guide, inspire and accelerate leaders at all levels of your company with the biggest minds in business Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metascience of the Vesuvius Challenge, published by Maxwell Tabarrok on March 30, 2024 on The Effective Altruism Forum. The Vesuvius Challenge is a million-plus dollar contest to read 2,000-year-old text from charred papyri using particle accelerators and machine learning. The scrolls come from the ancient villa town of Herculaneum, near Pompeii, which was similarly buried and preserved by the eruption of Mt. Vesuvius. The prize fund comes from tech entrepreneurs and investors Nat Friedman, Daniel Gross, and several other donors. In the 9 months after the prize was announced, thousands of researchers and students worked on the problem, decades-long technical challenges were solved, and the amount of recovered text increased from one or two splotchy characters to 15 columns of clear text with more than 2,000 characters. The success of the Vesuvius Challenge validates the motivating insight of metascience: it's not about how much we spend, it's about how we spend it. Most debate over science funding concerns a topline dollar amount. Should we double the budget of the NIH? Do we spend too much on Alzheimer's and too little on mRNA? Are we winning the R&D spending race with China? All of these questions implicitly assume a constant exchange rate between spending on science and scientific progress. The Vesuvius Challenge is an illustration of exactly the opposite. The prize pool for this challenge was a little more than a million dollars. Nat Friedman and friends probably spent more on top of that hiring organizers, building the website, etc. But still, this is pretty small in the context of academic grants. A million dollars donated to the NSF or NIH would have been forgotten, if it was noticed at all. 
Even a direct grant to Brent Seales, the computer science professor whose research laid the groundwork for reading the scrolls, probably wouldn't have induced a tenth as much progress as the prize pool did, at least not within 9 months. It would have been easy to spend ten times as much on this problem and get ten times less progress out the other end. The money invested in this research was of course necessary, but the spending was not sufficient: it needed to be paired with the right mechanism to work. The success of the challenge hinged on design choices at a level of detail beyond just a grants-vs-prizes dichotomy. Collaboration between contestants was essential for the development of the prize-winning software. The Discord server for the challenge was (and is) full of open-sourced tools and discoveries that helped everyone get closer to reading the scrolls. A single, large grand prize is enticing, but it's also exclusive. Only one submission can win, so the competition becomes more zero-sum and keeping secrets is more rewarding. Even if this larger prize had the same expected value to each contestant, it would not have created as much progress, because more research would be duplicated as less is shared. Nat Friedman and friends addressed this problem by creating several smaller progress prizes to reward open-source solutions to specific problems along the path to reading the scrolls, as well as open-ended prize pools for useful community contributions. They also added second-place and runner-up prizes. These prizes funded the creation of data-labeling tools that everyone used to train their models, and visualizations that helped everyone understand the structure of the scrolls. They also helped fund the contestants' investments of time and money in their submissions. Luke Farritor, one of the grand prize winners, used winnings from the First Letters prize to buy the computers that trained his prize-winning model. 
A larger grand prize can theoretically provide the same incentive, but it's a lot harder to buy computers with expected value! Nat and his team also decided to completely swit...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Devin, published by Zvi on March 18, 2024 on LessWrong. Introducing Devin Is the era of AI agents writing complex code systems without humans in the loop upon us? Cognition is calling Devin 'the first AI software engineer.' Here is a two-minute demo of Devin benchmarking LLM performance. Devin has its own web browser, which it uses to pull up documentation. Devin has its own code editor. Devin has its own command line. Devin uses debugging print statements and uses the log to fix bugs. Devin builds and deploys entire stylized websites without even being directly asked. What could possibly go wrong? Install this on your computer today. Padme. The Real Deal I would by default assume all demos were supremely cherry-picked. My only disagreement with Austen Allred's statement here is that this rule is not new: Austen Allred: New rule: If someone only shows their AI model in tightly controlled demo environments we all assume it's fake and doesn't work well yet But in this case Patrick Collison is a credible source and he says otherwise. Patrick Collison: These aren't just cherrypicked demos. Devin is, in my experience, very impressive in practice. Here we have Mckay Wrigley using it for half an hour. This does not feel like a cherry-picked example, although of course some amount of selection is there, if only via the publication effect. He is very much a maximum-acceleration guy, for whom everything is always great and the future is always bright, so calibrate for that, but still, yes, this seems like evidence Devin is for real. This article in Bloomberg from Ashlee Vance has further evidence. It is clear that Devin is a quantum leap over known past efforts in terms of its ability to execute complex multi-step tasks, to adapt on the fly, and to fix its mistakes or be adjusted and keep going. 
For once, when we wonder 'how did they do that, what was the big breakthrough that made this work,' the Cognition AI people are doing not only the safe but also the smart thing: they are not talking. They do have at least one serious rival, as Magic.ai has raised $100 million from the venture team of Daniel Gross and Nat Friedman to build 'a superhuman software engineer,' including training their own model. The article seems strangely interested in whether AI is 'a bubble' as opposed to this amazing new technology. This is one of those 'helps until it doesn't' situations in terms of jobs: vanosh: Seeing this is kinda scary. Like there is no way companies won't go for this instead of humans. Should I really have studied HR? Mckay Wrigley: Learn to code! It makes using Devin even more useful. Devin makes coding more valuable, until we hit so many coders that we are coding everything we need to be coding, or the AI no longer needs a coder in order to code. That is going to be a ways off. And once it happens, if you are not a coder, it is reasonable to ask yourself: What are you even doing? Plumbing while hoping for the best will probably not be a great strategy in that world. The Metric Devin can sometimes (13.8% of the time?!) do actual real jobs on Upwork with nothing but a prompt to 'figure it out.' Aravind Srinivas (CEO, Perplexity): This is the first demo of any agent, let alone coding, that seems to cross the threshold of what is human level and works reliably. It also tells us what is possible by combining LLMs and tree search algorithms: you want systems that can try plans, look at results, replan, and iterate till success. Congrats to Cognition Labs! Andres Gomez Sarmiento: Their results are even more impressive when you read the fine print. All the other models were guided whereas Devin was not. Amazing. Deedy: I know everyone's talking about it, but Devin's 13% on SWE Bench is actually incredible. 
Just take a look at a sample SWE Bench problem: this is a task for a human! Shout out to Car...
Here's what job interviewers are testing you for, according to economist Tyler Cowen. Economist Tyler Cowen argues that traditional interview methods are not effective in identifying the best candidates for a job, especially in creative roles. Candidates who are well-prepared often pass these interviews, but this only tests their preparation and not their abilities. To identify the best candidates, Cowen suggests that interviewers focus on being authentic and spontaneous in their interactions with candidates, instead of relying on pre-written questions. The interviewer should be trustworthy, Cowen argues, as it helps them to better evaluate the candidate's authenticity. Ultimately, allocating talent in better ways can contribute to economic growth, and a more thoughtful approach to interviews can help identify more talented individuals and elevate them to greater opportunities. About Tyler Cowen: Tyler is the Holbert L. Harris Chair of Economics at George Mason University and serves as chairman and general director of the Mercatus Center at George Mason University. He is co-author of the popular economics blog Marginal Revolution and co-founder of the online educational platform Marginal Revolution University. Tyler also writes a column for Bloomberg View, and he has contributed to The Wall Street Journal and Money. In 2011, Bloomberg Businessweek profiled Tyler as “America's Hottest Economist” after his e-book, The Great Stagnation, appeared twice on The New York Times e-book bestseller list. He graduated from George Mason University with a bachelor's degree in economics and earned a Ph.D. in economics from Harvard University. He also runs a podcast series called Conversations with Tyler. His latest book Talent: How to Identify Energizers, Creatives and Winners Around the World is co-authored with venture capitalist Daniel Gross. 
------------------------------------------------------------------------ About Big Think | Smarter Faster™ ► Big Think The leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century. ► Big Think+ Make your business smarter, faster: https://bigthink.com/plus/ Get Smarter, Faster With Interviews From The World's Biggest Thinkers. Follow This Podcast And Turn On The Notifications. Rate Us With 5 Stars. Share This Episode. --- Send in a voice message: https://podcasters.spotify.com/pod/show/bigthink/message Learn more about your ad choices. Visit megaphone.fm/adchoices
Economist Tyler Cowen explains why you should not hire the smartest job candidate. Here's what to look for instead. What do Aretha Franklin, Bruce Springsteen, Bob Dylan, and Stevie Ray Vaughan have in common? In addition to being phenomenal 20th-century musicians, all were scouted or had their careers furthered by the American record producer John Hammond. Finding talent is a talent in itself. And to the author and economics professor Tyler Cowen, it is a talent that gets neglected in many companies, whether due to biases, boring hiring practices, or a failure to think outside the box. As Cowen explains in this Big Think video, the way to go about finding exceptional talent is by searching the areas where the rest of the market is not looking. Chapters: 0:00 The talent problem 0:58 John Hammond: A legendary talent scout 2:06 The intelligence bias 3:37 Discover undervalued talents 5:56 The FOMO mentality: Learning from venture capitalists ------------------------------------------------------------------------ About Tyler Cowen: Tyler is the Holbert L. Harris Chair of Economics at George Mason University and serves as chairman and general director of the Mercatus Center at George Mason University. He is co-author of the popular economics blog Marginal Revolution and co-founder of the online educational platform Marginal Revolution University. Tyler also writes a column for Bloomberg View, and he has contributed to The Wall Street Journal and Money. In 2011, Bloomberg Businessweek profiled Tyler as “America's Hottest Economist” after his e-book, The Great Stagnation, appeared twice on The New York Times e-book bestseller list. He graduated from George Mason University with a bachelor's degree in economics and earned a Ph.D. in economics from Harvard University. He also runs a podcast series called Conversations with Tyler. His latest book Talent: How to Identify Energizers, Creatives and Winners Around the World is co-authored with venture capitalist Daniel Gross. 
----------------------------------------------------------------------- About Big Think | Smarter Faster™ ► Big Think The leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century. ► Big Think+ Make your business smarter, faster: https://bigthink.com/plus/ Get Smarter, Faster With Interviews From The World's Biggest Thinkers. Follow This Podcast And Turn On The Notifications. Rate Us With 5 Stars. Share This Episode. --- Send in a voice message: https://podcasters.spotify.com/pod/show/bigthink/message Learn more about your ad choices. Visit megaphone.fm/adchoices
This is Matt Reustle and today we are breaking down the giant of online dating. Even if you found love the old-fashioned way, you're likely familiar with the Match brands like Tinder and Hinge, amongst many others. To break down Match, I'm joined by George Hadjia, founder of Bristlemoon Capital. George goes through a background on this industry, what made Match who it is today, and all of the key debates that are driving this stock and all the commentary around it. Please enjoy this breakdown of Match Group. Interested in hiring from the Colossus Community? Click here. For the full show notes, transcript, and links to the best content to learn more, check out the episode page here. ----- This episode is brought to you by Tegus Converge — the first virtual event centered on the world of investor research. When twin brothers Tom and Mike Elnick realized that the research process for investors was broken, they founded Tegus to fix it. Now the people behind the most trusted research platform are bringing institutional investors together to investigate the state — and the future — of fundamental research. On November 8th, join industry luminaries like IGSB Founder Reece Duca and Daniel Gross, AI Expert, Entrepreneur and Investor, to dig into the latest research trends and breakthrough technologies shaping the investment landscape. Register today at tegus.com/register. ----- Business Breakdowns is a property of Colossus, LLC. For more episodes of Business Breakdowns, visit joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @JoinColossus | @patrick_oshag | @jspujji | @zbfuss | @ReustleMatt | @domcooke Show Notes (00:03:10) - (First question) - George's response since releasing his recent report on Match (00:04:55) - A general overview of the online dating market (00:10:55) - Comparing the different brands within the dating app industry (00:14:10) - The reason for the existence of so many niche brands in the market (00:18:55) - The different avenues for these brands when it comes to monetization (00:21:25) - The breakdown of revenue per customer and the different tiers dating apps offer (00:24:10) - Customer turnover due to the nature of dating and how the retention rate differs between the different apps (00:28:40) - A snapshot of how the industry has been growing over recent years (00:29:50) - Determining normalized earning profiles and margins when taking into account the lack of marketing spend historically (00:32:40) - The historical percentage of revenue that goes into marketing expenses (00:35:10) - How Bumble's advertising expenditure differs from Match Group brands (00:36:40) - Price competition between different brands and a look at Tinder's introduction of premium monetization tiers (00:39:20) - Dissecting top-line growth and the percentage due to recent price increases (00:40:10) - An overview of the business' capital allocation and how they intend to invest in the growth of the business (00:42:50) - The new management team's strategy and how it differs from the previous regimes (00:46:25) - Potential changes to Apple app store fees and how it could affect the business (00:51:10) - A forward outlook at where George expects the business to go in the coming years (00:54:40) - The key risks to the business moving forward (00:57:20) - The threat that Facebook poses in terms of its entry into the market (01:02:20) - The lessons learned from researching Match Learn more about your ad choices. Visit megaphone.fm/adchoices
Today's guest is Aswath Damodaran, who is joining us for a second time on Invest Like the Best. Aswath is a Professor of Finance at NYU's Stern School of Business and is often referred to as the Dean of Valuation for his clarity of thought on the subject. This conversation picks up where we left off 18 months ago and covers a wide range of topics from macro risks to Nvidia and the process of crafting a personal investment philosophy. Please enjoy this great discussion with Aswath Damodaran. Listen to Founders Podcast For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Tegus Converge — the first virtual event centered on the world of investor research. When twin brothers Tom and Mike Elnick realized that the research process for investors was broken, they founded Tegus to fix it. Now the people behind the most trusted research platform are bringing institutional investors together to investigate the state — and the future — of fundamental research. On November 8th, join industry luminaries like IGSB Founder Reece Duca and Daniel Gross, AI Expert, Entrepreneur and Investor, to dig into the latest research trends and breakthrough technologies shaping the investment landscape. Register today at tegus.com/register. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes (00:01:30) - (First question) - The general prevailing narrative in markets today (00:03:45) - The biggest business implications given the current market landscape (00:05:30) - Why it's bad to have risky founders with cheap capital trying experiments (00:07:38) - The natural rate of interest and how it's priced (00:08:40) - His updated view and thoughts on what's currently driving inflation (00:12:20) - Macro variables that most have his attention today (00:13:30) - The nature of the trouble that we're all in (00:17:30) - Whether or not international equities will become a place of interest (00:20:38) - The unique absolute basis of NVIDIA's growth (00:22:10) - His take on the new wave of AI in a broad sense (00:28:00) - Trying to value AI companies without tangible business models (00:31:30) - The parts of his own valuation process that are beyond automation (00:34:40) - Commonalities between investors who beat the benchmark (00:37:20) - Episodes on his own path that led him towards his investment philosophy (00:40:50) - How he goes about valuing non-traditional companies like sports franchises (00:45:30) - The world of entertainment and how he sees it as a business today (00:52:30) - The best business models he's ever seen (00:54:25) - What valuing Instacart taught him about online grocery shopping (00:57:40) - The most interesting company he valued over the last year (00:59:30) - A well known company he wouldn't bother valuing using his typical model (01:02:10) - How bank failures changed his thinking on our financial systems and banks as businesses writ large (01:05:00) - The changing attitude towards ESG investing (01:09:56) - Why there are still so many pools of capital that pursue an active strategy (01:10:51) - Being sick and tired of the conversation always revolving around central banks (01:14:38) - What he's most excited to look into over the coming year (01:16:38) - Major differences between a financial and an accounting balance sheet
This is Matt Reustle and today we are breaking down WEX, a big fish in a lesser-known pond. WEX is a leader in the fleet card market - they offer trucking businesses special credit cards which help secure advantaged rates on fuel, among many other things. This is a business with a long history as WEX is headquartered in Maine, and really came to life in the 1980s. To break down WEX, I'm joined by Mark Tomasovic from Energize Capital, a multiple-time guest on Business Breakdowns. We get into the history of this industry and how WEX found a very creative way to accelerate adoption within this market. Please enjoy this breakdown of WEX. Subscribe to Colossus's New Show: Art of Investing Buy a ticket to Patrick and David Senra's live show. Interested in hiring from the Colossus Community? Click here. For the full show notes, transcript, and links to the best content to learn more, check out the episode page here. ----- This episode is brought to you by Tegus Converge — the first virtual event centered on the world of investor research. When twin brothers Tom and Mike Elnick realized that the research process for investors was broken, they founded Tegus to fix it. Now the people behind the most trusted research platform are bringing institutional investors together to investigate the state — and the future — of fundamental research. On November 8th, join industry luminaries like IGSB Founder Reece Duca and Daniel Gross, AI Expert, Entrepreneur and Investor, to dig into the latest research trends and breakthrough technologies shaping the investment landscape. Register today at tegus.com/register. ----- Business Breakdowns is a property of Colossus, LLC. For more episodes of Business Breakdowns, visit joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @JoinColossus | @patrick_oshag | @jspujji | @zbfuss | @ReustleMatt | @domcooke Show Notes (00:02:57) - (First question) - An overview of what WEX is and what they do (00:03:50) - A summary of the market that WEX operates in (00:05:59) - The history of the company's creation (00:08:53) - The importance of signing up large gas companies rather than retail locations (00:11:33) - Value propositions behind providing fleet cards (00:12:53) - How the economic model works for the cards (00:13:48) - The percentage of spend equivalent to Visa or Mastercard (00:14:31) - The difficulty behind switching from one fleet card provider to another (00:17:05) - The role fuel prices play in the total revenue of the business (00:20:09) - Threats to consider on the supply end of the business (00:21:57) - Recharging at home and the process of receiving a credit (00:23:06) - Other businesses WEX is involved in (00:24:27) - A comparison between all of WEX's businesses and where they direct focus (00:25:13) - A look into their health and employee benefits line (00:26:39) - The overall financial profile from a revenue and margins standpoint (00:28:43) - How big players like Amazon or Walmart play a part in potential business (00:30:02) - The threat of Visa or Mastercard entering the same space (00:31:17) - Total amount of revenue generated from electric vehicle fleets (00:33:27) - Electric charging locations and the process of building these facilities (00:35:13) - Technology invested into creating faster charging stations (00:35:50) - An overall look at risks for the business (00:37:54) - Other parts of WEX that stand out (00:40:10) - Lessons learned from studying WEX Learn more about your ad choices. Visit megaphone.fm/adchoices
My guest this week is Strauss Zelnick, the CEO of leading game publisher Take-Two Interactive. Maybe most well-known for its hugely successful Grand Theft Auto game, Take-Two is a sophisticated, top-tier developer, publisher, and marketer of interactive entertainment that owns Rockstar Games and 2K. Strauss's passion for entertainment led him strong and fast into the industry as he worked his way from sales to CEO and transitioned from motion picture to gaming. Today we cover his approach to staying on the cutting edge of media development, unlocking talent and potential in those around you, and becoming the leader you were meant to be. His intensity and his standard for excellence come through clearly. Please enjoy my conversation with Strauss Zelnick. Subscribe to Colossus's New Show: Art of Investing Buy a ticket to Patrick and David Senra's live show. Listen to Founders Podcast For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. 
Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes (00:03:39) - (First question) - Why the entertainment media sector is so interesting (00:05:08) - Key inflection points in the history of media (00:09:17) - The role of pure content in businesses today (00:10:32) - Requirements for being a successful media business operator (00:12:03) - Strategies for working effectively with creatives (00:16:13) - How to cultivate a conducive environment for creatives (00:25:54) - The allure of collaborating with Take-Two (00:30:09) - Strauss' journey to becoming the chairman and CEO of Take-Two (00:37:42) - Strategies for reducing costs in business (00:41:16) - Embracing diversity in the video game industry (00:43:41) - Identifying high-quality intellectual property (IP) (00:46:04) - The inspiration behind Strauss' book Becoming Ageless: The Four Secrets To Looking and Feeling Younger Than Ever (00:51:12) - Influential leaders for learning and growth (00:55:45) - The impact of technology and the rise of new platforms (00:57:00) - Common misconceptions about Take-Two (00:59:42) - Unique attributes of Take-Two projects (01:00:36) - Defining moments in the history of the business (01:04:19) - Anticipating the future direction of Take-Two (01:13:41) - Sources of motivation and inspiration (01:15:29) - The concept and value of a masterpiece (01:11:08) - Paramount values as a parent (01:11:38) - The kindest thing anyone has ever done for Strauss
The Murder Sheet has an exclusive report touching upon an infamous international case.In 1999, an American nurse named Ted Maher was accused of setting fire to a Monte Carlo penthouse and murdering billionaire Edmond Safra and a colleague named Vivian Torrente.In 2023, under the new name Jon Green, the same man was charged with criminal solicitation to commit murder.-----------------------------------------------------------------------------------------------------------------The Murder Sheet participates in the Amazon Associate program and earns money from qualifying purchases.Reporting on Edmond Safra:The Los Angeles Times's reporting on American Express:https://www.latimes.com/archives/la-xpm-1992-04-28-fi-1108-story.htmlCoverage from Forbes on the Russia-related scandal: https://www.forbes.com/2007/05/17/bony-russia-lawsuit-biz-services-cx_lm_0517suit.html?sh=4dcae2bd21c1The Jewish Week's feature on Safra: https://www.hsje.org/Whoswho/Edmund_Safra/we_have_lost_our_crown.htmlThe New York Post's coverage of Safra's reputation: https://nypost.com/1999/12/14/safras-sleuth-pi-joe-mullen-saved-the-reputation-of-the-late-edmond-safra-and-has-cracked-many-a-case-for-this-decades-famous-and-infamous/The Washington Post's coverage of the American Express incident involving Safra: https://www.washingtonpost.com/archive/business/1989/07/29/american-express-offers-4-million-and-apology/aafa682c-f909-420a-8cba-64c1171b8754/Coverage from The Times of Israel on Edmond Safra: https://www.timesofisrael.com/new-biography-probes-into-mysterious-backstory-of-billionaire-banker-edmond-j-safra/“A Banker's Journey: How Edmond J. 
Safra Built a Global Financial Empire” by Daniel Gross: https://www.amazon.com/Bankers-Journey-Edmond-Global-Financial/dp/1635767857?&_encoding=UTF8&tag=murdersheet-20&linkCode=ur2&linkId=e421f9aad81731bd8c533450c2d33219&camp=1789&creative=9325"Vendetta: American Express and the Smearing of Edmond Safra" by Bryan Burrough: https://www.amazon.com/Vendetta-American-Express-Smearing-Edmond/dp/0060167599?&_encoding=UTF8&tag=murdersheet-20&linkCode=ur2&linkId=daa163a4a68a59e1be75830f6856bef4&camp=1789&creative=9325“Gilded Lily: Lily Safra: The Making of One of the World's Wealthiest Widows” by Isabel Vincent: https://www.amazon.com/GILDED-LILY-Isabel-Vincent/dp/0061133949?&_encoding=UTF8&tag=murdersheet-20&linkCode=ur2&linkId=703a51336f36524e9ccc1178740241e4&camp=1789&creative=9325Reporting on Ted Maher:The New York Times's story on the 1999 nursing strike: https://www.nytimes.com/1999/08/05/nyregion/nurses-plan-strike-monday-at-columbia-presbyterian.htmlThe New York Times's story on how the 1999 nursing strike was called off: https://www.nytimes.com/1999/08/10/nyregion/tentative-deal-averts-strike-by-nurses.htmlTime's reporting on Ted Maher: https://content.time.com/time/subscriber/article/0,33009,992877,00.htmlSeacoastonline's report on Heidi Maher: https://www.seacoastonline.com/story/news/2002/11/21/praying-for-murder-acquittal/51281826007/Coverage from the New York Post on Ted Maher's release:https://nypost.com/2007/08/17/back-from-dead/The New York Post on Ted Maher's former wife's lawsuit against the Safra estate:https://nypost.com/2003/05/27/60m-safra-suit-killers-wife-hits-widow-over-police-grilling/The New York Post on Lily Safra's reaction to Ted Maher's release: https://nypost.com/2007/08/18/widows-pique-at-killers-release/The New York Post's coverage of Ted Maher's innocence claims: https://nypost.com/2007/10/14/tycoons-killer-my-frame-up/"Framed in Monte Carlo: How I Was Wrongfully Convicted for a Billionaire's Fiery Death” by Ted Maher, Bill Hayes, and 
Jennifer Thomas: https://www.amazon.com/Framed-Monte-Carlo-Prison-Murder/dp/1510755861?&_encoding=UTF8&tag=murdersheet-20&linkCode=ur2&linkId=fddac60f9dea02c78ede9cf2a644bf01&camp=1789&creative=9325Coverage of the fire and homicides in Monaco:The Washington Post's coverage of the 1999 murders: https://www.washingtonpost.com/wp-srv/pmextra/dec99/6/safra.htmThe NBC special on the case, with quotes from Torrente's daughter: https://www.nbcnews.com/id/wbna23767683The Guardian's report on the 1999 murders: https://www.theguardian.com/theobserver/2000/oct/29/features.magazine47Another Guardian report on the 1999 murders:https://www.theguardian.com/world/1999/dec/07/jonhenleyYet another Guardian report on the 1999 murders:https://www.theguardian.com/world/1999/dec/05/paulwebster.theobserverThe New York Post article on the 1999 murders:https://nypost.com/2002/11/18/safra-choke-twist/Dominick Dunne for Vanity Fair on the killings: https://www.vanityfair.com/culture/2000/12/dunne200012MSNBC on the 1999 murders: https://archive.org/details/MSNBCW_20151213_000000_Mystery_of_the_Billionaire_BankerCNN on the 1999 murders:http://www.cnn.com/2002/LAW/08/12/ctv.monaco.trial/index.htmlThe Wall Street Journal on the 1999 murders: https://www.wsj.com/articles/SB94441779970529365Court TV's timeline of the 1999 murders: https://web.archive.org/web/20080204074511/http://www.courttv.com/trials/monaco/chronology.htmlNewsweek's coverage of the 1999 murders: https://www.newsweek.com/bad-bet-monte-carlo-151519Coverage from CBS of Ted Maher's trial: https://www.cbsnews.com/news/part-ii-an-american-on-trial/Additional coverage from CBS of Ted Maher's trial: https://www.cbsnews.com/news/murder-in-monaco-an-american-on-trial/A report from the Times on the trial of Ted Maher: https://www.thetimes.co.uk/article/monaco-police-in-dock-for-billionaire-s-death-mk7v5nrb8crA report from The Telegraph on the trial of Ted Maher: 
https://www.telegraph.co.uk/news/worldnews/europe/monaco/1414023/Gilded-Lily-faces-her-husbands-killer.htmlCoverage of the dognapping incident involving Jon Green:KRQE's coverage of the dognapping: https://www.krqe.com/news/new-mexico/carlsbad-dognapping-man-with-bizarre-past-accused-of-taking-ex-wifes-dogs/KRQE on the return of the missing dogs: https://www.krqe.com/news/stolen-search-and-rescue-dogs-reunited-with-carlsbad-woman/Fox San Antonio's story on the rescue of the rescue dogs: https://foxsanantonio.com/news/local/dognapping-suspect-wanted-on-multiple-charges-arrested-after-extensive-manhuntNBC's coverage of Jon Green's legal issues: https://www.nbcnews.com/dateline/man-mysterious-past-facing-multiple-charges-run-after-dognapping-carlsbad-n1295803The Carlsbad Current Argus on the missing dogs: https://www.currentargus.com/story/news/crime/2022/06/15/missing-carlsbad-search-and-rescue-dogs-found-safe-in-texas/65361052007/A feature from the American Veterinary Medical Association mentioning Dr. Kim Lark:https://www.avma.org/javma-news/2011-09-15/honoring-dogs-911Send tips to murdersheet@gmail.com.The Murder Sheet is a production of Mystery Sheet LLC .See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Sam Altman, Peak XV, and Daniel Gross and Nat Friedman's AI grant are among backers of an AI startup, founded by two teenagers, that's aiming to assist businesses in automating numerous workflows in previously unexplored ways.
Let's talk through some of the challenges that enterprises will have with AI - from data location, to GPU location, to model biases, to data privacy, to training vs. execution. SHOW: 748. CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw. CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS". SHOW SPONSORS: Datadog Security Solution: Modern Monitoring and Security. Start investigating security threats before they affect your customers with a free 14 day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt. Find "Breaking Analysis Podcast with Dave Vellante" on Apple, Google and Spotify. Keep up to date with Enterprise Tech with theCUBE. AWS Insiders is an edgy, entertaining podcast about the services and future of cloud computing at AWS. Listen to AWS Insiders in your favorite podcast player. Cloudfix Homepage. SHOW NOTES: An Interview with Daniel Gross and Nat Friedman on the AI Hype Cycle (Stratechery). ARE THERE EXPECTATIONS OF "OLD AI" vs. "NEW AI"? Are business leaders thinking about unique AI applications and use-cases, or just "ChatGPT-everything"? Formal data scientists vs. citizen data scientists? Will this just be an application, or have an impact on every aspect of a business and the IT industry? WILL ENTERPRISE AI BE DIFFERENT THAN CONSUMER AI? The industry is actively working on a broad set of models that can be used for different use-cases. It's commonly accepted that AI models need to be trained near the sources of data. Many businesses are concerned about including their company data into these public models. Many businesses will want to deploy tuned models and applications in data center, public cloud and edge environments. New AI applications will be required to meet security, regulatory and compliance standards, like other business applications. FEEDBACK? Email: show at the cloudcast dot net. Twitter: @thecloudcastnet
Lemon.io - Hire pre-vetted remote developers, get 15% off your first 4 weeks of developer time at https://Lemon.io/twist OpenPhone. Create business phone numbers for you and your team that work through an app on your smartphone or desktop. TWiST listeners can get an extra 20% off any plan for your first 6 months at openphone.com/twist VEED makes it super easy for anyone (yes, you) to create great video. Filled with amazing features like templates, auto subtitles, text formatting, auto-resizing, a full suite of AI tools, and much more, VEED gives you the tools to engage your audience on any platform. Head to VEED.io to start creating incredible video content in minutes. * Today's show: Jason breaks down investors and companies using the GPU shortage as leverage to invest in AI startups (1:33) before discussing Microsoft's run-in with EU regulators over bundling Teams into its Office suite (28:53). They wrap on the FTC's loss in its quest to stop Microsoft's Activision Blizzard acquisition, and Lina Khan's track record as FTC chair (37:58). 
* Time stamps: (0:00) Nick joins Jason (1:33) Nvidia's GPU leverage (9:48) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist (11:07) CoreWeave's pivot & the pros and cons of WFH (19:28) OpenPhone - Get 20% off your first six months at https://openphone.com/twist (20:54) Daniel Gross and Nat Friedman's GPU play (27:24) Veed - Sign up and engage your audience on any platform at https://www.veed.io/avatars?utm_campaign=TWIS&utm_medium=YT&utm_source=MKT (28:53) Microsoft's run-in with EU regulators over bundling (37:58) FTC loses cases against Microsoft and Activision Blizzard merger (46:37) Lina Khan's track record as the head of the FTC * Follow Nick: https://twitter.com/nickcalacanis * Read LAUNCH Fund 4 Deal Memo: https://www.launch.co/four Apply for Funding: https://www.launch.co/apply Buy ANGEL: https://www.angelthebook.com Great recent interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland, PrayingForExits, Jenny Lefcourt Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow Jason: Twitter: https://twitter.com/jason Instagram: https://www.instagram.com/jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Follow TWiST: Substack: https://twistartups.substack.com Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin * Subscribe to the Founder University Podcast: https://www.founder.university/podcast
Thanks to the over 42,000 latent space explorers who checked out our Replit episode! We are hosting/attending a couple more events in SF and NYC this month. See you if you're in town! Lexica.art was introduced to the world 24 hours after the release of Stable Diffusion as a search engine for prompts, gaining instant product-market fit as a world discovering generative AI also found they needed to learn prompting by example. Lexica is now 8 months old, serving 5B image searches/day, and just shipped V3 of Lexica Aperture, their own text-to-image model! Sharif Shameem breaks his podcast hiatus with us for an exclusive interview covering his journey building everything with AI! The conversation is nominally about Sharif's journey through his three startups VectorDash, Debuild, and now Lexica, but really a deeper introspection into what it takes to be a top founder in the fastest moving tech startup scene (possibly ever) of AI. We hope you enjoy this conversation as much as we did! Full transcript is below the fold. 
We would really appreciate it if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice! Timestamps* [00:00] Introducing Sharif* [02:00] VectorDash* [05:00] The GPT3 Moment and Building Debuild* [09:00] Stable Diffusion and Lexica* [11:00] Lexica's Launch & How it Works* [15:00] Being Chronically Early* [16:00] From Search to Custom Models* [17:00] AI Grant Learnings* [19:30] The Text to Image Illuminati?* [20:30] How to Learn to Train Models* [24:00] The future of Agents and Human Intervention* [29:30] GPT4 and Multimodality* [33:30] Sharif's Startup Manual* [38:30] Lexica Aperture V1/2/3* [40:00] Request for AI Startup - LLM Tools* [41:00] Sequencing your Genome* [42:00] Believe in Doing Great Things* [44:30] Lightning Round. Show Notes* Sharif's website, Twitter, LinkedIn* VectorDash (5x cheaper than AWS)* Debuild Insider, Fast company, MIT review, tweet, tweet* Lexica* Introducing Lexica* Lexica Stats* Aug: "God mode" search* Sep: Lexica API * Sept: Search engine with CLIP * Sept: Reverse image search* Nov: teasing Aperture* Dec: Aperture v1* Dec - Aperture v2* Jan 2023 - Outpainting* Apr 2023 - Aperture v3* Same.energy* AI Grant* Sharif on Agents: prescient Airpods tweet, Reflection* MiniGPT4 - Sharif on Multimodality* Sharif Startup Manual* Sharif Future* 23andMe Genome Sequencing Tool: Promethease* Lightning Round* Fave AI Product: Cursor.so. Swyx ChatGPT Menubar App.* Acceleration: Multimodality of GPT4. Animated Drawings* Request for Startup: Tools for LLMs, Brex for GPT Agents* Message: Build Weird Ideas! Transcript. Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners. I'm joined by my co-host Swyx, writer and editor of Latent Space. And today we have Sharif Shameem. Welcome to the studio. Sharif: Awesome. Thanks for the invite. Swyx: Really glad to have you. 
[00:00] Introducing SharifSwyx: You've been a dream guest, actually, since we started drafting guest lists for this pod. So glad we could finally make this happen. So what I like to do is usually introduce people, offer their LinkedIn, and then prompt you for what's not on your LinkedIn. And to get a little bit of the person behind the awesome projects. So you graduated University of Maryland in CS. Sharif: So I actually didn't graduate, but I did study. Swyx: You did not graduate. You dropped out. Sharif: I did drop out. Swyx: What was the decision behind dropping out? Sharif: So first of all, I wasn't doing too well in any of my classes. I was working on a side project that took up most of my time. Then I spoke to this guy who ended up being one of our investors. And he was like, actually, I ended up dropping out. I did YC. And my company didn't end up working out. And I returned to school and graduated along with my friends. I was like, oh, it's actually a reversible decision. And that was like that. And then I read this book called The Case Against Education by Bryan Caplan. So those two things kind of sealed the deal for me on dropping out. Swyx: Are you still on hiatus? Could you still theoretically go back? Sharif: Theoretically, probably. Yeah. Still on indefinite leave. Swyx: Then you did some work at MITRE? Sharif: MITRE, yeah. So they're lesser known. So they're technically like an FFRDC, a federally funded research and development center. So they're kind of like a large government contractor, but nonprofit. Yeah, I did some computer vision work there as well. [02:00] VectorDashSwyx: But it seems like you always have an independent founder bone in you. Because then you started working on VectorDash, which is distributed GPUs. Sharif: Yes. Yeah. So VectorDash was a really fun project that we ended up working on for a while. So while I was at MITRE, I had a friend who was mining Ethereum. This was, I think, 2016 or 2017. Oh my God. Yeah. 
And he was mining on his NVIDIA 1080Ti, making around like five or six dollars a day. And I was trying to train a character recurrent neural network, like a character RNN on my iMessage text messages to make it like a chatbot. Because I was just curious if I could do it. Because iMessage stores all your past messages from years ago in a SQL database, which is pretty nifty. But I wanted to train it. And I needed a GPU. And it was, I think, $60 to $80 for a T4 on AWS, which is really slow compared to a 1080Ti. If you normalize the cost and performance versus the 1080Ti when someone's mining Ethereum, it's like a 20x difference. So I was like, hey, his name was Alex. Alex, I'll give you like 10 bucks if you let me borrow your 1080Ti for a week. I'll give you 10 bucks per day. And it was like 70 bucks. And I used it to train my model. And it worked great. The model was really bad, but the whole trade worked really great. I got a really high performance GPU to train my model on. He got much more than he was making by mining Ethereum. So we had this idea. I was like, hey, what if we built this marketplace where people could rent their GPUs where they're mining cryptocurrency and machine learning researchers could just rent them out and pay a lot cheaper than they would pay AWS. And it worked pretty well. We launched in a few months. We had over 120,000 NVIDIA GPUs on the platform. And then we were the cheapest GPU cloud provider for like a solid year or so. You could rent a pretty solid GPU for like 20 cents an hour. And cryptocurrency miners were making more than they would make mining crypto because this was after the Ethereum crash. And yeah, it was pretty cool. It just turns out that a lot of our customers were college students and researchers who didn't have much money. And they weren't necessarily the best customers to have as a business. 
Startups had a ton of credits and larger companies were like, actually, we don't really trust you with our data, which makes sense. Yeah, we ended up pivoting that to becoming a cloud GPU provider for video games. So we would stream games from our GPUs. Oftentimes, like many were located just a few blocks away from you because we had the lowest latency of any cloud GPU provider, even lower than like AWS and sometimes Cloudflare. And we decided to build a cloud gaming platform where you could pretty much play your own games on the GPU and then stream it back to your Mac or PC. Swyx: So Stadia before Stadia. Sharif: Yeah, Stadia before Stadia. It's like a year or so before Stadia. Swyx: Wow. Weren't you jealous of, I mean, I don't know, it sounds like Stadia could have bought you or Google could have bought you for Stadia and that never happened? Sharif: It never happened. Yeah, it didn't end up working out for a few reasons. The biggest thing was internet bandwidth. So a lot of the hosts, the GPU hosts had lots of GPUs, but average upload bandwidth in the United States is only 35 megabits per second, I think. And like a 4K stream needs like a minimum of 15 to 20 megabits per second. So you could really only utilize one of those GPUs, even if they had like 60 or 100. [05:00] The GPT3 Moment and Building DebuildSwyx: And then you went to Debuild in July 2020, is the date that I have. I'm actually kind of just curious, like what was your GPT-3 aha moment? When were you like GPT-3-pilled? Sharif: Okay, so I first heard about it because I was also working on another chatbot. So this was like after, like everything ties back to this chatbot I'm trying to make. This was after working on VectorDash. I was just like hacking on random projects. I wanted to make the chatbot using not really GPT-2, but rather just like it would be pre-programmed. It was pretty much you would give it a goal and then it would ask you throughout the week how much progress you're making toward that goal. 
So take your unstructured response, usually a reply to a text message, and then it would like, plot it for you in like a table and you could see your progress over time. It could be for running or tracking calories. But I wanted to use GPT-3 to make it seem more natural because I remember someone on Bookface, which is still YC's internal forum. They posted and they were like, OpenAI just released AGI and it's GPT-3. I asked it like a bunch of logic puzzles and it solved them all perfectly. And I was like, what? How's no one else talking about this? Like this is either like the greatest thing ever that everyone is missing or like it's not that good. So like I tweeted out if anyone could get me access to it. A few hours later, Greg Brockman responded. Swyx: He is everywhere. Sharif: He's great. Yeah, he's on top of things. And yeah, by that afternoon, I was like messing around with the API and I was like, wow, this is incredible. You could chat with fake people or people that have passed away. You could like, I remember the first conversation I did was this is a chat with Steve Jobs and it was like, interviewer, hi. What are you up to today on Steve? And then like you could talk to Steve Jobs and it was somewhat plausible. Oh, the thing that really blew my mind was I tried to generate code with it. So I'd write the function for a JavaScript header or the header for a JavaScript function. And it would complete the rest of the function. I was like, whoa, does this code actually work? Like I copied it and ran it and it worked. And I tried it again. I gave more complex things and like I kind of understood where it would break, which was like if it was like something, like if it was something you couldn't easily describe in a sentence and like contain all the logic for in a single sentence. So I wanted to build a way where I could visually test whether these functions were actually working. 
And what I was doing was like I was generating the code in the playground, copying it into my VS Code editor, running it and then reloading the React development page. And I was like, okay, cool. That works. So I was like, wait, let me just put this all in like the same page so I can just compile in the browser, run it in the browser and then submit it to the API in the browser as well. So I did that. And it was really just like a simple loop where you just type in the prompt. It would generate the code and then compile it directly in the browser. And it showed you the response. And I did this for like very basic JSX React components. I mean, it worked. It was pretty mind blowing. I remember staying up all night, like working on it. And it was like the coolest thing I'd ever worked on at the time so far. Yeah. And it was like so mind blowing that no one was talking about this whole GPT-3 thing. I was like, why is this not on everyone's minds? So I recorded a quick 30 second demo and I posted on Twitter and like I go to bed after staying awake for like 20 hours straight. When I wake up the next morning and I had like 20,000 likes and like 100,000 people had viewed it. I was like, oh, this is so cool. And then I just kept putting demos out for like the next week. And yeah, that was like my GPT-3 spark moment. Swyx: And you got featured in like Fast Company, MIT Tech Review, you know, a bunch of stuff, right? Sharif: Yeah. Yeah. I think a lot of it was just like the API had been there for like a month prior already. Swyx: Not everyone had access. Sharif: That's true. Not everyone had access. Swyx: So you just had the gumption to tweet it out. And obviously, Greg, you know, on top of things as always. Sharif: Yeah. Yeah. I think it also makes a lot of sense when you kind of share things in a way that's easily consumable for people to understand. Whereas if you had shown a terminal screenshot of it generating code, that'd be a lot less compelling. 
But seeing it get rendered and compiled directly in front of you is a lot more interesting. There's also that human aspect to it where you want to relate things to the end user; no one really cares about evals when you can create a much more compelling demo showing how it does on certain tasks. [09:00] Stable Diffusion and LexicaSwyx: Okay. We'll round it out soon. But in 2022, you moved from Debuild to Lexica, which was the search engine. I assume this was inspired by Stable Diffusion, but can I get the history there a little bit? Sharif: Yeah. So I was still working on Debuild. We were growing at like a modest pace and I was in the stable... Swyx: I was on the signup list. I never got off. Sharif: Oh yeah. Well, we'll get you off. It's not getting many updates anymore, but yeah, I was in the Stable Diffusion Discord and I was in it for like many hours a day. It was just like the most exciting thing I'd ever done in a Discord. It was so cool. Like people were generating so many images, but I didn't really know how to write prompts and people were like writing really complicated things. They would be like, "a modern home, trending on ArtStation, by Greg Rutkowski, 4k, Unreal Engine." It's like, there's no way that actually makes the images look better. But everyone was just kind of copying everyone else's prompts and like changing like the first few words. Swyx: Yeah. Yeah. Sharif: So I was like using the Discord search bar and it was really bad because it showed like five images at a time. And I was like, you know what? I could build a much better interface for this. So I ended up scraping the entire Discord. It was like 10 million images. I put them in a database and I just pretty much built a very basic search engine where you could just type a word and then it returned all the prompts that had that word. And I built the entire website for it in like about two days. 
And I shipped it the day after the Stable Diffusion weights were open sourced. So about 24 hours later and it kind of took off in a way that I never would have expected. Like I thought it'd be this cool utility that like hardcore Stable Diffusion users would find useful. But it turns out that almost anyone who mentioned Stable Diffusion would also kind of mention Lexica in conjunction with it. I think it's because it was like it captured the zeitgeist in an easy to share way where it's like this URL and there's this gallery and you can search. Whereas running the model locally was a lot harder. You'd have to like deploy it on your own GPU and like set up your own environment and like do all that stuff. Swyx: Oh, my takeaway. I have two more to add to the reasons why Lexica worked at the time. One is lower latency is all you need. So in other words, instead of waiting a minute for your image, you could just search and find stuff that other people have done. That's good. And then two is everyone knew how to search already, but people didn't know how to prompt. So you were the bridge. Sharif: That's true. Yeah. You would get a lot better looking images by searching for a one word prompt versus prompting with that one word. Yeah. Swyx: Yeah. That is interesting. [11:00] Lexica's Explosion at LaunchAlessio: The numbers kind of speak for themselves, right? Like 24 hours post launch, 51,000 queries, like 2.2 terabytes in bandwidth. Going back to the bandwidth problem that you had before, like you would have definitely run into that. Day two, you doubled that. It's like 111,000 queries, four and a half terabytes in bandwidth, 22 million images served. So it's pretty crazy. Sharif: Yeah. I think we're, we're doing like over 5 billion images served per month now. It's like, yeah, that's, it's pretty crazy how much things have changed since then. Swyx: Yeah. I'm still showing people like today, even today, you know, it's been a few months now. 
This is where you start to learn image prompting, because they don't know it. Sharif: Yeah, it is interesting. It's weird because I didn't really think it would be a company. I thought it would just be a cool utility or a cool tool that I would use for myself. I really was just building it for myself, just because I didn't want to use the Discord search bar. But yeah, it was interesting that a lot of other people found it pretty useful as well.

[11:00] How Lexica Works

Swyx: There's a lot of things that you released in a short amount of time. The God Mode search was obviously the first thing, I guess. Maybe to talk about some of the underlying technology: you're using CLIP to go from image to description and then let people search it. Maybe talk a little bit about what it takes to actually make the search magic happen. Sharif: Yeah. So the original search was just using Postgres full text search, and it would only search the text contents of the prompt. But I was inspired by another website called Same Energy, which is a visual search engine. It's really cool. Do you know what happened to that guy? Swyx: I don't. He released it and then he disappeared from the internet. Sharif: I don't know what happened to him, but I'm sure he's working on something really cool. He also worked on Tabnine, which was like the very first version of Copilot, or even before Copilot was Copilot. But yeah, inspired by that, I thought being able to search images by their semantics, the contents of the image, was really interesting. So I pretty much decided to create a search index on the CLIP image embeddings of all the images. And when you would search, we would just do KNN search on that image embedding index. I mean, we had way too many embeddings to store in a regular database.
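The semantic search Sharif describes, indexing CLIP image embeddings and doing K-nearest-neighbor lookups, boils down to a similarity scan. Here is a minimal sketch; the toy three-dimensional vectors and file names stand in for real CLIP embeddings, which have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def knn_search(query_embedding, index, k=3):
    """Return the ids of the k images most similar to the query embedding."""
    scored = [(cosine_similarity(query_embedding, emb), image_id)
              for image_id, emb in index.items()]
    scored.sort(reverse=True)
    return [image_id for _, image_id in scored[:k]]

# Toy "index" of image embeddings; in production this is millions of CLIP vectors.
index = {
    "sunset.png": [0.9, 0.1, 0.0],
    "castle.png": [0.1, 0.9, 0.2],
    "beach.png":  [0.8, 0.2, 0.1],
}
print(knn_search([1.0, 0.0, 0.0], index, k=2))  # ['sunset.png', 'beach.png']
```

At tens of millions of vectors, this linear scan is exactly what an approximate-nearest-neighbor library replaces, which is where FAISS comes in below.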
So we ended up using FAISS, which is a Facebook library for really fast KNN and embedding search. That was pretty fun to set up. It can run entirely on CPUs, which is really cool; it's super efficient. You compute the embeddings on GPUs, but you can serve it all on an eight-core server and it's really, really fast. Once we released the semantic search on the CLIP embeddings, people were using the search way more. And you could do other cool things. You could do similar image search, where if you found a specific image you liked, you could upload it and it would show you similar images as well. Swyx: And then right after that, you raised your seed money from AI Grant, Nat Friedman, and Daniel Gross. Sharif: Yeah, we raised about $5 million from Daniel Gross. And then we also participated in AI Grant. That was pretty cool. That was kind of the inflection point. Not much before that point, Lexica was still a side project. And I told myself that I would focus on it full time, or I'd consider focusing on it full time, if we broke a million users. I was like, oh, that's going to be years away for sure. And then we ended up doing that in the first week and a half. I was like, okay, there's something here. Debuild was growing pretty slowly and pretty linearly, and Lexica was just this thing that kept going up and up and up. And I was so confused. I was like, man, people really like looking at pictures. This is crazy. And then we decided to pivot the entire company and focus on Lexica full time, and we raised our seed round.

[15:00] Being Chronically Early

Swyx: One thing that you casually dropped, kind of a slip: you said you were working on Lexica before the launch of Stable Diffusion, such that you were able to launch Lexica one day after Stable Diffusion. Sharif: Yeah. Swyx: How did you get so early into Stable Diffusion?
Because I didn't hear about it. Sharif: Oh, that's a good question. Where did I first hear about Stable Diffusion? I'm not entirely sure. It must've been somewhere on Twitter or something. Swyx: That changed your life. Sharif: Yeah, it was great. And I got into the Discord because I'd used DALL·E 2 before, but there were a lot of restrictions in place. You couldn't generate human faces at the time; you can do that now, but when I first got access, you couldn't do any faces. The list of adjectives you couldn't use was quite long. I had a friend from Pakistan, and it couldn't generate anything with the word Pakistan in it for some reason. But Stable Diffusion was kind of the exact opposite, where there were very, very few rules. So that was really fun and interesting, especially seeing the chaos of a bunch of other people also using it right in front of you. That was just so much fun, and I just wanted to do something with it. Swyx: Oh, well, I was just trying to get tips on how to be early on things, because you're pretty consistently early to things, right? You were Stadia before Stadia. Sharif: Well, Stadia is kind of shut down now, so I don't know if being early to that was a good one. Swyx: I think just being consistently early to things that have a lot of potential, one of them is going to work out, and that's how you got Lexica.

[16:00] From Search to Custom Models

Alessio: How did you decide to go from search to running your own models for generation? Sharif: That's a good question. We realized that the way people were using Lexica was they would have Lexica open in one tab, and then in another tab they'd have a Stable Diffusion interface. It would be either a Discord or a locally run interface like the AUTOMATIC1111 web UI, or something else.
I would watch people use it, and they would alt-tab back and forth between Lexica and their other UI. They would scroll through Lexica, click on an image, copy the prompt, and then paste it and maybe change a word or two. And I was like, this should really just be all within Lexica. It'd be so cool if you could just click a button in Lexica, get an editor, and generate your images. I found myself also doing the alt-tab thing, and it was really frustrating. I was like, man, this is kind of tedious. I really wish it was much simpler. So we just built generations directly within Lexica. I don't remember exactly when we first launched it; I think it was November or December. And yeah, people love generating directly within it.

[17:00] AI Grant Learnings

Swyx: I was also thinking that this was coming out of AI Grant, which was a very special program. I was just wondering if you learned anything from that special week where everyone was in town. Sharif: Yeah, that was a great week. I loved it. Swyx: Bring us in a little bit, because it was awesome there. Sharif: Oh, sure. Yeah, it was really, really cool. All the founders in AI Grant are fantastic people. I think the main takeaway from AI Grant was: you have this massive overhang in compute, or in capabilities in terms of these latest AI models, but to the average person, there really aren't that many products that are that cool or useful to them. The latest one that hit the zeitgeist was ChatGPT, which used arguably the same GPT-3 model plus RLHF. But you could have arguably built a decent ChatGPT product just using the original GPT-3 model, and no one really did it. Now, there were some restrictions in place, and OpenAI slowly relaxed them over the months and years after they released the original API.
But the core premise behind AI Grant is that there are way more capabilities than there are products. So focus on building really compelling products and getting people to use them. Focus less on things like hitting state of the art on evals, and more on getting users to use something. Swyx: Make something people want. Sharif: Exactly. Host: Yeah, we did an episode on LLM benchmarks, and we talked about how the benchmarks constrain what people work on, because if your model is not going to do well on the well-known benchmarks, it's not going to get as much interest and funding. So going at it from a product lens is cool.

[19:30] The Text to Image Illuminati?

Swyx: My hypothesis, when I was seeing the sequence of events for AI Grant and then for Lexica Aperture, was that you had some kind of magical dinner with Emad and David Holz, and they taught you the secrets of training your own model. Is that how it happens? Sharif: No, there's no secret dinner. The Illuminati of text to image. We did not have a meeting. I mean, even if we did, I wouldn't tell you. But it really boils down to just having good data. If you think about diffusion models, really the only thing they do is learn a distribution of data. So if you have high quality data, it will learn that high quality distribution. Or if you have low quality data, it will learn to generate images that look like they're from that distribution. So really it boils down to the data: the amount of data you have and the quality of that data. Which means a lot of the work in training high quality models, at least diffusion models, is not really in the model architecture, but rather in filtering the data in a way that makes sense. So for Lexica, we do a lot of aesthetic scoring on images, and we use the rankings we get from our website, because we get tens of millions of people visiting it every month. So we can capture a lot of rankings.
Oh, this person liked this image when they saw this one right next to it; therefore, they probably preferred this one over that. You can do pairwise ranking to rank images and then compute Elo scores. You can also train aesthetic models that learn to classify an image: whether or not someone will like it, or rank it on a scale of one to ten, for example. So we mostly use the traffic we get from Lexica to filter our datasets and train better aesthetic models.

[20:30] How to Learn to Train Models

Swyx: You hadn't been a machine learning engineer before; you'd been more of an infrastructure guy. At Debuild, you were more of a prompt engineer with a bit of web design. This was the first time that you were basically training your own model. What was the ramp up like? Not to give away any secret sauce, but I think a lot of people who are traditional software engineers feel a lot of, I don't know, fear when encountering these kinds of domains. Sharif: Yeah, I think it makes a lot of sense. And to be fair, I didn't have much experience training massive models at this scale before I did it. A lot of times it's really just like when you're first learning to program: you take the problem you're having, Google it, and go through the Stack Overflow posts, and you figure it out. It might take you a lot longer than someone who's experienced, but ultimately you will get to the answer. I think there are enough resources out there where it's possible to learn how to do these things, even if it's just reading through GitHub issues for relevant models. Swyx: Oh God. Sharif: Yeah. You might be slower, but it's definitely still possible. And there are really great courses out there. The Fast.ai course is fantastic. There's the Deep Learning book, which is great for fundamentals.
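Stepping back to the pairwise ranking Sharif described a moment ago: each "preferred this image over that one" click can be fed through the standard Elo update rule. A minimal sketch, with illustrative image names and K-factor rather than anything from Lexica's actual pipeline:

```python
def update_elo(winner_rating, loser_rating, k=32):
    """Standard Elo update for a single pairwise preference."""
    expected_win = 1 / (1 + 10 ** ((loser_rating - winner_rating) / 400))
    winner_rating += k * (1 - expected_win)
    loser_rating -= k * (1 - expected_win)
    return winner_rating, loser_rating

# Every image starts at 1000; each recorded preference is one "match".
ratings = {"img_a": 1000.0, "img_b": 1000.0, "img_c": 1000.0}
preferences = [("img_a", "img_b"), ("img_a", "img_c"), ("img_b", "img_c")]

for preferred, other in preferences:
    ratings[preferred], ratings[other] = update_elo(ratings[preferred], ratings[other])

ranked = sorted(ratings, key=ratings.get, reverse=True)
print(ranked)  # ['img_a', 'img_b', 'img_c']
```

The resulting scores can then be used as labels, or to filter a dataset, when training the aesthetic classifier he mentions.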
And then Andrej Karpathy's online courses are also excellent, especially for language modeling. You might be a bit slower for the first few months, but ultimately I think if you have the programming skills, you'll catch up pretty quickly. It's not this magical dark science that only three people in the world know how to do well. It probably was 10 years ago, but now it's becoming much more open. You have open source collectives like EleutherAI and LAION, which share the details of their large scale training runs, so you can learn from a lot of those people. Swyx: Yeah. I think what is different for programmers is having to estimate significant costs upfront before they hit run, because it's not a thing you normally consider when you're coding. Burning through your credits is a fear that people have. Sharif: Yeah, that does make sense. In that case, fine tuning larger models gets you really, really far. Even using things like low rank adaptation to fine tune, where you can fine tune much more efficiently on a single GPU. I think people are underestimating how far you can really get just using open source models. I mean, before Lexica, I was working on Debuild and we were using the GPT-3 API, but I was also really impressed at how far you could get with open source models: by using the API, collecting enough samples from real world user feedback or real world user data from your product, and then fine tuning smaller open source models on those examples. And now you have a model that's pretty much state of the art for your specific domain, while the runtime cost is 10 times or even 100 times cheaper than using an API. Swyx: And was that like GPT-J, or are you talking BERT? Sharif: I remember we tried GPT-J, but I think FLAN-T5 was the best model we were able to use for that use case. FLAN-T5 is awesome. If your prompt is small enough, it's pretty great.
And I'm sure there are much better open source models now. Like Vicuna, which is LLaMA fine tuned on shared ChatGPT conversations. Yeah, they're just going to get better, and they're going to get better much, much faster. Swyx: Yeah. We were just talking in a previous episode to the creator of Dolly, Mike Conover, which is actually commercially usable, unlike Vicuna, which is a research project. Sharif: Oh, wow. Yeah, that's pretty cool.

[24:00] Why No Agents?

Alessio: I know you mentioned being early. Obviously, agents are one of the hot things here. In 2021, you had this "please buy me AirPods" demo that you tweeted, built on the GPT-3 API. Obviously, one of the things about being early in this space is you can only do one thing at a time, right? And you had one tweet recently where you said you hoped that demo would open Pandora's box for a bunch of weird GPT agents, but all we got were docs powered by GPT. Can you maybe talk a little bit about things that you wish you would see? In the last few weeks, we've had HuggingGPT, BabyAGI, AutoGPT, all these different agent projects that maybe now are getting closer to, what did you say, 50% of internet traffic being GPT agents. What are you most excited about with these projects, and what's coming? Sharif: Yeah, so we wanted a way for users to be able to paste in a link to the documentation page for a specific API, and then describe how to call that API. To do that for Debuild, we wondered if we could get an agent to browse the docs page, read through it, summarize it, and then maybe even do things like create an API key and register it for that user. For that, we needed a way for the agent to read the web page and interact with it.
So I spent about a day working on that demo, where we took the web page, serialized it into a more compact form that fit within the 2,048 token limit of GPT-3 at the time, and then had the model decide what action to take. If the page was too long, it would break it down into chunks, and then a sub-prompt would decide which chunk had the best action. At the top node, you would take that action and run it in a loop. It was really, really expensive. I think that one 60-second demo cost like a hundred bucks or something. It was wildly impractical. But you could clearly see that agents were going to be a thing, especially ones that could read and write and take actions on the internet. It was just prohibitively expensive at the time, and the context limit was way too small. But yeah, it seems like a lot of people are taking it more seriously now, mostly because GPT-4 is way more capable. The context limit is four times larger at 8,000 tokens, soon 32,000. And I think the only problem left to solve is finding a really good representation of a webpage that allows it to be consumed by a text-only model. Some examples: you could just take all the text and pass it in, but that's probably too long. You could take only the interactive elements, like buttons and inputs, but then you miss a lot of the relevant context. One interesting approach, which I really like, is to run the webpage in a terminal-based browser. There are browsers that run in your terminal, which serialize everything into text, and you can just take the frame from that terminal-based browser and pass it directly to the model. It's a really good representation of the webpage, because for graphical elements they render ASCII blocks, but for text, they render actual text.
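The chunk-and-recurse trick Sharif describes for pages that blow past the context limit can be sketched in a few lines. This counts whitespace-separated words as a rough stand-in for tokens; a real implementation would use the model's actual tokenizer:

```python
def chunk_page(text, max_tokens=2048, overlap=50):
    """Split serialized page text into chunks that fit a model's context window."""
    words = text.split()  # crude token proxy: one word ~ one token
    chunks = []
    start = 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        # Small overlap so an action spanning a chunk boundary isn't lost.
        start = end - overlap
    return chunks

# A 5000-"token" page splits into three overlapping chunks; each chunk would go
# to a sub-prompt, and a top-level prompt would pick the best proposed action.
page = "word " * 5000
print(len(chunk_page(page)))  # 3
```

Each chunk maps to the sub-prompt step he mentions, with the "top node" prompt choosing among the per-chunk candidate actions.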
So you could just remove all the weird graphical elements and keep all the text, and that works surprisingly well. And then there are other problems to solve, like how do you get the model to take an action? For example, if you have a booking page with a calendar and there are 30 days on the calendar, how do you get it to specify which button to press? It could say 30, and you could string match to find the 30. But what if it's a list of friends on Facebook and you're trying to delete a friend? There might be 30 delete buttons. How do you specify which one to click on? The model might say, click on the one for Mark, but then you'd have to figure out which delete button relates to Mark. There are some ways to solve this. One is a cool Chrome extension called Vimium, which lets you use Vim-style keybindings in your Chrome browser. You press F, and over every interactive element it shows a character or two; if you type those characters, it presses that button or focuses that input. So you could combine a lot of these ideas to get a really good text representation of the web browser, and also give the model a really good way to control the browser. I think those two are the core parts of the problem. The reasoning ability is definitely there. If a model can score in the top 10% on the bar exam, it can definitely browse a web page. It's really just: how do you represent the web page to the model, and how do you get the model to perform actions back on the web page? Really, it's just an engineering problem. Swyx: I have one doubt, which I'd love your thoughts on. How do you get the model to pause when it doesn't have enough information, and ask you for additional information because you under-specified your original request? Sharif: This is interesting.
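As an aside before Sharif's answer: the Vimium-style hint labeling he mentions, giving every interactive element a short typable label so a model can name one of thirty identical delete buttons unambiguously, could be sketched like this. The plain a-z ordering is just for illustration; Vimium itself uses home-row characters:

```python
import itertools
import string

def assign_hint_labels(elements):
    """Give every interactive element a short Vimium-style key hint."""
    # Single letters first, then two-letter combinations: a..z, aa, ab, ...
    singles = list(string.ascii_lowercase)
    doubles = ("".join(pair)
               for pair in itertools.product(string.ascii_lowercase, repeat=2))
    labels = itertools.chain(singles, doubles)
    return {label: element for label, element in zip(labels, elements)}

# Thirty identical "Delete" buttons, now disambiguated by hint label.
buttons = [f"delete_button_for_friend_{i}" for i in range(30)]
hints = assign_hint_labels(buttons)
print(hints["a"])  # the first delete button
```

The model then only has to emit a label like "ad" instead of describing which delete button it means.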
I think the only way to do this is to have a corpus where your training data is sessions of agents browsing the web. You have to figure out where the agents went wrong and replace that step with, "hey, I need some help." If you then fine tune a larger model on that dataset, you would pretty much get it to say "hey, I need help" in the instances where it doesn't know what to do next. Or, if you're using a closed source model like GPT-4, you could probably tell it: if you're uncertain about what to do next, ask the user for help. And it probably would be pretty good at that. I've had to write a lot of integration tests in my engineering days, against the DOM. Alessio: They might be over. Yeah, I hope so. I don't want to deal with that anymore; I don't want to write them the old way. But I'm just thinking, we had robots.txt for crawlers. I can definitely see the DOM being reshaped a little bit in terms of accessibility. Sometimes you have to write XPaths that are so long just to get to a button. There should be a better way to do it, and maybe this will drive the change, making it easier for these models to interact with your website. Sharif: There is the Chrome accessibility tree, which is used by screen readers, but a lot of times it's missing a lot of useful information. In a perfect world, everything would be perfectly annotated for screen readers and we could just use that. That's not the case.

[29:30] GPT4 and Multimodality

Swyx: GPT-4 multimodal: has your buddy Greg hooked you up, and do you think that would solve, essentially, browser agents or desktop agents? Sharif: Greg has not come through yet, unfortunately. But it would make things a lot easier, especially for graphically heavy web pages.
So for example, if you were using Yelp with the map view, it would make a lot of sense to use something like that versus a text-based input, because how do you serialize a map into text? It's kind of hard to do. So for more complex web pages, that would make it a lot easier; you get a lot more context to the model. It seems like that multimodal input is very dense, in the sense that it can read text, and it can read it really, really well. So you could probably give it a PDF and it would be able to extract all the text and summarize it. And if it can do that, it could probably do anything on any webpage. Swyx: Yeah. And given that you have some experience integrating CLIP with language models, how would you describe how different GPT-4 is compared to that stuff? Sharif: Yeah. CLIP is entirely different, in the sense that it's really just good at putting images and text into the same latent space. And really the only thing that's useful for is similarity and clustering. Swyx: Like literally Same Energy, right? Sharif: Yeah. Swyx: And then there's BLIP and BLIP-2. I don't know if you like those. Sharif: Yeah, BLIP-2 is a lot better. There's actually a new project called, I think, MiniGPT-4. Swyx: Yes. It was just out today. Sharif: Oh, nice. Yeah, it's really cool. It's actually really good. I think that one is based on the LLaMA model, but yeah, that's another one. Host: It's BLIP plus LLaMA, right? They run the image through BLIP and then have LLaMA interpret your questions so you can do visual QA. Sharif: Oh, that's cool. That's really clever. Yeah, ensemble models are really useful. Host: Well, I was trying to articulate, because there are two things people are talking about today. The moment you wake up, you open Hacker News and go, all right, what's the new thing today? One is RedPajama, and the other one is MiniGPT-4.
So I was trying to articulate: why is this not GPT-4? What is missing? And my only conclusion was that it just doesn't do OCR yet. But I wonder if there's anything core to this concept of multimodality where you have to train these things together. What does one model doing all these things do that is different from an ensemble of models that you just duct tape together? Sharif: It's a good question. This is pretty related to interpretability: how do we understand why models trained on multiple modalities within the same model perform better than two models trained separately? I can kind of see why that is the case. It's kind of hard to articulate, but when you have two different models, you get the reasoning abilities of a language model, but also the text or vision understanding of something like CLIP. Whereas CLIP clearly lacks the reasoning abilities, but if you could somehow put them both in the same model, you get the best of both worlds. There were even cases where I think the vision version of GPT-4 scored higher on some tests than the text-only version, so there might even be some additional learning from images as well. Swyx: Oh yeah. Well, the easy answer for that was there was some chart in the test that wasn't translated into text. When I read that, I was like, oh yeah, okay, that makes sense. Sharif: That makes sense. I thought it'd just be like, it sees more of the world, therefore it has more tokens. Swyx: So my equivalent of this is, I think it's a well-known fact that adding code to a language model's training corpus increases its ability to do language, not just code. The diversity of datasets that represent some kind of internal logic helps, and code is obviously very internally, logically consistent, so it helps the language model learn some internal structure.
Which I think is why my ultimate test for GPT-4 is to show it the Magritte "this is not a pipe" image, ask it whether it's a pipe or not, and see what it does. Sharif: Interesting. That is pretty cool. Or just give it a screenshot of your VS Code editor and ask it to fix the bug. That'd be pretty wild if it could do that. Swyx: That would be adult AGI. That would be the grownup form of AGI.

[33:30] Sharif's Startup Manual

Swyx: On your website, you have this startup manual where you give a bunch of advice. This is fun. One of them was that you should be shipping to production every other day. This seems like a great time to do it, because things change every other day. But maybe tell some of our listeners a little bit more about how you got to some of these heuristics. You obviously built different projects and iterated on a lot of things. Do you want to reference this? Sharif: Sure, yeah, I'll take a look at it. Swyx: We'll put this in the show notes, but I just wanted you to have the opportunity to riff on this list, because I think it's a very good list. Which one of them helped you for Lexica, if there's anything interesting? Sharif: So this list is pretty funny. It's mostly just me yelling at myself based on all the mistakes I've made in the past, and me trying to not make them again. So the first one, and I think the most important one, is: when you're building a product, try to build the smallest possible version. And for Lexica, it was literally one screen in a React app with a Postgres database, and it just showed you images. I don't even know if the first version had search. I think it did, but I'm not sure.
I think it was really just a grid of images that were randomized. But yeah, build the absolute smallest thing that can be considered a useful application and ship it. For Lexica, that was "it helps me write better prompts." That's pretty useful. It's not that useful, but it's good enough. Don't fall into the trap of intellectual indulgence with over-engineering. I think that's a pretty important one for myself, and for anyone working on new things. Oftentimes you fall into the trap of thinking you need to add more and more things, when in reality, the moment it's useful, you should get it into the hands of your users, and they'll kind of set the roadmap for you. I know this has been said millions of times before, but I think it's really, really important. And I think if I'd spent two months working on Lexica, adding a bunch of features, it wouldn't have been anywhere near as popular as it was releasing the really boiled-down version alongside the Stable Diffusion release. And then there are a few more, like: product development doesn't start until you launch. Think of your initial product as a means to get your users to talk to you. It's related to the first point, where you really just want people using something as quickly as you can make that happen. And then a few more that are pretty interesting: create a product people love before you focus on growth. If your users are spontaneously telling other people to use your product, then you've built something people love. Swyx: It sounds like you've internalized Paul Graham's stuff a lot, because I think he said stuff like that. Sharif: A lot of these are just me taking notes from books I found really interesting, or PG essays that were really relevant at the time, and then trying to not forget them. I should probably read this list again. There's some pretty personalized advice for me here. Oh yeah.
One of my favorite ones is: don't worry if what you're building doesn't sound like a business. Nobody thought Facebook would be a $500 billion company. It's easy to come up with a business model once you've made something people want. You can even make pretty web forms and turn that into a 200-person company. And then if you click the link, it goes to the LinkedIn page for Typeform, which is now, I think, an 800-person company or something like that. So they've grown quite a bit. There you go. Pretty web forms are a pretty good business, even though it doesn't sound like it. It's worth a billion dollars.

[38:30] Lexica Aperture V1/2/3

Swyx: One way I would like to tie that to the history of Lexica, which we didn't go over: walk us through Aperture V1, V2, V3, which you just released last week, and how maybe some of those principles helped you in that journey. Sharif: Yeah. So V1 was us trying to create a very photorealistic version of our model of Stable Diffusion. V1 actually didn't turn out to be that popular. Swyx: It turns out people loved it, but not for generating. Your marketing tweets were popular. Sharif: They were quite popular. So I think at the time you couldn't get Stable Diffusion to generate photorealistic images that were consistent with your prompt that well. It was more like you were sampling from this distribution of images, and you could slightly pick where you sampled from using your prompt. This was mostly because the CLIP text encoder is not the best text encoder. If you use a real language model, like T5, you get much better results: the T5-XXL model is like a hundred times larger than the CLIP text encoder used by Stable Diffusion 1.5. So you could kind of steer it in the general direction, but for more complex prompts, it just didn't work. So a lot of our users actually complained that they preferred the Stable Diffusion 1.5 model over the Aperture model.
And it was just because a lot of people were using it to create art and really weird abstract-looking pictures, which didn't work well with a photorealistic model trained solely on photos. For V2, we took that into consideration and trained it more on a lot of the art images on Lexica. We took a lot of the images on Lexica that were art, used those to train aesthetic models that ranked art really well, and then used those to filter larger datasets to train V2. And V3 is kind of an improved version of that, with much more data. I'm really glad we didn't spend too much time on V1. I think we spent about one month working on it, which is a lot of time, but a lot of the things we learned were useful for training future versions. Swyx: How do you version them? Where do you decide, okay, this is V2, this is V3? Sharif: The versions are kind of weird, in that you can't really use semantic versioning. Versions are used for different base models, I'd say. Each of the versions was a different base model, but we've done fine tunes of the same version and just released an update without incrementing the version. I think when there's a clear change, where running the same prompt on a model gives you a different image, that should probably be a different version.

[40:00] Request for AI Startup - LLM Tools

Alessio: So the startup manual was more about the things you can actually do today to make it better. And then you have a whole future page that has tips from what the series successor is going to be like, to why everyone's genome should be sequenced, to why we need to develop stimulants with shorter half-lives so that we can sleep better. There's a lot of cool stuff in there. Maybe talk a bit about that. When you're a founder, you need to be focused, right?
So sometimes there's a lot of things you cannot build, and I feel like this page is a bit of a collection of those. Are there any of these things where you're like, if I were not building Lexica today, this would be a very interesting thing to work on? Sharif: Oh man. Yeah. There's a ton of things that I want to build. Off the top of my head, the most exciting one would be better tools for language models. And I mean not tools that help us use language models, but rather tools for the language models themselves. So things like giving them access to browsers, giving them access to payments and credit cards, giving them access to real-world robots. It'd be cool if you could have a Boston Dynamics Spot powered by a language model reasoning module and have it do things for you, like go and pick up your order, stuff like that, entirely autonomously, given high-level commands. That'd be the number one thing if I wasn't working on Lexica. [40:00] Sequencing your Genome And then there's some other interesting things, like genomics, which I find really cool. There's some pretty cool things you can do with consumer genomics. You can export your genome from 23andMe as a text file, like literally a text file of your entire genome. And there is another tool called Promethease, I think, where you upload your 23andMe text file genome and then they map specific SNPs that you have in your genome to studies that have been done on those SNPs. And it tells you really, really useful things about yourself. Like, for example, I have the SNP for this thing called delayed sleep phase disorder, which makes me go to sleep about three hours later than the general population. So I used to always be a night owl and I never knew why. But after using Promethease it pretty much tells you, oh, you have the specific SNP for DSPS. 
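As an aside for readers who want to try this themselves: the raw export described above is a plain tab-separated text file (comment lines starting with `#`, then one row per SNP: rsid, chromosome, position, genotype). A minimal sketch of scanning such a file for SNPs of interest follows; the rsids here are hypothetical placeholders, not verified associations, and the format details are an assumption based on the common raw-data layout:

```python
# Sketch: scan a 23andMe-style raw export (tab-separated columns:
# rsid, chromosome, position, genotype) for a set of SNPs of interest.
# The rsids below are hypothetical placeholders, not real associations.

def load_genotypes(lines, rsids_of_interest):
    """Return {rsid: genotype} for the requested rsids."""
    found = {}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip header comments and blank lines
        rsid, chromosome, position, genotype = line.rstrip("\n").split("\t")
        if rsid in rsids_of_interest:
            found[rsid] = genotype
    return found

# Example with a tiny in-memory "file":
raw = [
    "# This data file generated by 23andMe (format sketch)",
    "rs0000001\t1\t12345\tAA",
    "rs0000002\t2\t67890\tCT",
]
print(load_genotypes(raw, {"rs0000002"}))  # {'rs0000002': 'CT'}
```

A tool like the one described would then join these genotypes against a database of published SNP studies; that lookup table is the part you can't reproduce in a few lines.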
It's like a really tiny percentage of the population, and it's something you should probably know about. And there's a bunch of other things. It tells you your likelihood of getting certain diseases, certain cancers oftentimes, even weird personality traits. There's one for, like, I have one of the SNPs for increased risk-taking and optimism, which is pretty weird. That's an actual thing. I don't know how. This is the founder gene. You should sequence everybody. It's pretty cool. And it's like $10 for Promethease and like 70 bucks for 23andMe, and it explains to you how your body works and the things about you that are different from the general population. Wow. Highly recommend everyone do it. If you're concerned about privacy, just purchase a 23andMe kit with a fake name. You don't have to use your real name. I didn't use my real name. Swyx: It's just my genes. Worst you can do is clone me. It ties in with what you were talking about with, you know, we want the future to be like this, and people are building uninspired B2B SaaS apps, and you and I had an exchange about this. [42:00] Believe in Doing Great Things How can we get more people to believe they can do great things? Sharif: That's a good question. A lot of the things I've been working on with GPT-3 have been trying to solve this by getting people to think about more interesting ideas. I don't really know. I think the low-effort version of this is just putting out really compelling demos and getting people inspired. And then the higher-effort version is actually building the products yourself and getting people to realize this is even possible in the first place. 
Like, I think the BabyAGI project and the GPT agent projects on GitHub, in practice today, are not super useful, but I think they're doing an excellent job of getting people incredibly inspired by what can be possible with language models as agents. And also the Stanford paper where they had the mini version of The Sims. Yeah. That one was incredible. That was awesome. Swyx: It was adorable. Did you see the part where they invented day drinking? Sharif: Oh, they did? Swyx: Yeah. You're not supposed to go to these bars in the afternoon, but they were like, we're going to go anyway. Nice. Sharif: That's awesome. Yeah. I think we need more stuff like that. That one paper is probably going to inspire a whole bunch of teams to work on stuff similar to that. Swyx: And that's great. I can't wait for NPCs to actually be something that you talk to in a game and, you know, have their own lives, and you can check in, and they would have their own personalities as well. Sharif: Yeah. This is kind of off topic, but I was playing The Last of Us Part II, and the NPCs in that game are really, really good. If you point a gun at them, they'll beg for their life, like, please, I have a family. And when you kill people in the game, they're like, oh my God, you shot Alice. They're just NPCs, but they refer to each other by their names and they plead for their lives. And this is just using regular conditional rules on NPC behavior. Imagine how much better it'd be if it was a small GPT-4 agent running in every NPC and they had the agency to make decisions and plead for their lives. I don't know, you'd feel way more guilty playing that game. Alessio: I'm scared it's going to be too good. I played a lot of hours of Fallout, so I feel like if the NPCs were a lot better, you would spend a lot more time playing the game. Yeah. [44:30] Lightning Round Let's jump into the lightning round. First question: your favorite AI product. 
Sharif: Favorite AI product. The one I use the most is probably ChatGPT. The one I'm most excited about is actually a company in AI Grant. They're working on a version of VS Code that's entirely AI-powered. Cursor, yeah. Cursor, where you give it a prompt and iterate on your code, not by writing code, but rather by just describing the changes you want to make. And it's tightly integrated into the editor itself, so it's not just another plugin. Swyx: Would you, as a founder of a low-code prompting-to-code company that pivoted, advise them to explore some things or stay away from some things? Like, what's your learning there that you would give to them? Sharif: I would focus on one specific type of code. So if I were building a low-code tool, I would try not to focus too much on appealing to developers. Whereas if I were building an alternative to VS Code, I would focus solely on developers. So in that sense, I think they're doing a pretty good job focusing on developers. Swyx: Are you using Cursor right now? Sharif: I've used it a bit. I haven't converted fully, but I really want to. Okay. It's getting better really, really fast. Yeah. Um, I can see myself switching over sometime this year if they continue improving it. Swyx: Hot tip for ChatGPT: people always say they love ChatGPT. The biggest upgrade to my life right now is that I forked a menu bar app I found on GitHub, and now I just have it running in the menu bar. I do Command-Shift-G and it pops up as a single-use thing, and there's no latency because it's always live. I just type in the thing I want and then it goes away after I'm done. Sharif: Wow. That's cool. Big upgrade. I'm going to install that. That's cool. Alessio: Second question. What is something you thought would take much longer, but it's already here? Like, what's your acceleration update? Sharif: Ooh, um, something I thought would take much longer, but it's already here. 
This is your question. Yeah, I know. I wasn't prepared. Um, so I would say text-to-video. Swyx: Yeah. What's going on with that? Sharif: I think by the end of this year, we'll see a jump like the one from the original DALL-E to something like Midjourney. We're going to see that leap in text-to-video within the span of this year. Um, it's not already here yet. So I guess the thing that surprised me the most was probably the multimodality of GPT-4, the fact that it can technically see things, which is pretty insane. Swyx: Yeah. Is text-to-video something that Aperture would be interested in? Sharif: Uh, it's something we're thinking about, but it's still pretty early. Swyx: There was one project with hand animation, with human poses; it was also coming out of Facebook. I thought that was a very nice way to accomplish text-to-video while having a high degree of control. I forget the name of that project. I think it was like drawing anything. Swyx: Yeah. It sounds familiar. Well, you already answered what people will be most surprised by a year from now. Maybe the usual request for startups: what's one thing you would pay for if someone built it? Sharif: One thing I would pay for if someone built it. Um, so many things, honestly. I really want people to build more tools for language models, like useful tools. Give them access to Chrome. And I want to be able to give it a task and have it go off and spin up a hundred agents that perform that task. And sure, 80 of them might fail, but 20 of them might succeed. That's all you really need. And they're agents; you can spin up thousands of them. It doesn't really matter. The law of large numbers is on your side. So I would pay a lot of money for that. 
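The "large numbers" intuition here can be made concrete: if each agent succeeds independently with probability p, the chance that at least one of n agents succeeds is 1 - (1-p)^n. A quick sketch, using the speaker's back-of-envelope 20% success rate as the assumed p:

```python
# Sketch of the agent fan-out math: n independent agents, each
# succeeding with probability p (p = 0.2 mirrors the "80 fail,
# 20 succeed" figure above).

def p_at_least_one(p, n):
    """Probability that at least one of n independent agents succeeds."""
    return 1 - (1 - p) ** n

def expected_successes(p, n):
    """Expected number of successful agents."""
    return p * n

print(p_at_least_one(0.2, 100))   # ~1.0: near-certain with 100 agents
print(expected_successes(0.2, 100))  # 20.0 successes expected on average
```

Even with a much worse agent (say p = 0.05), a hundred tries still succeed at least once about 99.4% of the time, which is why fan-out is attractive when retries are cheap.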
Even if it was capable of only doing really basic tasks, like signing up for a SaaS tool and booking a call or something. If it could do even more, where it could handle the email thread and get the person on the other end to do something, where I don't even have to book the demo, they just give me access to it, that'd be great. Yeah. More really weird language model tools would be really fun. Swyx: Are ChatGPT plugins a step in the right direction, or are you envisioning something else? Sharif: I think ChatGPT plugins are great, but they seem to only have read-only access right now. I want these theoretical agents to have write access to the world too. So they should be able to perform actions in web browsers, have their own email inbox, and have their own credit card with their own balance. They could send emails to people who might be useful in achieving their goal, ask them for help, be able to sign up and register for accounts on tools and services, be able to use graphical user interfaces really, really well, and also phone home if they need help. Swyx: You've just described virtual employees. You want to give them a Brex card, right? Sharif: I wouldn't be surprised if, a year from now, there was a BrexGPT, or like Brex cards for your GPT agents. Swyx: I mean, okay. I'm excited by this. Yeah. Kind of want to build it. Sharif: You should. Yeah. Alessio: Well, just to wrap up, we always have one big takeaway for people, you know, to display on a signboard for everyone to see. What is the big message to everybody? Sharif: Yeah. I think the big message to everybody is: you might think that a lot of the time the ideas you have have already been done by someone. And that may be the case, but a lot of the time the ideas you have are actually pretty unique and no one's ever tried them before. 
So if you have weird and interesting ideas, you should actually go out and just do them, make the thing, and then share it with the world. 'Cause I feel like we need more people building weird ideas and fewer people building, like, better GPT search for your documentation. Host: There are like 10 of those in the recent YC batch. Well, thank you so much. You've been hugely inspiring, and we're excited to see where Lexica goes next. Sharif: Appreciate it. Thanks for having me. Get full access to Latent Space at www.latent.space/subscribe
It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD - which entombed the cities of Pompeii and Herculaneum in southern Italy - hold history's greatest prize. For beneath those ashes lies the only salvageable library from the classical world. Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies - Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY. And most recently, he has created and funded the Vesuvius Challenge - a million-dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle. We also discuss the future of open source and AI, running GitHub and building Copilot, and why EMH is a lie. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate. If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.
You've probably never heard of Robin Miles, but you may well have heard her—possibly at some length. Miles is an actor who's cultivated a particular specialty in recording audiobooks, a booming segment of the publishing industry. She has lent her voice to more than 400 titles in all sorts of genres—from the classic “Charlotte's Web” to Isabel Wilkerson's “Caste,” a deep analysis of race in America. “Telling a story, fully, all of it—from all the aspects of it—and creating the kind of intimacy between you and your listener is so satisfying,” she tells the New Yorker editor Daniel Gross. “Being in a great play means you have to have the money and the other actors and a script and a director. This is just me and my book, and I love that.”
Who was Edmond J. Safra? "The greatest banker of his generation," in the estimation of a former World Bank President. The founder of four massive financial institutions on three continents, and a proud child of Beirut's Jewish quarter. An innovative avatar of financial globalization, and a faithful heir to a tradition of old-world banking. The leading champion and protector of the Sephardic diaspora. In A Banker's Journey: How Edmond J. Safra Built a Global Financial Empire (Radius Book Group, 2022), financial journalist and historian Daniel Gross, who, like Safra, traces his heritage to Aleppo, Syria, reconstructs the public life of an intensely private man. With exclusive access to Safra's personal archives, Gross tracks the banker's remarkable journey from Beirut to Milan, São Paulo, Geneva, and New York--to the pinnacle of global finance. Edmond Safra was fifteen in 1947, when his father sent him to establish a presence in Milan, Italy. Fluent in six languages, and with an eye for value, managing risk, and personal potential, Safra was in perpetual motion until his tragic death in 1999. The modern, global financial empire he built was based on timeless principles: a banker must protect his depositors and avoid excessive leverage and risk. In an age of busts and bailouts, Safra posted remarkable returns while rarely suffering a credit loss. From a young age, Safra assumed the mantle of leadership in the Syrian-Lebanese Jewish community, providing personal aid, supporting the communities that formed in exile, and championing Sephardic religious and educational efforts in Israel and around the world. Edmond J. Safra's life of achievement in the twentieth century offers enduring lessons for those seeking to make their way in the twenty-first century. He inspired generations to make the world a better place. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! 
https://newbooksnetwork.supportingcast.fm/new-books-network
B”H Edmond Safra was considered by many the greatest banker of the second half of the twentieth century. Despite his public achievements building a global financial empire, Safra remained a very private man and, to many, a mystery. Why? You'll soon hear from my guest today, Daniel Gross, who masterfully reconstructed Safra's life in a book that highlights Mr. Safra's timeless banking principles and his commitment to Jewish values, like charity. The post 291: The life, the genius and legacy of the greatest banker of the second half of the twentieth century, Mr. Edmond Safra, with Financial Journalist, Daniel Gross appeared first on Jewish Latin Princess.
Who was Edmond J. Safra? “The greatest banker of his generation,” in the estimation of a former World Bank President. The founder of four massive financial institutions on three continents, and a proud child of Beirut's Jewish quarter. An innovative avatar of financial globalization, and a faithful heir to a tradition of old-world banking. And also a leading champion and protector of the Sephardic diaspora. In A Banker's Journey, financial journalist and historian Daniel Gross, who, like Safra, traces his heritage to Aleppo, Syria, reconstructs the public life of an intensely private man. KAN's Mark Weiss spoke with Daniel Gross about his book. (Photo: Courtesy Radius Publishing) See omnystudio.com/listener for privacy information.
Exegesis Episode 32: a conversation with Daniel Gross on his book, "A Banker's Journey: How Edmond J. Safra Built a Global Financial Empire"
Cribbing from the provocative new book Talent: How to Identify Energizers, Creatives, and Winners Around the World, by Tyler Cowen and Daniel Gross, we put a dozen or so of their unorthodox interview questions to Bradley. Listen and decide for yourself whether he deserves that big, big job he's not actually looking for.
How can one identify and predict talent? On a search to answer this question and others like it, Tyler Cowen joined venture capitalist and entrepreneur Daniel Gross to explore the art and science of finding talent in their new book Talent: How to Identify Energizers, Creatives, and Winners Around the World. In a panel discussion hosted by Shruti Rajagopalan, Cowen and Gross discuss the applications of their new book, particularly how lifestyle characteristics can indicate an individual is capable of great creativity and talent. Daniel and Tyler also discuss undervalued talents and skills, what talents they look for in the start-up and investment world, why there is no good chocolate ice cream to be found in San Francisco, what their exercise preferences indicate about their personalities, how they approach identifying talent in different countries and industries, how immigration impacts entrepreneurialism, the shortcomings of Zoom interviews, what a messy desk reveals about a person, and more. Read a full transcript enhanced with helpful links, or watch the full video. Recorded June 29th, 2022 Other ways to connect Follow us on Twitter and Instagram Follow Tyler on Twitter Follow Daniel on Twitter Follow Shruti on Twitter Email us: cowenconvos@mercatus.gmu.edu Subscribe at our newsletter page to have the latest Conversations with Tyler news sent straight to your inbox. Photo credit: Drew Bird Photo
Subscribe to Charles' Alpha Investor newsletter today: https://pro.banyanhill.com/m/2054150 (https://pro.banyanhill.com/m/2054150) Who was Edmond J. Safra? Some know him as “the greatest banker of his generation.” He founded four massive financial institutions on three continents. That's why financial journalist and historian Daniel Gross set out to uncover the history behind this 15-year-old prodigy who built an empire based on these timeless principles: a banker must protect his depositors and avoid excessive leverage and risk. Safra posted remarkable returns in an age of busts and bailouts while rarely suffering a credit loss. This banker's journey offers enduring lessons for those seeking to make their way in the twenty-first century. He inspired generations to make the world a better place. Topics Discussed: An Introduction to Daniel Gross (00:00:00) Warren Buffett-level returns (00:10:20) How to succeed in a world of crowded trades (00:24:25) Banking in a time of no deposit insurance or bailouts (00:21:19) How the $10 billion deal at HSBC was done and protected the people (00:34:56) It's business AND it's personal (00:42:22) Guest Bio: Daniel Gross is one of the most widely-read writers on finance, economics, and business history. Over the past three decades, he has reported from more than thirty countries, covering everything from the dotcom boom to the global financial crisis and the Great Recession of 2008–2009. Gross worked as a reporter at The New Republic and Bloomberg News, wrote the “Economic View” column in The New York Times, and served as Slate's “Moneybox” columnist. At Newsweek, where he was a columnist and correspondent, he authored seven cover stories. He is a bestselling author of eight books, including Forbes Greatest Business Stories of All Time; Generations of Corning; Dumb Money: How America's Greatest Financial Minds Bankrupted the Nation; and Better, Stronger, Faster: The Myth of American Decline and the Rise of a New Economy. 
Gross was educated at Cornell University and holds an M.A. in American history from Harvard University. His great-grandparents immigrated to the United States from Aleppo and Damascus. Resources Mentioned: https://www.amazon.com/Bankers-Journey-Edmond-Global-Financial/dp/1635767857 (A Banker's Journey: How Edmond J. Safra Built a Global Financial Empire) https://www.amazon.com/Vendetta-American-Express-Smearing-Edmond/dp/0060167599/ (Vendetta: American Express and the Smearing of Edmond Safra) Transcript: https://charlesmizrahi.com/uncategorized/2022/08/30/building-global-financial-empire-daniel-gross/ (https://charlesmizrahi.com/podcast/)
If Tyler and Daniel's latest book can be boiled down into a single message, it would be that the world is currently failing at identifying talent, and that getting better at it would have enormous benefits for organizations, individuals, and the world at large. In this special episode of Conversations with Tyler, Daniel joined Tyler to discuss the ideas in their book on how to spot talent better, including the best questions to ask in interviews, predicting creativity and ambition, and the differences between competitiveness and obsessiveness. They also explore the question of why so many high achievers love Diet Coke, why you should ask candidates if they have any good conspiracy theories, how to spot effective dark horses early, the hiring strategy that set SpaceX apart, what to look for in a talent identifier, what you can learn from discussing drama, the underrated genius of game designers, why Tyler has begun to value parents more and IQ less, conscientiousness as a mixed blessing, the importance of value hierarchies, how to become more charismatic, the allure of endurance sports for highly successful people, what they disagree on most, and more. Visit our website Email: cowenconvos@mercatus.gmu.edu Follow us on Twitter Follow us on Instagram Follow Tyler on Twitter Follow Daniel on Twitter Like us on Facebook Subscribe to our Newsletter: https://go.mercatus.org/l/278272/2017-09-19/g4ms
My guests today are Tyler Cowen and Daniel Gross. Tyler is an economics professor and creator of one of the most popular economics blogs on the internet. Daniel is the founder of start-up accelerator Pioneer, having previously been a director at Apple and a partner at Y Combinator. Both Daniel and Tyler are prolific talent spotters and that is the focus of our discussion and their new book, which is called Talent. Please enjoy this conversation with Tyler Cowen and Daniel Gross. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Canalyst. Canalyst is the leading destination for public company data and analysis. If you're a professional equity investor and haven't talked to Canalyst recently, you should give them a shout. Learn more and try Canalyst for yourself at canalyst.com/Patrick. ----- This episode is brought to you by Brex. Brex is the integrated financial platform trusted by the world's most innovative entrepreneurs and fastest-growing companies. With Brex, you can move money fast for instant impact with high-limit corporate cards, payments, venture debt, and spend management software all in one place. Ready to accelerate your business? Learn more at brex.com/best. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes [00:02:38] - [First question] - Defining what talent is to them writ large [00:03:34] - The differences between means and ends in regards to talent [00:04:14] - What the Diet Coke idea is and why it's relevant [00:06:32] - Types of energy that are valuable and the subtle differences between them [00:07:40] - Thoughts on using a moneyball-like approach to acquiring and evaluating talent [00:11:49] - The talent market and thinking about pricing talent specifically [00:13:14] - What is seemingly overpriced in today's talent landscape [00:15:50] - Relationship between experience and/or age when it comes to talent [00:20:34] - Lessons about the utility of intelligence and where they've led them wrong [00:23:35] - What's beneath being an outsider and why it's important [00:24:46] - Why what people do in their downtime is worth considering [00:28:27] - Whether or not references should be held in higher regard than interviews [00:31:41] - Things to try and get out of a reference call as an objective [00:32:40] - Disabilities and what led them to write that chapter specifically [00:35:01] - Whether or not talented people are happier [00:38:40] - Lack of contentment and its dynamic influence over individuals [00:41:01] - Where they think the other is most talented [00:43:33] - Thinking about the physical side of mental performance [00:45:49] - What was frustrating about writing the book [00:48:25] - How they evaluate talent most differently now after having finished the book [00:50:41] - What makes for a good bat signal and how to cast one well [00:53:27] - Personality inventories and what they would and wouldn't recommend [00:54:15] - Geographical frictions and their role in high success rates [00:56:08] - Antonio Gracias; Existing supply constraints on talent development [01:00:01] - How they would redesign the current attractors of talent that we rely on today [01:01:18] - Assembly line development and how we 
can improve and scale talent filters [01:02:29] - The biggest open questions for talent today writ large [01:05:16] - The kindest thing anyone has ever done for Tyler