Internet search engine
Google and all the other search engines (Bing, Perplexity (AI), Yahoo, Baidu, Yandex, DuckDuckGo, Brave, Ecosia, AOL, etc.) have ranked ARCANOS.COM as "The Best Tarot Reading worldwide" since 2011: 13 years of leadership! Search for it and see for yourself. And why is ARCANOS.COM the Best Tarot Reading? 1) ARCANOS.COM IS NOT A CALL CENTER. I attend to you personally, STUART of ARCANOS.COM, no one else. 2) I ALWAYS tell you the truth, so you can make the best decisions. 3) My answers are always PRECISE. For example: if you are an entrepreneur, I can tell you what price to sell your products at or which workers to hire. If you are an investor, I tell you which securities to invest in. If you are an employee, how much of a raise to ask for, or whether you will find another job. And in matters of the heart, what your partner really feels, or whether you will find a new love, and so on. ASK WHATEVER YOU WANT. For all that and more, ARCANOS.COM is the Best Tarot Reading.
Mikah Sargent takes viewers on a comprehensive tour of the latest Control Center updates in iOS 18.4, demonstrating new features and reminding users of existing functionality they might have overlooked. Check out how Apple has enhanced this essential interface with ambient music options and expanded Apple Intelligence controls.

- Navigation and customization: Learn how to switch between different Control Center categories by swiping up and down, including Favorites, additional controls, media controls, Home controls, and connectivity options.
- Resizing controls: Controls with a "macaroni noodle" in the corner can be resized by tapping, holding, and dragging to make them larger or smaller based on user preference.
- Adding new controls: Mikah demonstrates how to add controls by tapping "Add a Control," which suggests options based on what might be useful for the user.
- Ambient music feature: A new addition in iOS 18.4 that allows users to quickly play background music from categories like Sleep, Chill, Productivity, and Well-being directly from Control Center.
- Apple Intelligence and Siri updates: New control options including "Talk to Siri" (to switch between voice and typing) and "Visual Intelligence" for iPhone 16, 15 Pro, and 16e models.
- App shortcuts integration: Third-party developers can now add shortcuts to Control Center, like "Analyze photo with Claude," Instagram capture, and Snapchat options.
- App-specific categories: Apps can create their own categories in Control Center, with examples from Carrot, ChatGPT, Controller, Documents, DuckDuckGo, and many others.
- Print Center tool: Mikah highlights the often-overlooked Print Center utility that allows users to check the status of AirPrint jobs.

Host: Mikah Sargent
Download or subscribe to Hands-On Mac at https://twit.tv/shows/hands-on-mac
Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit
Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Could the browser you use today be holding you back from a next-generation web experience? Discover the browsers poised to transform your digital life by 2025. We explore the emerging contenders reshaping the browsing landscape following Chrome's divisive decision to cut support for certain extensions.

Host: Paul Thurrott
Download or subscribe to Hands-On Windows at https://twit.tv/shows/hands-on-windows
Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit
Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Bunn explains how he developed the new DuckDuckGo widgets, and Rambo resurrects a technology from the NeXTSTEP era.
This week we're talking about Ubuntu's 25.04 Beta, SteamOS rumors, and the next big XZ release. Then EU OS has the guys scratching their heads, KDE starts planning the Plasma Login Manager, and Torvalds has another rant over hdrtest in the kernel. For tips we have pw-mididump for dumping PipeWire MIDI events, ddgr for command-line DuckDuckGo, and "cd ." for reloading the current directory. You can see the show notes at https://bit.ly/4hX44MD and we'll see you next week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
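The three command-line tips from the episode can be sketched as follows. This is an illustrative sketch assuming a GNU/Linux system with bash; pw-mididump (part of pipewire-utils) and ddgr (a third-party DuckDuckGo CLI client) must be installed separately, so they are shown as comments. The "cd ." trick is demonstrated live: it makes the shell re-resolve its working directory path, which matters when that directory has been deleted and recreated underneath the shell (for example by a build script or git checkout).

```shell
# Dump PipeWire MIDI events as they arrive (runs until interrupted):
#   pw-mididump
# Search DuckDuckGo from the terminal:
#   ddgr "plasma login manager"

# "cd ." demonstration: replace the working directory out from under
# the shell, then re-enter it by re-resolving the same path.
dir=$(mktemp -d)
cd "$dir"
rmdir "$dir"   # the shell now sits in a deleted directory
mkdir "$dir"   # ...which is recreated (new inode, same path)
cd .           # re-resolve the path and enter the new directory
echo "back in $(pwd)"
```

Without the "cd .", the shell would still hold the deleted inode open and commands like creating files in "." would fail, even though the path exists again on disk.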
TR is joined by Sam Meisenburg, co-founder of Study Snacks, to talk about the importance of, and methodologies for, teaching cybersecurity and AI concepts to high schoolers.

Show Notes
- Study Snacks (https://meetstudysnacks.paperform.co)
- Transparent bird feeder (https://birdschoice.com/products/the-window-cafe?variant=44573985800362&country=US&currency=USD&utm_medium=product_sync&utm_source=google&utm_content=sag_organic&utm_campaign=sag_organic&gad_source=1&gbraid=0AAAAAoYA9hxeiolvGCQzLpPFEsNfGNAnH&gclid=Cj0KCQjwtJ6_BhDWARIsAGanmKeZYebzGNAKXjBOgFLHPRoS4yskAIMh91pHMjvlLtjjQN7X4XFRzlUaApMOEALw_wcB)
- Brave Browser (https://brave.com/)
- Onion Router and the Tor Project (https://www.torproject.org)
- DuckDuckGo (https://duckduckgo.com), an alternative search engine, more private than Google
- Connect with Sam on LinkedIn (https://www.linkedin.com/in/sam-meisenberg/). You can also email him at hello@studysnacks.net (mailto:hello@studysnacks.net)

Learning Experiences for the Upcoming Week
Want to start building your own Modern Classroom? Sign up for our summer Virtual Mentorship Program! From either May 19th - June 22nd or June 23rd - July 27th, work with one of our expert educators to build materials for your own classroom. We have scholarships all over the country so you can enroll for free in places such as NYC, LA, Oakland, Chicago, Minnesota, Alabama, and more. Check out modernclassrooms.org/apply-now (http://modernclassrooms.org/apply-now) to see if there's an opportunity for you!

We have a book club! We are reading Rob Barnett's Meet Every Learner's Needs together as a community, and our last session is an Author Q&A with Rob Barnett on Wednesday, April 2 at 7:00pm ET. Join us in sharing ideas, questions, and resources. Register here (https://modernclassrooms.zoom.us/webinar/register/WN_RhaTf9F2Q2SWNCBCDQc2aw)

Eileen Ng, Nichole Freeman, and Carmen Welton, MCP implementers and DMCE, are presenting at the NEASC Educator Showcase 2025 on April 3 in Nashua, NH. If you're attending, make sure to check them out and say hi!

Contact us, follow us online, and learn more:
- Email us questions and feedback at: podcast@modernclassrooms.org (mailto:podcast@modernclassrooms.org)
- Listen to this podcast on YouTube (https://www.youtube.com/playlist?list=PL1SQEZ54ptj1ZQ3bV5tEcULSyPttnifZV)
- Modern Classrooms: @modernclassproj (https://twitter.com/modernclassproj) on Twitter and facebook.com/modernclassproj (https://www.facebook.com/modernclassproj)
- Kareem: @kareemfarah23 (https://twitter.com/kareemfarah23) on Twitter
- Toni Rose: @classroomflex (https://twitter.com/classroomflex) on Twitter and Instagram (https://www.instagram.com/classroomflex/?hl=en)
- The Modern Classroom Project (https://www.modernclassrooms.org)
- Modern Classrooms Online Course (https://learn.modernclassrooms.org)

Take our free online course, or sign up for our mentorship program to receive personalized guidance from a Modern Classrooms mentor as you implement your own modern classroom!

The Modern Classrooms Podcast is edited by Zach Diamond: @zpdiamond (https://twitter.com/zpdiamond) on Twitter and Learning to Teach (https://www.learningtoteach.co/)

Special Guest: Sam Meisenburg.
Small Bites Radio - Episode 187 with Sarah Ahn, creator of the viral Ahnest Kitchen and the cookbook Umma: A Korean Mom's Kitchen Wisdom and 100 Family Recipes.

On this segment of Small Bites Radio, we had the pleasure of speaking with Sarah Ahn about her upcoming cookbook Umma: A Korean Mom's Kitchen Wisdom and 100 Family Recipes, hitting shelves April 1st, 2025!
Google and DuckDuckGo are integrating AI into their search engines. Privacy or convenience? Discover how they are changing the way you search for information. #buscadores #Google #DuckDuckGo #IA #inteligenciaartificial #privacidad #GoogleSearch #tecnología #futurodigital #seguridadonline Become a follower of this podcast: https://www.spreaker.com/podcast/geeks-y-gadgets-con-luisgyg--909634/support.
The DDoS attack on X, the new technique cyber criminals are using to trick users with a fake CAPTCHA, the latest news from DuckDuckGo, and the murky ties between WhatsApp and Facebook in the depths of Meta's algorithms. Hosted by Marco Schiaffino.
Private search engine DuckDuckGo is leaning further into the generative AI opportunity. The non-tracking search engine has been dabbling with expanding the role of AI assistance in its product for the past year, including launching a chatbot-style interface last fall — available at Duck.ai. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this podcast, I discuss Manual do Usuário's new push notification system, the motivation for offering this feature, and the ever-tighter squeeze big tech puts on small sites like ours.

*** Tap the bell to receive notifications from Manual ✨ ***

Some links mentioned in the monologue:
- Google announces the expansion of AI Overviews and introduces AI Mode (in English).
- DuckDuckGo's new "private, useful, and optional" AI features (in English).
- In 2020, two thirds of Google searches ended without a click. Source
Today we talk about a backdoor found in Android devices and the importance of keeping your devices updated. Also: WhatsApp and an upcoming check-marks feature; leaked details of Apple's foldable iPhone and of Samsung's tri-fold model; and, as every day, we ask for your comments.

- Instagram is experimenting with a Discord-like "community chat" feature: https://www.threads.net/@alex193a/post/DG37hJxt84o
- WhatsApp and an upcoming check-marks feature: https://es.gizmodo.com/whatsapp-sorprende-con-las-tres-tildes-lo-que-nadie-te-conto-sobre-la-nueva-funcion-2000152844?
- DuckDuckGo bets on GenAI as its AI chat interface exits beta: https://techcrunch.com/2025/03/06/duckduckgo-leans-further-into-genai-as-its-ai-chat-interface-exits-beta/
- More details emerge about Samsung's first tri-fold phone: https://www.sammobile.com/news/more-details-samsungs-first-tri-fold-phone-emerge/
- Details of Apple's foldable iPhone leak: https://medium.com/@mingchikuo/apples-first-foldable-iphone-predictions-market-positioning-hardware-specs-development-c60ca52be337
- Secret backdoor discovered in more than a million Android devices: https://www.wired.com/story/1-million-third-party-android-devices-badbox-2/

WE AWAIT YOUR COMMENTS...
Silicon Valley's Anarchist Alternative: How Open Source Beats Monopolies and Fascism

CORE THESIS
- Corporate-controlled tech resembles fascism in power concentration
- Trillion-dollar monopolies create suboptimal outcomes for most people
- Open source (Linux) as practical counter-model to corporate tech hegemony
- Libertarian-socialist approach achieves both freedom and technical superiority

ECONOMIC CRITIQUE
- Extreme wealth inequality: CEO compensation 1,000-10,000× worker pay; wages stagnant while executive compensation grows exponentially; wealth concentration enables government capture
- Corporate monopoly patterns: planned obsolescence and artificial scarcity; printer ink market as price-gouging example; VC-backed platforms convert existing services to rent-seeking models; regulatory capture preventing market correction

LIBERTARIAN-SOCIALISM FRAMEWORK
- Distinct from authoritarian systems (communism): anti-bureaucratic, anti-centralization, pro-democratic control, bottom-up vs. top-down decision-making
- Key principles: federated/decentralized democratic control; worker control of workplaces and technical decisions; collective self-management vs. corporate/state domination; technical decisions made by practitioners, not executives

SPANISH ANARCHISM MODEL (1868-1939)
- Largest anarchist movement in modern history
- CNT (Confederación Nacional del Trabajo): anarcho-syndicalist union with 1M+ members; worker solidarity without authoritarian control; developed democratic workplace infrastructure; successful until suppressed by fascism

LINUX/FOSS AS IMPLEMENTED MODEL
- Technical embodiment of libertarian principles: decentralized authority vs. hierarchical control; voluntary contribution and association; federated project structure; collective infrastructure ownership; meritocratic decision-making
- Demonstrated superiority: powers 90%+ of global technical infrastructure; dominates top programming languages; Microsoft's documented anti-Linux campaign (Halloween documents); technical freedom enables innovation

SURVEILLANCE CAPITALISM MECHANISMS
- Authoritarian control patterns: mass data collection creating power asymmetries; behavioral prediction products sold to bidders; algorithmic manipulation of user behavior; shadow profiles and unconsented data extraction; digital enclosure of commons; similar patterns to Stasi East Germany surveillance

PRACTICAL COOPERATIVE MODELS
- Mondragón Corporation (Spain): world's largest worker cooperative; 80,000+ employees across 100+ cooperatives; democratic governance; salary ratios capped at 6:1 (vs. 350:1 in US corps); 60+ years of profitability
- Spanish grocery cooperatives: millions of consumer-members; 16,000+ worker-owners; lower consumer prices with better worker conditions
- Success factors: federated structure with local autonomy; inter-cooperation between entities; technical and democratic education; capital subordinated to labor, not vice versa

EXISTING LIBERTARIAN TECH ALTERNATIVES
- Federated social media: Mastodon, ActivityPub, BlueSky
- Community ownership models: municipal broadband, mesh networks, Wikipedia, platform cooperatives
- Privacy-respecting services: Signal (secure messaging), ProtonMail (encrypted email), Brave (privacy browser), DuckDuckGo (non-tracking search)

ACTION FRAMEWORK
- Increase adoption of libertarian tech alternatives
- Support open-source projects with resources and advocacy
- Develop business models supporting democratic tech
- Build human-centered, democratically controlled technology
- Recognize that Linux/FOSS is not "communism" but its opposite: a non-authoritarian system supporting freedom
Are you looking for the most conservative search engines in 2025? Compare TUSK with DuckDuckGo and Google and see which is right for you! Find out more at https://tuskbrowser.com/search/

TUSK
City: Santa Barbara
Address: 5383 Hollister Ave., Suite 120
Website: https://tuskbrowser.com/
In this episode of Create Like The Greats, we break down how Anthropic, a leading AI company behind Claude, is revolutionizing the use of customer success stories to drive measurable business results. Despite fierce competition from OpenAI and Google, Anthropic differentiates itself through clarity, simplicity, and strategic storytelling. Learn how they use short yet impactful case studies to elevate their brand, enhance credibility, and attract the right audience. Inspired by research from Ethan Crump at Foundation Labs, we take a deep dive into how Anthropic structures its case studies, why they work, and how you can apply these principles to your own business.

Key Takeaways & Insights

1. The Power of Customer Success Stories
- Anthropic has ramped up investment in customer case studies, adding 67 new pages to their website.
- These stories increase trust and credibility, addressing the fragmented B2B buyer's journey.
- Featuring well-known clients, such as DuckDuckGo, GitLab, Brave, BCG, and Assembly AI, boosts authority in their industry.

2. The Business Impact of Case Studies
- The dedicated case studies subfolder attracts over 60,000 organic monthly visitors, generating what would cost $25,000 in paid traffic.
- SEO performance: 1,600 ranking keywords; 4,600 backlinks from 231 domains.

3. Anthropic's Winning Formula for Case Studies
- Short and Direct: Each case study stays under 1,000 words, challenging traditional long-form SEO norms.
- Clear Structure: Follows a problem → solution → outcome format.
- Intent Matching for SEO: Content is designed to match search intent, ensuring customers find relevant solutions quickly.
- Focus on Business Value: Emphasizes results (e.g., 30% improvement in user satisfaction, 80% cost reduction).
- Uses client and company expert quotes to strengthen credibility.
- Avoids overly technical details, making it accessible to decision-makers at all levels, including CEOs and finance executives.

4. Applying These Lessons to Your Business
- Prioritize clarity and simplicity when crafting customer success stories.
- Optimize stories for search intent so they're discoverable and relevant.
- Showcase tangible metrics and real results in your case studies.
- Structure content for easy readability while keeping it strategic and insightful.

Resources: How Anthropic Drives 60K+ in Organic Traffic — With One Simple Strategy
Have our private lives become inevitably political in today's age of social media? Ray Brescia certainly thinks so. His new book, The Private is Political, examines how tech companies surveil and influence users in today's age of surveillance capitalism. Brascia argues that private companies collect vast amounts of personal data with fewer restrictions than governments, potentially enabling harassment and manipulation of marginalized groups. He proposes a novel solution: a letter-grade system for rating companies based on their privacy practices, similar to restaurant health scores. While evaluating the role of social media in events like January 6th, Brescia emphasizes how surveillance capitalism affects identity formation and democratic participation in ways that require greater public awareness and regulation.Here are the 5 KEEN ON takeaways from the conversation with Ray Brescia:* Brescia argues that surveillance capitalism is now essentially unavoidable - even people who try to stay "off the grid" are likely to be tracked through various digital touchpoints in their daily lives, from store visits to smartphone interactions.* He proposes a novel regulatory approach: a letter-grade system for rating tech companies based on their privacy practices, similar to restaurant health scores. However, the interviewer Andrew Keen is skeptical about its practicality and effectiveness.* Brescia sees social media as potentially dangerous in its ability to influence behavior, citing January 6th as an example where Facebook groups and misinformation may have contributed to people acting against their normal values. 
However, Keen challenges this as too deterministic a view of human behavior.* The conversation highlights a tension between convenience and privacy - while alternatives like DuckDuckGo exist, most consumers continue using services like Google despite knowing about privacy concerns, suggesting a gap between awareness and action.* Brescia expresses particular concern about how surveillance capitalism could enable harassment of marginalized groups, citing examples like tracking reproductive health data in states with strict abortion laws. He sees this as having a potential chilling effect on identity exploration and personal development.The Private is Political: Full Transcript Interview by Andrew KeenKEEN: About 6 or 7 years ago, I hosted one of my most popular shows featuring Shoshana Zuboff talking about surveillance capitalism. She wrote "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power"—a book I actually blurbed. Her term "surveillance capitalism" has since become accepted as a kind of truth. Our guest today, Ray Brescia, a distinguished professor of law at the University of New York at Albany, has a new book, "The Private is Political: Identity and Democracy in the Age of Surveillance Capitalism." Ray, you take the age of surveillance capitalism for granted. Is that fair? Is surveillance capitalism just a given in February 2025?RAY BRESCIA: I think that's right. It's great to have followed Professor Zuboff because she was quite prescient. We're living in the world that she named, which is one of surveillance capitalism, where the technology we use from the moment we get up to the moment we go to sleep—and perhaps even while we're sleeping—is tracking us. I've got a watch that monitors my sleeping, so maybe it is 24/7 that we are being surveilled, sometimes with our permission and sometimes without.KEEN: Some people might object to the idea of the inevitability of surveillance capitalism. 
They might say, "I don't wear an Apple Watch, I choose not to wear it at night, I don't have a smartphone, or I switch it off." There's nothing inevitable about the age of surveillance capitalism. How would you respond to that?

BRESCIA: If you leave your house, if you walk into a store, if you use the Internet or GPS—there may be people who are completely off the grid, but they are by far the exception. Even for them, there are still ways to be surveilled. Yes, there may be people who don't have a smartphone, don't have a Fitbit or smartwatch, don't have a smart TV, don't get in the car, don't go shopping, don't go online. But they really are the exception.

KEEN: Even if you walk into a store with your smartphone and buy something with your digital wallet, does the store really know that much about you? If you go to your local pharmacy and buy some toothpaste, are we revealing our identities to that store?

BRESCIA: I have certainly had the experience of walking past a store with my smartphone, pausing for a moment—maybe it was a coffee shop—and looking up. Within minutes, I received an ad pushed to me by that store. Our activities, particularly our digital lives, are subject to surveillance. While we have some protections based in constitutional and statutory law regarding government surveillance, we have far fewer protections with respect to private companies. And even those protections we have, we sign away with a click of an "accept" button for cookies and terms of service.

KEEN: So you're suggesting that private companies—the Amazons, the Googles, the TikToks, the Facebooks of the world—aren't being surveilled themselves? It's only us, the individual, the citizen?

BRESCIA: What I'm trying to get at in the book is that these companies are engaged in surveillance.
Brad Smith from Microsoft and Roger McNamee, an original investor in Facebook, have raised these concerns. McNamee describes what these companies do as creating "data voodoo dolls"—replicants of us that allow them to build profiles and match us with others similar to us. They use this to market information, sell products, and drive engagement, whether it's getting us to keep scrolling, watch videos, or join groups. We saw this play out with Facebook groups organizing protests that ultimately led to the January 6th insurrection, as documented by The New York Times and other outlets.

KEEN: You live up in Hastings on Hudson and work in Albany. Given the nature of this book, I can guess your politics. Had you been in Washington, D.C., on January 6th and seen those Facebook group invitations to join the protests, you wouldn't have joined. This data only confirms what we already think. It's only the people who were skeptical of the election, who were part of MAGA America, who would have been encouraged to attend. So why does it matter?

BRESCIA: I don't think that's necessarily the case. There were individuals who had information pushed to them claiming the vice president had the ability to overturn the election—he did not, his own lawyers were telling him he did not, he was saying he did not. But people were convinced he could. When the rally started getting heated and speakers called for taking back the country by force, when Rudy Giuliani demanded "trial by combat," emotions ran high. There are individuals now in jail who are saying, "I don't want a pardon. What I did that day wasn't me." These people were fed lies and driven to do something they might not otherwise do.

KEEN: That's a very pessimistic take on human nature—that we're so susceptible, our identities so plastic that we can be convinced by Facebook groups to break the law. Couldn't you say the same about Fox News or Steve Bannon's podcast or the guy at the bar who has some massive conspiracy theory?
At what point must we be responsible for what we do?

BRESCIA: We should always be responsible for what we do. Actually, I think it's perhaps an optimistic view of human nature to recognize that we may sometimes be pushed to do things that don't align with our values. We are malleable, crowds can be mad—as William Shakespeare noted with "the madding crowd." Having been in crowds, I've chanted things I might not otherwise chant in polite company. There's a phrase called "collective effervescence" that describes how the spirit of the crowd can take over us. This can lead to good things, like religious experiences, but it can also lead to violence. All of this is accelerated with social media. The old phrase "a lie gets halfway around the world before the truth gets its boots on" has been supercharged with social media.

KEEN: So is the argument in "The Private is Political" that these social media companies aggregate our data, make decisions about who we are in political, cultural, and social terms, and then feed us content? Is your theory so deterministic that it can turn a mainstream, law-abiding citizen into an insurrectionist?

BRESCIA: I wouldn't go that far. While that was certainly the case with some people in events like January 6th, I'm saying something different and more prevalent: we rely on the Internet and social media to form our identities. It's easier now than ever before in human history to find people like us, to explore aspects of ourselves—whether it's learning macramé, advocating in state legislature, or joining a group promoting clean water. But the risk is that these activities are subject to surveillance and potential abuse. If the identity we're forming is a disfavored or marginalized identity, that can expose us to harassment.
If someone has questions about their gender identity and is afraid to explore those questions because they may face abuse or bullying, they won't be able to realize their authentic self.

KEEN: What do you mean by harassment and abuse? This argument exists both on the left and right. J.D. Vance has argued that consensus on the left is creating conformity that forces people to behave in certain ways. You get the same arguments on the left. How does it actually work?

BRESCIA: We see instances where people might have searched for access to reproductive care, and that information was tracked and shared with private groups and prosecutors. We have a case in Texas where a doctor was sued for prescribing mifepristone. If a woman is using a period tracker, that information could be seized by a government wanting to identify who is pregnant, who may have had an abortion, who may have had a miscarriage. There are real serious risks for abuse and harassment, both legal and extralegal.

KEEN: We had Margaret Atwood on the show a few years ago. Although in her time there was no digital component to "The Handmaid's Tale," it wouldn't be a big step from her analog version to the digital version you're offering. Are you suggesting there need to be laws to protect users of social media from these companies and their ability to pass data on to governments?

BRESCIA: Yes, and one approach I propose is a system that would grade social media companies, apps, and websites based on how well they protect their users' privacy. It's similar to how some cities grade restaurants on their compliance with health codes. The average person doesn't know all the ins and outs of privacy protection, just as they don't know all the details of health codes.
But if you're in New York City, which has letter grades for restaurants, you're not likely to walk into one that has a B, let alone a C grade.

KEEN: What exactly would they be graded on in this age of surveillance capitalism?

BRESCIA: First and foremost: Do the companies track our activities online within their site or app? Do they sell our data to brokers? Do they retain that data? Do they use algorithms to push information to us? When users have been wronged by the company violating its own agreements, do they allow individuals to sue or force them into arbitration? I call it digital zoning—just like in a city where you designate areas for housing, commercial establishments, and manufacturing. Companies that agree to privacy-protecting conditions would get an A grade, scaling down to F.

KEEN: The world is not a law school where companies get graded. Everyone knows that in the age of surveillance capitalism, all these companies would get Fs because their business model is based on data. This sounds entirely unrealistic. Is this just a polemical exercise, or are you serious?

BRESCIA: I'm dead serious. And I don't think it's the heavy hand of the state. In fact, it's quite the opposite—it's a menu that companies can choose from. Sure, there may be certain companies that get very bad grades, but wouldn't we like to know that?

KEEN: Who would get the good grades? We know Facebook and Google would get bad grades. Are there social media platforms that would avoid the F grades?

BRESCIA: Apple is one that does less of this. Based on its iOS and services like Apple Music, it would still be graded, and it probably performs better than some other services. Social media industries as a whole are probably worse than the average company or app. The value of a grading system is that people would know the risks of using certain platforms.

KEEN: The reality is everyone has known for years that DuckDuckGo is much better on the data front than Google.
Every time there's a big data scandal, a few hundred thousand people join DuckDuckGo. But most people still use Google because it's a better search engine. People aren't bothered. They don't care.

BRESCIA: That may be the case. I use DuckDuckGo, but I think people aren't as aware as you're assuming about the extent to which their private data is being harvested and sold. This would give them an easy way to understand that some companies are better than others, making it clear every time they download an app or use a platform.

KEEN: Let's use the example of Facebook. In 2016, the Cambridge Analytica scandal blew up. Everyone knew what Facebook was doing. And yet Facebook in 2025 is, if anything, stronger than it's ever been. So people clearly just don't care.

BRESCIA: I don't know that they don't care. There are a lot of things to worry about in the world right now. Brad Smith called Cambridge Analytica "privacy's Three Mile Island."

KEEN: And he was wrong.

BRESCIA: Yes, you're right. Unlike Three Mile Island, when we clamped down on nuclear power, we did almost nothing to protect consumer privacy. That's something we should be exploring in a more robust fashion.

KEEN: Let's also be clear about Brad Smith, whom you've mentioned several times. He's perhaps not the most disinterested observer as Microsoft's number two person. Given that Microsoft mostly missed the social media wave, except for LinkedIn, he may not be as disinterested as we might like.

BRESCIA: That may be the case. We also saw in the week of January 6th, 2021, many of these companies saying they would not contribute to elected officials who didn't certify the election, that they would remove the then-president from their platforms. Now we're back in a world where that is not the case.

KEEN: Let me get one thing straight.
Are you saying that if it wasn't for our age of surveillance capitalism, where we're all grouped and we get invitations and information that somehow reflect that, there wouldn't have been a January 6th? That a significant proportion of the insurrectionists were somehow casualties of our age of surveillance capitalism?

BRESCIA: That's a great question. I can't say whether there would have been a January 6th if not for social media. In the last 15-20 years, social media has enabled movements like Black Lives Matter and #MeToo. Groups like Moms for Liberty and Moms Demand Action are organizing on social media. Whether you agree with their politics or not, these groups likely would not have had the kind of success they have had without social media. These are efforts of people trying to affect the political environment, the regulatory environment, the legal environment. I applaud such efforts, even if I don't agree with them. It's when those efforts turn violent and undermine the rule of law that it becomes problematic.

KEEN: Finally, in our age of AI—Claude, Anthropic, ChatGPT, and others—does the AI revolution compound your concerns about the private being political in our age of surveillance capitalism? Is it the problem or the solution?

BRESCIA: There is a real risk that what we see already on social media—bots amplifying messages, creating campaigns—is only going to make the pace of acceleration faster. The AI companies—OpenAI, Anthropic, Google, Meta—should absolutely be graded in the same way as social media companies. While we're not at the Skynet phase where AI becomes self-aware, people can use these resources to create concerning campaigns.

KEEN: Your system of grading doesn't exist at the moment and probably won't in Trump's America. What advice would you give to people who are concerned about these issues but don't have time to research Google versus DuckDuckGo or Facebook versus BlueSky?

BRESCIA: There are a few simple things folks can do.
Look at the privacy settings on your phone. Use browsers that don't harvest your data. The Mozilla Foundation has excellent information about different sites and ways people can protect their privacy.

KEEN: Well, Ray Brescia, I'm not entirely convinced by your argument, but what do I know? "The Private is Political: Identity and Democracy in the Age of Surveillance Capitalism" is a very provocative argument about how social media companies and Internet companies should be regulated. Thank you so much, and best of luck with the book.

BRESCIA: Thanks, it's been a pleasure to have this conversation.

Ray Brescia is the Associate Dean for Research & Intellectual Life and the Hon. Harold R. Tyler Professor in Law & Technology at Albany Law School. He is the author of Lawyer Nation: The Past, Present, and Future of the American Legal Profession and The Future of Change: How Technology Shapes Social Revolutions, and the editor of Crisis Lawyering: Effective Legal Advocacy in Emergency Situations and How Cities Will Save the World: Urban Innovation in the Face of Population Flows, Climate Change, and Economic Inequality.

Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.

Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
uGeek - Technology, Android, Linux, Servers and much more...
Brave, Duck Duck Go. Continue reading the full post "Brave, Duck Duck Go". Visit uGeek Podcast and subscribe to the uGeek Podcast.
uGeekPodcast - Technology, Android, Free Software, GNU/Linux, Servers, Home Automation and much more...
Yeah, it is a real thing and we are existing in it. From the looks of things most of us are struggling, and that is never a good thing. An infodemic is a smash-up between too much information and a pandemic. I didn't invent this term; it goes back to 2001-2003ish. There is a pandemic of bad, worse, and toxic levels of disinformation. People are getting sick, or at risk of it, from ingesting poor-quality information sources. I'm not being hyperbolic here. I stopped monitoring local and national news. All of it. And I cut back on social media too. And I actually turn off my phone at night. In my limited defense, I did not want constant notifications of dubious statements and horrifying facts. I've mentioned this on the podcast. It is OK to take a break from the firehose of news. But there is a cost. One morning, I woke up, turned on the phone and found out about not one but two wildfires. We need safe and viable ways to monitor the news without crushing our spirits to dust. I have an idea or two. These are just my ideas. We can build something better than what we have. Doomscrolling will not get us there. If you need support contact the National Suicide Prevention Lifeline at 988 or 1-800-273-8255, the Trevor Project at 1-866-488-7386 or text “START” to 741-741. Resources Mentioned: I strongly recommend a safe browser to visit websites. I'm thinking stuff like Duck Duck Go, Vivaldi or using an app that blocks tracking cookies. Possibly set up a limited-use account for your online and researching needs. APNews.com, this is a non-profit news organization. It reports the news but does not interpret the story. They don't make the news palatable. They tell you what the news story is and the known facts at the time. AP Fact Check looks at stories that might be questionably true or false. Reuters News is a business-to-business commercial news company. Similar philosophy to the AP: it gives the story but generally does not embellish.
Reuters also has a fact-check page that evaluates social media posts and visual images, verifying who created them and whether they are true. FactCheck.org is a project of The Annenberg Public Policy Center; it also includes SciCheck for science claims. The Poynter Institute has PolitiFact, which offers fact-checking in English and Spanish. Disclaimer: Links to other sites are provided for information purposes only and do not constitute endorsements. Always seek the advice of a qualified health provider with questions you may have regarding a medical or mental health disorder. This blog and podcast are intended for informational and educational purposes only. Nothing in this program is intended to be a substitute for professional psychological, psychiatric or medical advice, diagnosis, or treatment.
Google could soon face an earthquake. The web giant, accused of abusing its dominant position in the online search market, could be forced to part with some of its key assets: Chrome, its browser, and Android, the most widely used mobile operating system in the world.

Facing this threat, Google is trying to defuse the situation. Rather than giving up its technological crown jewels, the firm is proposing three measures to appease the American authorities. First, it would commit to limiting the payments made to Apple and Mozilla for priority placement in their Safari and Firefox browsers. Second, it would revise its licensing agreements with smartphone manufacturers. Finally, it would renegotiate its contracts with certain mobile carriers to limit the systematic preinstallation of its services, such as the Google Play Store and the Gemini AI.

These concessions could reduce Google's influence over the web browser market and over its own Android ecosystem. But will they be enough to convince? Nothing is less certain. The U.S. government is considering far more radical actions, going as far as proposing that Google's search results be made accessible to its competitors, such as DuckDuckGo. A decision is expected on March 7, and it will be closely scrutinized. The return to the White House of Donald Trump, often critical of Google, could complicate matters. However, the administration could also seek to protect one of the country's most influential companies. In any case, the future of Google and of its monopoly on the Internet may well reach a decisive turning point. Hosted by Acast. Visit acast.com/privacy for more information.
In this episode of Tech Policy Grind, the conversation delves into the significant antitrust case against Google led by the US Department of Justice. The discussion covers the historical context of antitrust actions, the proposed remedies aimed at breaking Google’s monopoly, and the implications for competition in the tech industry. Joe Jerome from DuckDuckGo provides insights into the complexities of the case, the importance of technical expertise in enforcement, and the potential impact on consumers and the future of AI development. The episode concludes with reflections on the global regulatory landscape and the ongoing nature of the legal proceedings. This episode was recorded on November 22, 2024 and is being published ahead of Google’s response, which comes out on December 20, 2024.
So recently a big healthcare advertising agency reached out to me through my website and handed me a great non-broadcast project for a massive pharmaceutical company. They handed me the project, no audition necessary. And there are more jobs on the way. It looks like, fingers crossed, they're going to be a great partner and a recurring client. So how did I get this primo client, you might ask. Did I market to them for months with snappy content and radically consistent follow up? Plot twist! No! They found me using…. Google. Heard of it? They searched for medical narrators, found me, and dumped a bag of money on my doorstep. The fact is your dream client is searching online right now for someone with your exact skills. Maybe they need a voice actor who can crush their e-learning project. They type a few words into Google. Boom—pages of results. But where are you? Not page one. Not page two. You're somewhere way down in the witness protection program of search results where no one ever sees or hears you. That's why SEO matters. SEO is Search Engine Optimization. It's about showing up where your clients are already looking. SEO isn't just for Fortune 500 companies with teams of search nerds in dark rooms optimizing algorithms. It's for you, the scrappy creative professional, the solopreneur making shit happen. But you don't need to spend hours every day on SEO or hire a full-time expert. With just an hour or two a week and a few smart strategies, you can start climbing in the Google rankings. And at the end, I'll show you my results after concentrating on SEO for less than a year. So, let's talk about the things you can do—simple, approachable strategies that even the busiest freelancer can tackle. 
For more: https://vopro.pro/vo-seo _____________________________ ▶️ Watch this video next: https://youtu.be/BzSc3XNnXls SUBSCRIBE: https://www.youtube.com/@vo-pro?sub_confirmation=1 The VO Freedom Master Plan: https://vopro.pro/vo-freedom-master-plan The VO Pro Community: https://vopro.app Use code You15Tube for 15% off of your membership for life. The VO Pro Podcast: https://vopro.pro/podcast 7 Steps to Starting and Developing a Career in Voiceover: https://welcome.vopro.pro/7-steps-yt Move Touch Inspire Newsletter for Voice Actors: https://vopro.pro/move-touch-inspire-youtube Facebook Group: https://www.facebook.com/groups/vofreedom The VO Pro Shop: https://vopro.pro/shop Say Hi on Social: https://pillar.io/paulschmidtpro https://www.instagram.com/vopro.pro https://www.clubhouse.com/@paulschmidtvo https://www.linkedin.com/in/paulschmidtvo/ My voice over website: https://paulschmidtvoice.com GVAA Rate Guide: http://vorateguide.com Tools and People I Work with and Recommend (If you use these links to buy something I may earn a commission.): Recommended Book List with Links: https://amzn.to/3H9sBOO Gear I Use with Links: https://amzn.to/3V4d3kZ As an Amazon Associate I earn from qualifying purchases. For lead generation and targeting - Apollo.io: https://apollo.grsm.io/yt-paulschmidtpro Way Better than Linktree: https://pillar.io/referral/paulschmidtpro
This Day in Legal History: Gong Lum v. Rice

On November 21, 1927, the U.S. Supreme Court issued its decision in Gong Lum v. Rice, a landmark case concerning racial segregation in public education. The case arose when Martha Lum, a nine-year-old Chinese American girl, was denied entry to a school for white children in Mississippi. Local authorities directed her to attend a school designated for Black students under the state's racially segregated education system. Her father, Gong Lum, challenged the decision, arguing that such segregation violated the Equal Protection Clause of the Fourteenth Amendment.

The Supreme Court, however, ruled unanimously that Mississippi's actions were constitutional. It extended the "separate but equal" doctrine established in Plessy v. Ferguson (1896) to include Asian Americans, thereby reinforcing the legality of segregated schools. The Court maintained that states had the authority to classify students by race and assign them to separate schools, as long as the facilities were deemed equal. This decision effectively placed Chinese Americans and other non-White groups under the same discriminatory segregation laws applied to African Americans in the Jim Crow South.

The ruling was a significant blow to the Lum family and a stark reminder of the pervasive racial hierarchies embedded in U.S. law at the time. It also illustrated how the "separate but equal" doctrine legitimized widespread exclusion and inequality, beyond Black and White racial dynamics. The precedent set by Gong Lum v. Rice remained unchallenged for decades, contributing to the entrenchment of racially segregated education across the United States.

This decision underscored the systemic nature of racial discrimination in early 20th-century America. It wasn't until Brown v. Board of Education in 1954 that the Supreme Court overruled the doctrine of "separate but equal," marking a pivotal shift toward dismantling segregation in public education. Gong Lum v.
Rice remains a critical case in the history of American civil rights law, reflecting the broader struggles of minority groups against institutionalized racism.

The latest round of year-end bonuses at major law firms reflects a cautious approach to associate compensation as firms prioritize protecting partner profits amid rising revenues. Milbank LLP initiated the bonus announcements, offering payments up to $140,000, including special bonuses introduced earlier in the year. At least five firms have matched Milbank's bonus structure, with others expected to follow. However, the stagnant bonus scale, unchanged since 2021, indicates a broader effort to manage costs while maintaining profitability.

This year, firms are separating special bonuses from regular ones to avoid setting new precedents for higher compensation scales. Recruiters note that Milbank's early announcements help attract associate attention, a valuable branding strategy. The firm's financial success, with $1.5 billion in gross revenue and over $5.1 million in profits per equity partner last year, underscores its robust position, even as it faces some high-profile departures and lateral hires.

Despite the cautious bonus adjustments, top law firms are thriving. A Wells Fargo survey revealed a 15% revenue increase and a 25% net income rise among the 50 largest firms, driven by higher demand, productivity, and billing rates. Still, associate productivity has only slightly improved from record lows, and firms are increasingly focusing on partner-level recruitment to sustain profitability. Traditional leaders like Cravath remain influential in finalizing bonus decisions, reinforcing long-standing industry customs.

Big Law Hedges Associate Bonuses to Protect Partner Profits

Indian billionaire Gautam Adani has been charged by U.S. prosecutors in a $265 million bribery scheme involving payments to Indian officials to secure power contracts and develop India's largest solar power project.
The indictment, which includes securities fraud and conspiracy charges, also implicates Adani's nephew, Sagar Adani, and former Adani Green Energy CEO Vneet Jaain. The scheme allegedly defrauded American investors by concealing corruption in financial materials for bond offerings, including one that raised $750 million in 2021.

The U.S. has issued arrest warrants for Gautam and Sagar Adani, intending to involve foreign authorities under an extradition treaty with India. Adani's conglomerate, already under scrutiny after a critical report by Hindenburg Research in 2023, saw its market value plunge by $20 billion following the indictment. Adani Green Energy canceled a $600 million bond sale, and shares of Adani-related firms dropped sharply.

Indian regulators, including SEBI, have yet to comment on the U.S. charges, while opposition parties in India demand further investigations into the group. The Adani Group denies the allegations and plans to challenge the charges, but the scandal has intensified scrutiny over the company's operations and political connections.

Indian tycoon Gautam Adani charged in US over $265 million bribery scheme | Reuters

The U.S. Department of Justice (DOJ) has proposed sweeping measures to address what it calls Google's illegal monopoly in online search and related advertising. Prosecutors argue that Google must divest its Chrome browser, share search data with competitors, and potentially sell its Android operating system to restore competition. The proposals aim to dismantle Google's dominant market position, as it processes 90% of U.S. searches.

Other recommendations include banning Google from exclusive agreements with device makers like Apple, ending its preference for its search engine on Chrome and Android, and restricting acquisitions of search rivals or AI products.
A five-member technical committee would oversee compliance for up to a decade, with powers to review documents, interview staff, and inspect software code.

Chrome and Android are central to Google's business, as they collect user data crucial for targeted advertising. Prosecutors claim these platforms unfairly entrench Google's dominance by limiting rivals' market access. The DOJ also proposes mandatory licensing of search results to competitors at low cost and unrestricted data-sharing unless privacy laws prevent it. Google opposes the measures, calling them government overreach that would harm consumers and innovation. A trial is scheduled for April 2025, during which Google can present alternative proposals. These measures could reshape the digital landscape and are being closely watched by competitors like DuckDuckGo, which supports the DOJ's initiatives.

Google must divest Chrome to restore competition in online search, DOJ says | Reuters

The U.S. Consumer Financial Protection Bureau (CFPB) has finalized a rule to regulate major technology firms like Apple Inc. that offer digital wallets and payment apps. Companies processing more than 50 million U.S.-dollar transactions annually will now face oversight similar to banks. This rule significantly raises the initial threshold of 5 million transactions proposed last year. It empowers the CFPB to supervise these firms regularly, not just when legal violations occur, as digital payments become increasingly essential to consumers.

CFPB Director Rohit Chopra emphasized that digital payments are now a necessity, warranting heightened oversight. The shift comes as digital wallet usage in the U.S. surged to 62% in 2023, up from 47% the previous year, with Apple Pay maintaining dominance in the sector.

The new regulatory environment follows global scrutiny of tech firms. Apple recently agreed with European regulators to open its near-field communication technology to competitors, a notable change in its approach.
Other firms, like PayPal, are also cooperating with the CFPB on compliance questions regarding digital wallet features. The rule, set to take effect 30 days after its publication, introduces a significant shift in how large tech firms are governed. However, it remains an open question how these regulations will fare under the Trump administration, given the potential for policy shifts in the new political climate.

Apple Pay, Other Tech Firms Come Under CFPB Regulatory Oversight

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
We discuss the fact that the Nvidia RTX 5000 series, RDNA 4, and Intel Battlemage launch SOON! [SPON: Thanks to DuckDuckGo for sponsoring this video! Try Privacy Pro free for 7 days at https://duckduckgo.com/law ]
0:00 Tom is in Costa Rica! (Intro Banter)
5:06 PS4 RAM, OneDrive (Corrections)
12:09 Intel promises to Fix Arrow Lake
15:39 Will Intel still need to cut prices if they "fix" ARL?
23:54 AMD cuts Workforce 4%
31:09 Panic Buying commences as Trump Tariffs Loom
42:59 Nvidia APU Performance & Release Date LEAKED!
47:59 How well would Zen 6 Medusa compete with Nvidia's APU?
57:05 Nvidia Ends RTX 4000 Production ahead of RTX 5090 Launch
1:03:09 Intel Battlemage will "Launch" in December
1:10:37 PS5 Pro Releases to Favorable Reviews & Impressive Sales
1:19:46 Phil Spencer CONFIRMS XBOX Handheld
1:22:34 PS5 Slim Price Drop, Intel Adamantine, AMD iGPU Naming (Wrap-Up)
1:29:24 Leaving X? (Final RM)
https://www.pcgamesn.com/intel/arrow-lake-updates-plan
https://www.tomshardware.com/pc-components/cpus/intel-announces-arrow-lake-fix-coming-within-a-month-robert-hallock-confirms-poor-gaming-performance-is-due-to-optimization-issues
https://www.youtube.com/watch?v=P2OHRH7221w
https://www.theregister.com/2024/11/06/intel_sued_over_raptor_lake_chips/
https://storage.courtlistener.com/recap/gov.uscourts.cand.438883/gov.uscourts.cand.438883.1.0.pdf
https://www.techpowerup.com/328783/amd-to-cut-its-workforce-by-about-four-percent
https://www.crn.com/news/components-peripherals/2024/amd-confirms-layoffs-2024
https://youtu.be/DT1WmptaKpc?si=AmJFroYOxEfbqrq5
https://arstechnica.com/tech-policy/2024/11/laptop-smartphone-and-game-console-prices-could-soar-after-the-election/
https://www.icis.com/explore/resources/news/2024/11/15/11051135/shipping-asia-us-container-rates-stable-as-east-coast-port-labor-negotiations-break-down/
https://theloadstar.com/trumps-tariff-plan-will-cause-another-massive-asia-us-freight-rate-spike/
https://www.cnbc.com/2024/11/06/companies-race-to-get-imports-to-us-with-trump-win-vow-on-new-tariffs.html
https://www.tomshardware.com/pc-components/gpus/worlds-second-largest-gpu-maker-flees-china-on-cusp-of-rtx-5090-launch-to-avoid-us-sanctions-zotac-inno3d-and-manli-bail-amidst-looming-us-gpu-export-controls
https://www.youtube.com/live/8bfS0fcyXpg?si=rAxm2T6Ibb0kQRuU
https://youtu.be/JyTRoMXngRk?si=rCkpWvW-WW2MJaII&t=6908
https://www.tweaktown.com/news/101634/nvidias-first-consumer-apu-teased-leaks-tease-rtx-4070-laptop-gpu-perf-at-65w-while-gaming/index.html
https://videocardz.com/newz/nvidia-shifts-production-to-geforce-rtx-50-series-only-one-ada-gpu-reportedly-still-in-production
https://x.com/GawroskiT/status/1857504060246831464
https://www.ign.com/articles/playstation-5-pro-review
https://www.theverge.com/reviews/24289319/ps5-pro-review
https://youtu.be/0lU1tuWg-zA?si=r1zp43vttybCVzFf
https://youtu.be/bUuEwJ8leCI?si=mmCiA_Jj1FErQOUo
https://insider-gaming.com/ps5-pro-sales-better-ps4-pro/
https://www.kitguru.net/tech-news/mustafa-mahmoud/playstation-5-pro-sales-are-outpacing-the-ps4-pros/
https://gamerant.com/ps5-pro-sales-ps4-comparison/
https://www.techpowerup.com/328769/xbox-handheld-confirmed-to-join-mobile-gaming-fray-dont-hold-your-breath-though
https://www.theverge.com/2024/11/12/24294720/sony-playstation-5-ps5-price-cut-holidays
https://www.taipeitimes.com/News/biz/archives/2024/11/08/2003826545
https://x.com/highyieldyt/status/1857415777340526842
https://videocardz.com/newz/amd-strix-halo-igpu-naming-revealed-radeon-8060s-to-feature-40-rdna3-5-compute-units
https://www.tomshardware.com/pc-components/gpus/intel-takes-down-amd-in-our-integrated-graphics-battle-royale
- BitChute: https://old.bitchute.com/video/1g0Tq3sii6HQ/ - Rumble (first hour & 20 minutes): https://rumble.com/v5lbi4o-open-discussion-saturday-november-2-1000-am-to-1200-pm-pst-asmr.html - Odysee: https://odysee.com/@chycho:6/open-discussion-saturday,-november-2,-10:a - CensorTube: https://youtube.com/live/o-dLL5rRDvg ▶️ Matrix: https://matrix.to/#/#chychonians:matrix.org ***SUPPORT*** ▶️ Patreon: https://www.patreon.com/chycho ▶️ Paypal: https://www.paypal.me/chycho ▶️ Substack: https://chycho.substack.com/ ▶️ SubscribeStar: https://www.subscribestar.com/chycho APPROXIMATE TIMESTAMPS: - Salutations - Today's Snacks: Mashed Potatoes and Roasted Vegetables (5:40-6:50) - Elon Musk's Involvement in United States Elections: Subservience to Zionist Masters, Make Israel Great Again (6:56-12:09) - Math Update - More on US Elections - Zionists Are Cancer on Humanity, They Will Consume the Host: By Their Words, We Are Amalek (15:47-17:07) - Some Random Discussion - Trump Is a Zionist Puppet, but There Is a Small Possibility That He Will Stop the Genocide in Gaza (26:54-27:53) - United States of America Is Zionist Occupied Territories: In 1976, the Shah of Iran Told Us About the ‘Lobby' (28:13-30:54) - Zionist From the Multiverse Telling Us That Their Israel Respects Civil Rights: Low IQ Genocidal Red Rats (33:03-35:18) - Israeli Zionist's Head Explodes Learning About the Hannibal Directive, IDF Killing Most Civilians on October 7th (36:19-45:14) - President of Argentina, Javier Milei, Is Not an Anarcho-Capitalist, He Is a Zionist, Sent All of Argentina's Gold to UK (45:25-53:51) - Do Not Use Google or Duck-Duck Go for Searches, Use Yandex, Brave or Startpage (53:51-55:52) - Blue Sky - Cannabis Still Categorized As Schedule I Tells You All You Need To Know About the Democrats (DNC) (58:43-1:01:18) - Ribs - Life Advice: To Make Sure Elderly in Family Are Taken Care, Deal With Bureaucracy Early, Will/POA/Medical Directive (1:04:15-1:05:33) - Canadian Elections: Best Case 
Scenario if the People's Party of Canada Had a Voice in Parliament, PPC, Maxime Bernier (1:08:53-1:18:26) - If Government Bureaucrats Take Away Our Bodily Autonomy Then We Are Slaves: Chop Chop Chop (1:18:55-1:20:47) - Some Random Discussion, US Elections, Ukraine, North & South Korea, Cuba and Meow Meow - Most Western Countries Support Israel Because They Have Been Coopted by Zionists & Are Resource Poor (1:40:11-1:42:18) ARTICLE: Joe Sacco and Palestine https://www.counterpunch.org/2024/03/18/joe-sacco-and-palestine/ ***CRYPTO*** Monero: 41suzjTJn792VZuJFZD1yD1SrZjPxrCbxdscz583Z4uNFZUXNXtnjZNbmnVD39mRK5Vkn5X3rZN6PheafCiMafSn4WVBYhE Bitcoin (BTC): 1Peam3sbV9EGAHr8mwUvrxrX8kToDz7eTE Ethereum (ETH): 0xCEC12Da3D582166afa8055137831404Ea7753FFd Doge (DOGE): D83vU3XP1SLogT5eC7tNNNVzw4fiRMFhog
Zhe Scott is Founder and CEO of The SEO Queen. She's a 2X best-selling author on Amazon who has helped companies generate $250 million in revenue since the launch of her company in 2017. She is a brand-building expert for brands that want to be visible in algorithm-based platforms like Google, Bing, and DuckDuckGo. We sat down with Zhe and explored the intersection of SEO and brand building. Zhe discusses why brand management is a cornerstone of business success and outlines the key focus areas for brand development. She provides expert advice on how businesses can use SEO to boost their brand's online presence, attract more customers, and drive growth. This episode is packed with practical tips and strategies to help you build a powerful and recognizable brand in the digital age.

Website: www.seoqueen.com
Connect with Zhe on LinkedIn: www.linkedin.com/in/theseoqueen/

Let's Stay in Touch! LinkedIn (be sure to mention you heard the podcast ;-)) Twitter Instagram Website - B.O.O.S.T.® Your Brilliance
In the latest episode of The Above Board Podcast, hosts Paul Jarvis and Jack Ellis discuss the decline of Google as an effective search engine, particularly in light of its increasing reliance on AI-generated content. They share personal anecdotes about their frustrations with search results, noting that traditional search engines like Google and DuckDuckGo have become less reliable, often returning irrelevant or incorrect information. Jack expresses skepticism about AI's ability to replace developers, arguing that while AI can assist with basic tasks, it often fails in more complex scenarios, leading to wasted time and confusion. Both hosts lament the impact of SEO practices on content quality, which they believe prioritizes keyword optimization over meaningful information, resulting in a frustrating user experience filled with ads and irrelevant content. They conclude that many users are outsourcing their critical thinking to AI and search engines, which may ultimately hinder their understanding and knowledge retention.

Takeaways:
- Google search has become less effective over time
- AI is not a reliable tool for complex technical tasks
- SEO practices have led to a decline in content quality
- Search results can significantly influence public opinion
- Understanding the incentives behind search engines is crucial
- AI-generated content can often be incorrect or misleading
- The reliance on AI may lead to a decline in critical thinking
- Users are increasingly frustrated with the quality of search results
- The future of search engines may involve more AI integration
- Trust in search engines is diminishing due to advertising influence
OpenAI DevDay is almost here! Per tradition, we are hosting a DevDay pregame event for everyone coming to town! Join us with demos and gossip! Also sign up for related events across San Francisco: the AI DevTools Night, the xAI open house, the Replicate art show, the DevDay Watch Party (for non-attendees), Hack Night with OpenAI at Cloudflare. For everyone else, join the Latent Space Discord for our online watch party and find fellow AI Engineers in your city.

OpenAI's recent o1 release (and Reflection 70b debacle) has reignited broad interest in agentic general reasoning and tree search methods. While we have covered some of the self-taught reasoning literature on the Latent Space Paper Club, it is notable that Eric Zelikman ended up at xAI, whereas OpenAI's hiring of Noam Brown and now Shunyu suggests more interest in tool-using chain of thought/tree of thought/generator-verifier architectures for Level 3 Agents.

We were more than delighted to learn that Shunyu is a fellow Latent Space enjoyer, and invited him back (after his first appearance on our NeurIPS 2023 pod) for a look through his academic career with Harrison Chase (one year after his first LS show).

ReAct: Synergizing Reasoning and Acting in Language Models (paper link)

Following seminal Chain of Thought papers from Wei et al and Kojima et al, and reflecting on lessons from building the WebShop human ecommerce trajectory benchmark, Shunyu's first big hit, the ReAct paper, showed that using LLMs to "generate both reasoning traces and task-specific actions in an interleaved manner" achieved remarkably greater performance (less hallucination/error propagation, higher ALFWorld/WebShop benchmark success) than CoT alone.
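The interleaved reason-then-act loop that ReAct describes can be sketched in a few lines. This is a minimal illustration, not the paper's actual prompts or code: `fake_llm` and the single `search` tool are stand-ins, and the `Thought:`/`Action:`/`Observation:`/`Finish:` line format is just one plausible rendering of the interleaving.

```python
# Minimal ReAct-style loop: the model alternates free-text "Thought:" lines
# with "Action: tool[input]" lines; each action's result is appended to the
# context as an "Observation:" line before the next step. The LLM and tools
# here are toy stand-ins for illustration only.

def react_loop(llm, tools, question, max_steps=5):
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(context)  # e.g. "Thought: ...\nAction: search[x]"
        context += step + "\n"
        if step.startswith("Finish:"):
            return step.removeprefix("Finish:").strip(), context
        if "Action: " in step:
            action = step.split("Action: ", 1)[1].strip()
            name, arg = action.split("[", 1)
            obs = tools[name](arg.rstrip("]"))
            context += f"Observation: {obs}\n"  # feed result back in
    return None, context

# A scripted "LLM" that answers a lookup question in two steps.
def fake_llm(context):
    if "Observation:" not in context:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Finish: Paris"

tools = {"search": lambda q: "Paris is the capital of France."}
answer, trace = react_loop(fake_llm, tools, "What is the capital of France?")
```

In the paper's actual setup the model is a prompted LLM and the observations come from tools like a Wikipedia search API (as in the HotpotQA experiments); the key point the sketch preserves is that reasoning traces and actions share one growing context.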
In even better news, ReAct scales fabulously with finetuning.

As a member of the elite Princeton NLP group, Shunyu was also a coauthor of the Reflexion paper, which we discuss in this pod.

Tree of Thoughts (paper link here)

Shunyu's next major improvement on the CoT literature was Tree of Thoughts:

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role… ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.

The beauty of ToT is it doesn't require pretraining with exotic methods like backspace tokens or other MCTS architectures. You can listen to Shunyu explain ToT in his own words on our NeurIPS pod, but also the ineffable Yannic Kilcher.

Other Work

We don't have the space to summarize the rest of Shunyu's work; you can listen to our pod with him now, and we recommend the CoALA paper and his initial hit webinar with Harrison, today's guest cohost, as well as Shunyu's PhD Defense Lecture and his latest lecture covering a Brief History of LLM Agents.

As usual, we are live on YouTube!
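The deliberate propose/evaluate/prune search that ToT layers on top of chain of thought can be sketched as a small breadth-first search with a beam. This is a toy sketch, not the paper's implementation: `propose` and `evaluate` stand in for the LLM's thought generator and self-evaluation prompts, and the digit-sum task is purely illustrative.

```python
# Tree-of-Thoughts as breadth-first search: at each depth, every partial
# solution is expanded into candidate next "thoughts", each candidate is
# scored by an evaluator, and only the best `beam` paths survive (pruning
# stands in for the self-evaluation that decides where to look ahead).

def tree_of_thoughts(root, propose, evaluate, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [path + [t] for path in frontier for t in propose(path)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]  # keep only the most promising paths
    return max(frontier, key=evaluate)

# Toy task: build the largest-sum sequence of digits, one "thought" per step.
propose = lambda path: [1, 2, 3]   # candidate next thoughts (LLM stand-in)
evaluate = lambda path: sum(path)  # self-evaluation stand-in
best = tree_of_thoughts([], propose, evaluate, depth=3, beam=2)
```

Swapping the sort-and-truncate for a priority queue or recursion gives the best-first and depth-first variants; the structural point is that the LM both generates branches and scores them, with backtracking falling out of keeping multiple paths alive.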
Show Notes
* Harrison Chase
* LangChain, LangSmith, LangGraph
* Shunyu Yao
* Alec Radford
* ReAct Paper
* Hotpot QA
* Tau Bench
* WebShop
* SWE-Agent
* SWE-Bench
* Tree of Thoughts
* CoALA Paper

Related Episodes
* Our Thomas Scialom (Meta) episode
* Shunyu on our NeurIPS 2023 Best Papers episode
* Harrison on our LangChain episode

Mentions
* Sierra
* Voyager
* Jason Wei
* Tavily
* SERP API
* Exa

Timestamps
* [00:00:00] Opening Song by Suno
* [00:03:00] Introductions
* [00:06:16] The ReAct paper
* [00:12:09] Early applications of ReAct in LangChain
* [00:17:15] Discussion of the Reflexion paper
* [00:22:35] Tree of Thoughts paper and search algorithms in language models
* [00:27:21] SWE-Agent and SWE-Bench for coding benchmarks
* [00:39:21] CoALA: Cognitive Architectures for Language Agents
* [00:45:24] Agent-Computer Interfaces (ACI) and tool design for agents
* [00:49:24] Designing frameworks for agents vs humans
* [00:53:52] UX design for AI applications and agents
* [00:59:53] Data and model improvements for agent capabilities
* [01:19:10] TauBench
* [01:23:09] Promising areas for AI

Transcript

Alessio [00:00:01]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we have a super special episode. I actually always wanted to take like a selfie and go like, you know, POV, you're about to revolutionize the world of agents because we have two of the most awesome hiring agents in the house. So first, we're going to welcome back Harrison Chase. Welcome. Excited to be here. What's new with you recently in sort of like the 10, 20 second recap?

Harrison [00:00:34]: LangChain, LangSmith, LangGraph, pushing on all of them. Lots of cool stuff related to a lot of the stuff that we're going to talk about today, probably.

Swyx [00:00:42]: Yeah.

Alessio [00:00:43]: We'll mention it in there.
And the Celtics won the title.

Swyx [00:00:45]: And the Celtics won the title. You got that going on for you. I don't know. Is that like floorball? Handball? Baseball? Basketball.

Alessio [00:00:52]: Basketball, basketball.

Harrison [00:00:53]: Patriots aren't looking good though, so that's...

Swyx [00:00:56]: And then Shunyu, you've also been on the pod, but only in like a sort of oral paper presentation capacity. But welcome officially to the Latent Space pod.

Shunyu [00:01:03]: Yeah, I've been a huge fan. So thanks for the invitation. Thanks.

Swyx [00:01:07]: Well, it's an honor to have you on. You're one of like, you're maybe the first PhD thesis defense I've ever watched in like this AI world, because most people just publish single papers, but every paper of yours is a banger. So congrats.

Shunyu [00:01:22]: Thanks.

Swyx [00:01:24]: Yeah, maybe we'll just kick it off with, you know, what was your journey into using language models for agents? I like that your thesis advisor, I didn't catch his name, but he was like, you know... Karthik. Yeah. It's like, this guy just wanted to use language models and it was such a controversial pick at the time. Right.

Shunyu [00:01:39]: The full story is that in undergrad, I did some computer vision research and that's how I got into AI. But at the time, I feel like, you know, you're just composing all the GAN or 3D perception or whatever together and it's not exciting anymore. And one day I just see this transformer paper and that's really cool. But I really got into language models only when I entered my PhD and met my advisor Karthik. So he was actually the second author of GPT-1 when he was like a visiting scientist at OpenAI. With Alec Radford?

Swyx [00:02:10]: Yes.

Shunyu [00:02:11]: Wow. That's what he told me. It's like back in OpenAI, they did this GPT-1 together and Ilya just said, Karthik, you should stay because we just solved the language. But apparently Karthik is not fully convinced.
So he went to Princeton, started his professorship and I'm really grateful. So he accepted me as a student, even though I have no prior knowledge in NLP. And you know, we just met for the first time and he's like, you know, what do you want to do? And I'm like, you know, you have done those text game things. That's really cool. I wonder if we can just redo them with language models. And that's how the whole journey began. Awesome.

Alessio [00:02:46]: So GPT-2 was out at the time? Yes, that was 2019.

Shunyu [00:02:48]: Yeah.

Alessio [00:02:49]: Way too dangerous to release. And then I guess the first work of yours that I came across was ReAct, which was a big part of your defense. But also Harrison, when you came on the pod last year, you said that was one of the first papers that you saw when you were getting inspired for LangChain. So maybe give a recap of why you thought it was cool, because you were already working in AI and machine learning. And then, yeah, you can kind of like intro the paper formally. What was interesting to you specifically?

Harrison [00:03:16]: Yeah, I mean, I think the interesting part was using these language models to interact with the outside world in some form. And I think in the paper, you mostly deal with Wikipedia. And I think there's some other data sets as well. But the outside world is the outside world. And so interacting with things that weren't present in the LLM and APIs and calling into them and thinking about the ReAct reasoning and acting and kind of like combining those together and getting better results. I'd been playing around with LLMs, been talking with people who were playing around with LLMs. People were trying to get LLMs to call into APIs, do things, and it was always, how can they do it more reliably and better? And so this paper was basically a step in that direction. And I think really interesting and also really general as well.
Like I think that's part of the appeal is just how general and simple in a good way, I think the idea was. So that it was really appealing for all those reasons.

Shunyu [00:04:07]: Simple is always good. Yeah.

Alessio [00:04:09]: Do you have a favorite part? Because I have one favorite part from your PhD defense, which I didn't understand when I read the paper, but you said something along the lines of: ReAct doesn't change the outside or the environment, but it does change the inside through the context, putting more things in the context. You're not actually changing any of the tools around you to work for you, but you're changing how the model thinks. And I think that was like a very profound thing when I... now that I've been using these tools for like 18 months, I'm like, I understand what you meant, but like to say that at the time you did the PhD defense was not trivial. Yeah.

Shunyu [00:04:41]: Another way to put it is like thinking can be an extra tool that's useful.

Alessio [00:04:47]: Makes sense. Checks out.

Swyx [00:04:49]: Who would have thought? I think it's also more controversial within his world because everyone was trying to use RL for agents. And this is like the first kind of zero gradient type approach. Yeah.

Shunyu [00:05:01]: I think the bigger kind of historical context is that we have these two big branches of AI. So if you think about RL, right, that's pretty much the equivalent of agents at the time. And it's like agent is equivalent to reinforcement learning and reinforcement learning is equivalent to whatever game environment they're using, right? Atari games or Go or whatever. So you have like a pretty much, you know, you have a biased kind of like set of methodologies in terms of reinforcement learning and represents agents. On the other hand, I think NLP is like a historical kind of subject. It's not really into agents, right? It's more about reasoning. It's more about solving those concrete tasks.
And if you look at ACL, right, like each task has its own track, right? Summarization has a track, question answering has a track. So I think really it's about rethinking agents in terms of what could be the new environments, that it's not just Atari games or whatever video games, but also those text games or language games. And also thinking about, could there be like a more general kind of methodology beyond just designing specific pipelines for each NLP task? That's like the bigger kind of context, I would say.

Alessio [00:06:14]: Is there an inspiration spark moment that you remember, or how did you come to this? We had Tri Dao on the podcast and he mentioned he was really inspired working with like systems people to think about Flash Attention. What was your inspiration journey?

Shunyu [00:06:27]: So actually before ReAct, I spent the first two years of my PhD focusing on text-based games, or in other words, text adventure games. It's a very kind of small kind of research area and quite ad hoc, I would say. And there are like, I don't know, like 10 people working on that at the time. And have you guys heard of Zork 1, for example? So basically the idea is you have this game and you have text observations, like you see a monster, you see a dragon.

Swyx [00:06:57]: You're eaten by a grue.

Shunyu [00:06:58]: Yeah, you're eaten by a grue. And you have actions like kill the grue with a sword or whatever. And that's like a very typical setup of a text game. So I think one day after I've seen all the GPT-3 stuff, I just think about, you know, how can I solve the game? Like why are those AI, you know, machine learning methods pretty stupid, but we are pretty good at solving the game relatively, right? So for the context, the predominant method to solve this text game is obviously reinforcement learning. And the idea is you just try out RL in those games for like millions of steps and you kind of just overfit to the game.
But there's no language understanding at all. And I'm like, why can't AI solve the game better? And it's kind of like, because we think about the game, right? Like when we see this very complex text observation, like you see a grue and you might see a sword, you know, in the right of the room and you have to go through the wooden door to go to that room. You will think, you know, oh, I have to kill the monster and to kill that monster, I have to get the sword, I have to go, right? And this kind of thinking actually helps us kind of zero-shot the game. And it's like, why don't we also enable the text agents to think? And that's kind of the prototype of ReAct. And I think that's actually very interesting because the prototype, I think, was around November of 2021. So that's even before like chain of thought or whatever came up. So we did a bunch of experiments in the text game, but it was not really working that well. Like those text games are just too hard. I think today it's still very hard. Like if you use GPT-4 to solve it, it's still very hard. So the change came when I started the internship in Google. And apparently Google cares less about text games, they care more about what's more practical. So pretty much I just reapplied the idea, but to more practical kind of environments like Wikipedia or simpler text games like ALFWorld, and it just worked. It's kind of like you first have the idea and then you try to find the domains and the problems to demonstrate the idea, which is, I would say, different from most of the AI research, but it kind of worked out for me in that case.

Swyx [00:09:09]: For Harrison, when you were implementing ReAct, what were people applying ReAct to in the early days?

Harrison [00:09:14]: I think the first demo we did probably had like a calculator tool and a search tool. So like general things, we tried to make it pretty easy to write your own tools and plug in your own things.
And so this is one of the things that we've seen in LangChain is people who build their own applications generally write their own tools. Like there are a few common ones. I'd say like the three common ones might be like a browser, a search tool, and a code interpreter. But then other than that...

Swyx [00:09:37]: The LMS. Yep.

Harrison [00:09:39]: Yeah, exactly. It matches up very nice with that. And we actually just redid like our integrations docs page, and if you go to the tool section, they like highlight those three, and then there's a bunch of like other ones. And there's such a long tail of other ones. But in practice, like when people go to production, they generally have their own tools or maybe one of those three, maybe some other ones, but like very, very few other ones. So yeah, I think the first demos was a search and a calculator one. And there's... What's the data set?

Shunyu [00:10:04]: Hotpot QA.

Harrison [00:10:05]: Yeah. Oh, so there's that one. And then there's like the celebrity one by the same author, I think.

Swyx [00:10:09]: Olivia Wilde's boyfriend squared. Yeah. 0.23. Yeah. Right, right, right.

Harrison [00:10:16]: I'm forgetting the name of the author, but there's...

Swyx [00:10:17]: I was like, we're going to over-optimize for Olivia Wilde's boyfriend, and it's going to change next year or something.

Harrison [00:10:21]: There's a few data sets kind of like in that vein that require multi-step kind of like reasoning and thinking. So one of the questions I actually had for you in this vein, like the ReAct paper, there's a few things in there, or at least when I think of that, there's a few things that I think of. There's kind of like the specific prompting strategy. Then there's like this general idea of kind of like thinking and then taking an action. And then there's just even more general idea of just like taking actions in a loop. Today, like obviously language models have changed a lot. We have tool calling.
The specific prompting strategy probably isn't used super heavily anymore. Would you say that like the concept of ReAct is still used though? Or like do you think that tool calling and running tool calling in a loop, is that ReAct

Swyx [00:11:02]: in your mind?

Shunyu [00:11:03]: I would say like it's like more implicitly used than explicitly used. To be fair, I think the contribution of ReAct is actually twofold. So first is this idea of, you know, we should be able to use tools in a very general way. Like there should be a single kind of general method to handle interaction with various environments. I think ReAct is the first paper to demonstrate the idea. But then I think later there was Toolformer or whatever, and this becomes like a trivial idea. But I think at the time, that's like a pretty non-trivial thing. And I think the second contribution is this idea of what people call like inner monologue or thinking or reasoning or whatever, to be paired with tool use. I think that's still non-trivial because if you look at the default function calling or whatever, like there's no inner monologue. And in practice, that actually is important, especially if the tool that you use is pretty different from the training distribution of the language model. I think those are the two main things that are kind of inherited.

Harrison [00:12:10]: On that note, I think OpenAI even recommended when you're doing tool calling, it's sometimes helpful to put a thought field in the tool, along with all the actual required arguments,

Swyx [00:12:19]: and then have that one first.

Harrison [00:12:20]: So it fills out that first, and they've shown that that's yielded better results. The reason I ask is just like this same concept is still alive, and I don't know whether to call it a ReAct agent or not. I don't know what to call it. I think of it as ReAct, like it's the same ideas that were in the paper, but it's obviously a very different implementation at this point in time.
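The "thought field" idea mentioned above can be sketched as a tool definition whose first required parameter is the model's reasoning. The schema shape follows the common JSON-Schema function-calling convention, but the tool name, field names, and the claim that listing the thought first matters are illustrative assumptions here, not a quoted spec.

```python
# Sketch of a tool definition that asks the model to fill in a "thought"
# argument before the real arguments, so an inner monologue is produced
# even when the model replies only with a function call. Names are
# illustrative; the schema shape follows common function-calling APIs.

search_tool = {
    "name": "search",
    "description": "Search the web for a query.",
    "parameters": {
        "type": "object",
        "properties": {
            # Listed first so the model writes its reasoning before the query.
            "thought": {
                "type": "string",
                "description": "Why this tool call helps answer the question.",
            },
            "query": {"type": "string", "description": "The search query."},
        },
        "required": ["thought", "query"],
    },
}
```

The design point is that since arguments are generated left to right, a leading thought field acts like a tiny chain of thought inside the call itself.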
And so I just don't know what to call it.

Shunyu [00:12:40]: I feel like people will sometimes think more in terms of different tools, right? Because if you think about a web agent versus, you know, like a function calling agent, calling a Python API, you would think of them as very different. But in some sense, the methodology is the same. It depends on how you view them, right? I think people will tend to think more in terms of the environment and the tools rather than the methodology. Or, in other words, I think the methodology is kind of trivial and simple, so people will try to focus more on the different tools. But I think it's good to have a single underlying principle of those things.

Alessio [00:13:17]: How do you see the surface of ReAct getting molded into the model? So function calling is a good example of like, now the model does it. What about the thinking? Now most models that you use kind of do chain of thought on their own, they kind of produce steps. Do you think that more and more of this logic will be in the model? Or do you think the context window will still be the main driver of reasoning and thinking?

Shunyu [00:13:39]: I think it's already default, right? You do some chain of thought and you do some tool call, the cost of adding the chain of thought is kind of relatively low compared to other things. So it's not hurting to do that. And I think it's already kind of common practice, I would say.

Swyx [00:13:56]: This is a good place to bring in either Tree of Thoughts or Reflexion, your pick.

Shunyu [00:14:01]: Maybe Reflexion, to respect the time order, I would say.

Swyx [00:14:05]: Any backstory as well, like the people involved with Noah and the Princeton group. We talked about this offline, but people don't understand how these research pieces come together and this ideation.

Shunyu [00:14:15]: I think Reflexion is mostly Noah's work, I'm more in like an advising kind of role.
The story is, I don't remember the time, but one day we just see this preprint that's like Reflexion: Autonomous Agent with Memory or whatever. And it's kind of like an extension to ReAct, which uses this self-reflection. I'm like, oh, somehow you've become very popular. And Noah reached out to me, it's like, do you want to collaborate on this and make this from an arXiv preprint to something more solid, like a conference submission? I'm like, sure. We started collaborating and we remain good friends today. And I think another interesting backstory is Noah was contacted by OpenAI at the time. It's like, this is pretty cool, do you want to just work at OpenAI? And I think Sierra also reached out at the same time. It's like, this is pretty cool, do you want to work at Sierra? And I think Noah chose Sierra, but it's pretty cool because he was still like a second year undergrad and he's a very smart kid.

Swyx [00:15:16]: Based on one paper. Oh my god.

Shunyu [00:15:19]: He's done some other research based on programming language or chemistry or whatever, but I think that's the paper that got the attention of OpenAI and Sierra.

Swyx [00:15:28]: For those who haven't gone too deep on it, the way that you present the inside of ReAct, can you do that also for Reflexion? Yeah.

Shunyu [00:15:35]: I think one way to think of Reflexion is that the traditional idea of reinforcement learning is you have a scalar reward and then you somehow back-propagate the signal of the scalar reward to the rest of your neural network through whatever algorithm, like policy gradient or A2C or whatever. And if you think about real life, most of the reward signal is not scalar. It's like your boss told you, you should have done a better job in this, but you could jump on that or whatever. It's not like a scalar reward, like 29 or something. I think in general, humans deal more with non-scalar reward, or you can say language feedback.
And the way that they deal with language feedback also has this back-propagation process, right? Because you start from the outcome, how you did on a task, and then you reflect on what could have been done differently to make it better. And you change your prompt, right? Basically, you change your prompt on how to do task A and how to do task B, and then you do the whole thing again. So it's really like a pipeline of language gradient descent, where you have text reasoning to replace those gradient descent algorithms. I think that's one way to think of Reflexion.

Harrison [00:16:47]: One question I have about Reflexion is how general you think the algorithm there is. For context, at LangChain and at other places as well, we found it pretty easy to implement ReAct in a standard way. You plug in any tools and it kind of works off the shelf; you can get it up and running. I don't think we have an off-the-shelf implementation of Reflexion in the general sense. The concepts, absolutely, we see used in different specific cognitive architectures, but I don't think we have one that comes off the shelf, and I don't think any of the other frameworks have one that comes off the shelf. I'm curious whether that's because it's not general enough, or because it's complex, since it also requires running things more times.

Swyx [00:17:28]: Maybe that's not feasible.

Harrison [00:17:30]: I'm curious how you think about the generality and complexity. Should we have one that comes off the shelf?

Shunyu [00:17:36]: I think the algorithm is general in the sense that it's just as general as other algorithms, if you think about policy gradient or whatever, but it's not applicable to all tasks, just like other algorithms. You can argue PPO is also general, but it works better on some sets of tasks and not on others. I think it's the same situation for Reflexion.
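The "language gradient descent" loop just described, act, get verbal feedback instead of a scalar reward, fold the reflection back into the prompt, and retry, can be sketched as follows. The `attempt` and `evaluate` stubs are scripted stand-ins for a model and an evaluator, assumptions made purely for illustration.

```python
# Sketch of a Reflexion-style loop: a language-feedback analogue of
# gradient descent. Stubs below are scripted, not real model calls.

def attempt(prompt: str) -> str:
    """Stand-in actor: succeeds only once the prompt carries a reflection."""
    return "correct answer" if "Reflection:" in prompt else "wrong answer"

def evaluate(answer: str) -> tuple[bool, str]:
    """Stand-in evaluator: returns (success, verbal feedback),
    not a scalar reward."""
    if answer == "correct answer":
        return True, ""
    return False, "The answer missed the key constraint; re-read the task."

def reflexion(task: str, max_trials: int = 3) -> str:
    prompt = f"Task: {task}\n"
    for _ in range(max_trials):
        answer = attempt(prompt)
        ok, feedback = evaluate(answer)
        if ok:
            return answer
        # "Back-propagate" the language feedback by editing the prompt.
        prompt += f"Reflection: {feedback}\n"
    return answer
```

Note how the bottleneck Shunyu mentions next is visible here: the whole loop is only as good as `evaluate`.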
And I think a key bottleneck is the evaluator, right? Basically, you need to have a good sense of the signal. So for example, if you are trying to do a very hard reasoning task, say mathematics, and you don't have any tools, you're operating in this chain-of-thought setup, then reflection will be pretty hard, because in order to reflect upon your thoughts, you have to have a very good evaluator to judge whether your thought is good or not. But that might be as hard as solving the problem itself, or even harder. The principle of self-reflection is probably more applicable if you have a good evaluator, for example, in the case of coding. If you have those errors, then you can just reflect on that and how to solve the bug and stuff.

Shunyu [00:18:38]: So I think another criterion is that it depends on the application, right? If you have latency or whatever needs for an actual application with an end user, the end user wouldn't let you do two hours of tree-of-thought or reflection, right? You need something as soon as possible. So in that case, maybe this is better used as a training-time technique, right? You do those reflection or tree-of-thought or whatever things, you get a lot of data, and then you use the data to train your model better. And then at test time, you still use something as simple as ReAct, but that's already improved.

Alessio [00:19:11]: And if you think of the Voyager paper as a way to store skills and then reuse them, how would you compare this reflective memory, and at what point is it just doing RAG on the memory versus wanting to start fine-tuning some of it? What's the next step once you get a very long reflective corpus? Yeah.

Shunyu [00:19:30]: So I think there are two questions here. The first question is, what type of information or memory are you considering, right?
Is it semantic memory that stores knowledge about the world, or is it episodic memory that stores trajectories or behaviors, or is it more of a procedural memory, like in Voyager's case, skills or code snippets that you can use to do actions, right?

Swyx [00:19:54]: That's one dimension.

Shunyu [00:19:55]: And the second dimension is obviously how you use the memory: retrieving from it, using it in the context, or fine-tuning on it. I think the Cognitive Architectures for Language Agents paper has a good categorization of all the different combinations. And of course, which way you use depends on the concrete application and the concrete need and the concrete task. But I think in general, it's good to think of those systematic dimensions and all the possible options there.

Swyx [00:20:25]: Harrison also has this in LangMem. I think you did a presentation at my meetup, and I think you've done it at a couple of other venues as well. User state, semantic memory, and append-only state, I think, kind of map to what you just said.

Shunyu [00:20:38]: What is LangMem? Can I get like a quick...

Harrison [00:20:40]: One of the modules of LangChain for a long time has been something around memory. And I think we're still obviously figuring out what that means, as is everyone in the space. But one of the experiments that we did, and one of the proofs of concept that we did was, technically, you would basically create threads, you'd push messages to those threads, and in the background, we'd process the data in a few ways. One, we'd put it into some semantic store; that's the semantic memory. And then two, we'd do some extraction and reasoning over the memories. We let the user define this, but extract key facts or anything that's of interest to the user. Those aren't exactly trajectories; they're maybe closer to the procedural memory.
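The two dimensions Shunyu lays out, memory type (semantic, episodic, procedural) crossed with how you use it, can be sketched as a small data structure. The CoALA paper's taxonomy is the reference point; this particular layout, the naive keyword `retrieve`, and the sample entries are illustrative assumptions, not that paper's API.

```python
# Sketch of the memory taxonomy: three stores crossed with one use
# (retrieval into context). A real system would use embeddings for
# retrieval, or fine-tune on the stores instead.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    semantic: list[str] = field(default_factory=list)    # facts about the world
    episodic: list[str] = field(default_factory=list)    # past trajectories
    procedural: list[str] = field(default_factory=list)  # skills / code snippets

    def retrieve(self, query: str) -> list[str]:
        """Naive keyword retrieval across all three stores."""
        everything = self.semantic + self.episodic + self.procedural
        return [m for m in everything if query.lower() in m.lower()]

mem = AgentMemory()
mem.semantic.append("Paris is the capital of France.")
mem.episodic.append("Trajectory: booked a Paris flight via the travel tool.")
mem.procedural.append("def book_flight(city): ...  # reusable skill")
```

The point of keeping the stores separate is exactly Shunyu's: the taxonomy is an abstraction layer, and any implementation (vector store, log, code library) can sit behind each slot.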
Is that how you'd think about it or classify it?

Shunyu [00:21:22]: Is it knowledge about the world, or is it more like how to do something?

Swyx [00:21:27]: It's reflections, basically.

Harrison [00:21:28]: So in generative worlds.

Shunyu [00:21:30]: Generative Agents.

Swyx [00:21:31]: The Smallville one. Yeah, the Smallville one.

Harrison [00:21:33]: So the way that they had their memory there was they had the sequence of events, and that's kind of like the raw events that happened. But then every N events, they'd run some synthesis over those events for the LLM to insert its own memory, basically. It's that type of memory.

Swyx [00:21:49]: I don't know how that would be classified.

Shunyu [00:21:50]: I think of that as more of the semantic memory, but to be fair, I think it's just one way to think of it. Whether it's semantic memory or procedural memory or whatever memory, that's an abstraction layer. But in terms of implementation, you can choose whatever implementation for whatever memory, so they're totally orthogonal. I think it's more of a good way to think of things, because from the history of cognitive science and cognitive architecture, and how people study even neuroscience, that's the way people think of how the human brain organizes memory. And I think it's more useful as a way to think of things. But it's not like for semantic memory you have to retrieve or fine-tune a certain way, and for procedural memory you have to do it another way. I think those are totally orthogonal dimensions.

Harrison [00:22:34]: How much background do you have in cognitive science, and how much do you model some of your thoughts on it?

Shunyu [00:22:40]: That's a great question, actually. I think one of the undergrad influences for my follow-up research is that I did an internship at MIT's Computational Cognitive Science Lab with Josh Tenenbaum, and he's a very famous cognitive scientist.
And I think a lot of his ideas still influence me today, like thinking of things in computational terms, and getting interested in language and a lot of stuff, or even developmental psychology kind of stuff. So I think it still influences me today.

Swyx [00:23:14]: As a developer that tried out LangMem, the way I view it is just that it's a materialized view of a stream of logs. And if anything, that's just useful for context compression. I don't have to use the full context to run it over everything. But also, it's kind of debuggable. If it's wrong, I can show it to the user, the user can manually fix it, and I can carry on. That's a really good analogy. I like that. I'm going to steal that. Sure. Please, please. You know I'm bullish on memory databases. I guess, Tree of Thoughts? Yeah, Tree of Thoughts.

Shunyu [00:23:39]: I feel like I'm reliving my defense in a podcast format. Yeah, no.

Alessio [00:23:45]: I mean, you had a banger. Well, this is the one where you're already successful and we just highlight the glory. It was really good. You mentioned that since thinking is kind of like taking an action, you can use action-search algorithms to think about thinking. So just like you would use tree search to find the next action, the idea behind Tree of Thoughts is that you generate all these possible outcomes and then find the best path to get to the end. Maybe back to the latency question: you can't really do that if you have to respond in real time. So what are maybe some of the most helpful use cases for things like this? Where have you seen people adopt it where the high latency is actually worth the wait?

Shunyu [00:24:21]: For things where you don't care about latency, obviously. For example, if you're trying to do math, if you're just trying to come up with a proof. I feel like one type of task is more about searching for a solution. You can try a hundred times, but if you find one solution, that's good.
For example, if you're finding a math proof, or if you're finding good code to solve a problem or whatever. I think another type of task is more like reacting. For example, if you're doing customer service, or you're a web agent booking a ticket for an end user. Those are more reactive kinds of tasks, or more real-time tasks. You have to do things fast. They might be easy, but you have to do them reliably. And you care more about whether you can solve it 99 times out of a hundred. But for the search type of tasks, you care more about whether you can find one solution out of a hundred tries. So they're kind of asymmetric and different.

Alessio [00:25:11]: Do you have any data or intuition from your user base? What's the split of these types of use cases? How many people are doing more reactive things, and how many people are experimenting with deep, long search?

Harrison [00:25:23]: I would say ReAct is probably the most popular. I think there are aspects of Reflexion that get used. Tree of Thoughts, probably the least so. There's a great tweet from Jason Wei, I think he's now a colleague of yours, where he was talking about prompting strategies and how he thinks about them. And I think the four things that he had were: one, how easy is it to implement? How much compute does it take? How many tasks does it solve? And how much does it improve on those tasks? And I'd add a fifth, which is how likely is it to be relevant when the next generation of models comes out? And I think if you look at those axes and then you look at ReAct, Reflexion, and Tree of Thoughts, it tracks that the ones that score better are used more. ReAct is pretty easy to implement. Tree of Thoughts is pretty hard to implement. The amount of compute: yeah, a lot more for Tree of Thoughts. The tasks and how much it improves: I don't have amazing visibility there.
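The search-style strategy Alessio described, generate candidate thoughts, evaluate them, and keep exploring the best ones, can be sketched as a small beam search. Everything here is a toy stand-in: a real tree-of-thought system would use a language model both to propose thoughts and to score them, and the digit-string "task" exists only to make the search runnable.

```python
# Sketch of tree-of-thought-style search as beam search over partial
# "thoughts". propose() and score() are toy stand-ins for model calls.

def propose(state: str) -> list[str]:
    """Toy thought generator: extend a digit string one digit at a time."""
    return [state + d for d in "0123456789"]

def score(state: str) -> int:
    """Toy evaluator: prefer states matching the target '42'."""
    return sum(a == b for a, b in zip(state, "42"))

def tree_of_thought(beam_width: int = 3, depth: int = 2) -> str:
    frontier = [""]
    for _ in range(depth):
        candidates = [s for state in frontier for s in propose(state)]
        # Keep only the most promising thoughts (the beam).
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return frontier[0]
```

The latency trade-off in the conversation is visible in the shape of the code: each level of depth multiplies the number of model calls, which is why this fits proof search better than real-time customer service.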
But I think if we're comparing ReAct versus Tree of Thoughts, ReAct just dominates the first two axes so much that my question was going to be: how do you think about these prompting strategies, cognitive architectures, whatever you want to call them? When you're thinking about them, what are the axes you're judging them on in your head when you're deciding whether it's a good one or a less good one?

Swyx [00:26:38]: Right.

Shunyu [00:26:39]: Right. I think there is a difference between a prompting method and research, in the sense that for research, you don't really even care whether it actually works on practical tasks or helps, whatever. I think it's more about the idea or the principle, right? What is the direction that you're unblocking, and whatever. And I think for an actual prompting method to solve a concrete problem, I would say simplicity is very important, because the simpler it is, the fewer decisions you have to make about it. And it's easier to design, it's easier to propagate, and it's easier to do stuff. So always try to be as simple as possible. And latency obviously is important: you want to do things fast, and you don't want to do things slow. And in terms of the actual prompting method to use for a particular problem, I think we should all be in the minimalist camp, right? You should try the minimal thing and see if it works. And if it doesn't work, and there's an absolute reason to add something, then you add something, right? If there's an absolute reason you need some tool, then you should add the tool. If there's an absolute reason to add reflection or whatever, you should add that. Otherwise, if chain of thought can already solve something, then you don't even need to use any of that.

Harrison [00:27:57]: Yeah. Or if just better prompting can solve it.
Like, you know, you could add a reflection step, or you could make your instructions a little bit clearer.

Swyx [00:28:03]: And it's a lot easier to do that.

Shunyu [00:28:04]: I think another interesting thing is, I personally have never done those kinds of weird tricks. I think all the prompts that I write are just like talking to a human, right? I don't know. I never say something like, your grandma is dying and you have to solve it. I mean, those are cool, but I feel like we should all try to solve things in a very intuitive way, just like talking to your coworker. That should work 99% of the time. That's my personal take.

Swyx [00:28:29]: The problem with language models, at least in the GPT-3 era, was that they over-optimized for some sets of tokens in sequence. Like in the Kojima et al. paper, the "let's think step by step" one, they tried a bunch of phrasings and got wildly different results. It should not be the case, but it is the case. And hopefully we're getting better there.

Shunyu [00:28:51]: Yeah. I think it's also a timing thing, in the sense that if you think about this whole line of language models, at the time it was just a text generator. We didn't have any idea how it was going to be used, right? And obviously at the time you would find all kinds of weird issues, because it wasn't trained to do any of that, right? But then I think we have this loop where, once we realized chain of thought is important, or agents are important, or tool use is important, what we see is that today's language models are heavily optimized towards those things. So I think in some sense they've become more reliable and robust over those use cases, and you don't need to do as many prompt-engineering tricks anymore to solve those things. I feel like, in some sense, prompt engineering is even a slightly negative word at this point, because it refers to all those weird tricks that you have to apply.
But I think we don't have to do that anymore. Given today's progress, you should just be able to talk to it like a coworker. And if you're clear and concrete and being reasonable, then it should do reasonable things for you.

Swyx [00:29:51]: Yeah. The way I put this is: you should not be a prompt engineer, because it is the goal of the big labs to put you out of a job.

Shunyu [00:29:58]: You should just be a good communicator. If you're a good communicator to humans, you should be a good communicator to language models.

Harrison [00:30:03]: That's the key, though, because oftentimes people aren't good communicators to these language models, and that is a very important skill, and that's still messing around with the prompt. So it depends on what you're talking about when you say prompt engineer.

Shunyu [00:30:14]: But do you think it's very correlated with whether they're a good communicator to humans? You know, it's like...

Harrison [00:30:20]: It may be, but I also think, I would say on average, people are probably worse at communicating with language models than with humans right now, at least, because I think we're still figuring out how to do it. You kind of expect it to be magical, and there's probably some correlation, but I'd say people are also just worse at it right now than talking to humans.

Shunyu [00:30:36]: We should make it like an elementary school class or whatever: how to talk to language models.

Swyx [00:30:41]: Yeah. I don't know. Very pro that. Yeah. Before we leave the topic of trees and searching, not specifically about Q*, but there are a lot of questions about MCTS and this combination of tree search and language models. And I just had to get in a question there about how seriously people should take this.

Shunyu [00:30:59]: Again, I think it depends on the tasks, right? So MCTS was magical for Go, but it's probably not as magical for robotics, right?
So I think right now the problem is not even that we don't have good methodologies; it's more that we don't have good tasks. It's also very interesting, right? Because if you look at my citations, obviously the most cited are ReAct, Reflexion, and Tree of Thoughts. Those are methodologies. But I think an equally important, if not more important, line of my work is benchmarks and environments, like WebShop or SWE-bench or whatever. And I think in general, what people do in academia that I think is not good is they choose a very simple task, like ALFWorld, and then they apply overly complex methods to show they improve 2%. I think you should probably match the level of complexity of your task and your method. I feel like tasks are kind of far behind the methods in some sense, right? Because we have some good test-time approaches, like ReAct or Reflexion or Tree of Thoughts, and there are many, many more complicated test-time methods after those. But on the benchmark side, we made a lot of good progress this year and last year, but I think we still need more progress towards that: better coding benchmarks, better web agent benchmarks, better agent benchmarks in general, not even just for web or code. I think in general, we need to catch up with tasks.

Harrison [00:32:27]: What are the biggest reasons in your mind why it lags behind?

Shunyu [00:32:31]: I think incentives are one big reason. If you look, you know, all the method papers are cited like a hundred times more than the task papers. And also, making a good benchmark is actually quite hard. It's almost like a different set of skills in some sense, right? I feel like if you want to build a good benchmark, you need to have a good product-manager kind of mindset, right? You need to think about why people should use your benchmark, why it's challenging, why it's useful. If you think about a PhD student going into a school, right?
The prior skills they're expected to have are more about, you know, can they code up the method, can they run experiments, can they solve that? I think building a benchmark is not the typical prior skill that we have, but I think things are getting better. More and more people are starting to build benchmarks, and people are seeing that it's a way to get more impact in some sense, right? Because if you have a really good benchmark, a lot of people are going to use it. But if you have a super complicated test-time method, it's very hard for people to use it.

Harrison [00:33:35]: Are evaluation metrics also part of the reason? For some of these tasks that we might want to ask these agents or language models to do, is it hard to evaluate them, and so it's hard to get an automated benchmark? Obviously with SWE-bench you can, and with coding it's easier, but...

Shunyu [00:33:50]: I think that's part of the skill-set thing that I mentioned, because I feel like it's like being a product manager: there are many dimensions, you need to strike a balance, and it's really hard, right? If you want to make something very easy to autograde, automatically gradable, easy to evaluate, then you might lose some of the realness or practicality. Or it might be practical but not as scalable, right? For example, if you think about text games, humans have pre-annotated all the rewards and all the language is real. So it's pretty good on the autogradable dimension and the practical dimension, being actual English and being practical, but it's not scalable, right? It takes like a year for experts to build that game, so it's not really that scalable. And I think part of the reason that SWE-bench is so popular now is that it hits the balance between these three dimensions, right? Easy to evaluate, actually practical, and scalable.
If I were to criticize some of my prior work, I think WebShop was my initial attempt to get into the benchmark world, and I was trying to do a good job striking the balance. We made it autogradable and it's really scalable, but I think the practicality is not as high as just using GitHub issues, right? Because you're just creating synthetic tasks.

Harrison [00:35:13]: Are there other areas besides coding that jump to mind as being really good for being autogradable?

Shunyu [00:35:20]: Maybe mathematics.

Swyx [00:35:21]: Classic. Yeah. Do you have thoughts on AlphaProof, the new DeepMind paper? I think it's pretty cool.

Shunyu [00:35:29]: I think it's more of a confidence boost. Sometimes, you know, the work is not even about the technical details or the methodology that it chooses or the concrete results. I think it's more about a signal, right?

Swyx [00:35:47]: Yeah. Existence proof. Yeah.

Shunyu [00:35:50]: Yeah. It can be done. This direction is exciting. It encourages people to work more in that direction. I think it's more like a boost of confidence, I would say.

Swyx [00:35:59]: Yeah. So we're going to focus more on agents now, and, you know, all of us have a special interest in coding agents. I would consider Devin to be the biggest launch of the year as far as AI startups go. And you guys in the Princeton group worked on SWE-agent alongside SWE-bench. Tell us the story of SWE-agent. Sure.

Shunyu [00:36:21]: I think it's kind of a trilogy; it's actually a series of three works now. The first work is called InterCode, but it's not as famous, I know. The second work is called SWE-bench, and the third work is called SWE-agent. And I was just really confused why nobody was working on coding.
You know, this was like a year ago. I mean, not everybody's working on coding, obviously, but a year ago, literally nobody was working on coding. I was really confused. And the people that were working on coding were, you know, trying to solve HumanEval in like a seq-to-seq way. There's no agent, there's no chain of thought, there's no anything; they're just fine-tuning the model to improve a few points, whatever. I was really confused, because obviously coding is the best application for agents: it's autogradable, it's super important, and you can make everything an API or a code action, right? So I was confused, and I collaborated with some of the students at Princeton, and we have this work called InterCode. The idea is, first, if you care about coding, then you should solve coding in an interactive way, meaning more like a Jupyter Notebook kind of way than just writing a program, seeing if it fails or succeeds, and stopping, right? You should solve it in an interactive way, because that's exactly how humans solve it, right? You don't write a program next token, next token, next token, then stop and never do any edits, and you can't really use a terminal or whatever tool. It doesn't make sense, right? And that's the way people were solving coding at the time: basically sampling a program from a language model without chain of thought, without tool calls, without refactoring, without anything. So the first point is that we should solve coding in a very interactive way, and that's a very general principle that applies to various coding benchmarks. And also, I think you can make a lot of agent tasks into interactive coding. If you have Python and you can call any package, then you can literally also browse the internet or do whatever you want, like control a robot or whatever. So that seems to be a very general paradigm.
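The contrast Shunyu draws, one-shot sampling versus interactive coding, can be sketched as below. The scripted two-turn "agent" is an assumption for illustration; the point is only the shape of the loop: execute, observe the real output or error, and revise, instead of generating once and stopping.

```python
# Sketch of "interactive coding": the agent executes code, reads the
# observation, and edits, rather than emitting one program and stopping.

import contextlib
import io

def run(code: str) -> str:
    """Execute code in a scratch namespace; capture stdout or the error."""
    out = io.StringIO()
    try:
        with contextlib.redirect_stdout(out):
            exec(code, {})
    except Exception as e:
        return f"Error: {e}"
    return out.getvalue()

def interactive_agent() -> str:
    # Turn 1: a buggy attempt, as a one-shot generator might produce.
    obs = run("print(1 / 0)")
    # Turn 2: the agent sees the error observation and revises its code.
    if obs.startswith("Error"):
        obs = run("print(1 / 1)")
    return obs.strip()
```

The same execute-observe-revise loop generalizes beyond coding, which is Shunyu's second point: any tool reachable from Python becomes an action the loop can take.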
But obviously, I think a bottleneck was that at the time we were still doing very simple tasks like HumanEval or whatever coding benchmarks people had proposed. They were super hard in 2021, like 20%, but they were at like 95% already in 2023. So obviously the next step is that we need a better benchmark. And Carlos and John, who are the first authors of SWE-bench, came up with this great idea that we should just scrape GitHub and solve whatever human engineers are solving. And it's actually pretty easy to come up with the idea. And I think in the first week, they already made a lot of progress. They scraped GitHub and put together the initial data, but then there was a lot of painful infra work and whatever, you know. I think the idea is super easy, but the engineering is super hard. And I feel like that's a very typical signal of good work in the AI era now.

Swyx [00:39:17]: I think also the filtering was challenging, because if you look at open-source PRs, a lot of them are just, you know, fixing typos. I think it's challenging.

Shunyu [00:39:27]: And to be honest, we didn't do a perfect job at the time. So if you look at the recent blog post with OpenAI, we improved the filtering so that it's more solvable.

Swyx [00:39:36]: I think OpenAI was just like, look, this is a thing now. We have to fix this. These students just rushed it.

Shunyu [00:39:45]: It's a good convergence of interests for me.

Alessio [00:39:48]: Was that tied to you joining OpenAI? Or was that just unrelated?

Shunyu [00:39:52]: It's a coincidence for me, but it's a good coincidence.

Swyx [00:39:55]: There is a history of: anytime a big lab adopts a benchmark, they fix it. Otherwise, it's a broken benchmark.

Shunyu [00:40:03]: So naturally, once we proposed SWE-bench, the next step was to solve it. But I think the typical way you solve something now is you collect some training samples, or you design some complicated agent method, and then you try to solve it.
Either a super complicated prompt, or you build a better model with more training data. But I think at the time, we realized that even before those things, there's a fundamental problem with the interface, or the tool, that you're supposed to use. Because that's an ignored problem in some sense: what your tool is, and how that matters for your task. So what we found concretely is that if you just use the text terminal off the shelf as a tool for those agents, there are a lot of problems. For example, if you edit something, there's no feedback, so you don't know whether your edit is good or not. That makes the agent very confused, and it makes a lot of mistakes. There are a lot of small problems, you would say. Well, you can try to do prompt engineering and improve that, but it turns out to be actually very hard. We realized that interface design is actually a very omitted part of agent design. So we did this SWE-agent work. And the key idea is just, even before you talk about what the agent is, you should talk about what the environment is. You should make sure that the environment is actually friendly to whatever agent you're trying to apply. That's the same idea for humans. A text terminal is good for some tasks, like git pull or whatever, but it's not good if you want to look at a browser and whatever. Also, a browser is a good tool for some tasks, but it's not a good tool for other tasks. We need to talk about how to design interfaces, in some sense, where we should treat agents as our customers. It's like when we treat humans as customers, we design human-computer interfaces. We design those beautiful desktops or browsers or whatever, so that it's very intuitive and easy for humans to use. And this whole great subject of HCI is all about that. I think now the research idea of SWE-agent is just: we should treat agents as our customers.
And we should do, like, you know... ACI.

Swyx [00:42:16]: ACI, exactly.

Harrison [00:42:18]: So what are the tools that a SWE-agent should have, or a coding agent in general should have?

Shunyu [00:42:24]: For SWE-agent, it's like a modified text terminal, which adapts to a lot of the patterns of language models to make it easier for language models to use. For example, now for edits, instead of having no feedback, it will actually give feedback like: here you introduced a syntax error, and you probably want to fix that, and there's an indentation error there. And that makes it super easy for the model to actually fix it. And there are other small things, like how exactly you write arguments, right? Do you want to write a multi-line edit, or do you want to write a single-line edit? I think it's more interesting to think about the development process of an ACI rather than the actual ACI for a concrete application. Because I think the general paradigm is very similar to HCI and psychology, right? Basically, for how people develop HCIs, they do behavioral experiments on humans, right? They do A/B tests, right? Like, which interface is actually better? And they do those behavioral experiments, kind of like psychology experiments, on humans, and they change things. And I think what's really interesting to me about this SWE-agent paper is that we can probably do the same thing for agents, right? We can do A/B tests for those agents and do behavioral tests. And through the process, we not only invent better interfaces for those agents, which is the practical value, but we also better understand agents. Just like when we do those A/B tests and HCI studies, we better understand humans; doing those ACI experiments, we actually better understand agents.
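The editor-with-feedback idea just described, reject a bad edit and tell the agent exactly what went wrong instead of failing silently, can be sketched like this. The single-buffer, whole-file edit interface is an illustrative assumption, not SWE-agent's actual command set, and a Python syntax check stands in for whatever linting the real tool does.

```python
# Sketch of the ACI idea: an edit tool that lints the result and returns
# actionable feedback, rather than silently accepting a broken edit.

import ast

def edit_file(buffer: str, new_content: str) -> tuple[str, str]:
    """Apply an edit; return (resulting buffer, feedback message)."""
    try:
        ast.parse(new_content)                     # lint before accepting
    except SyntaxError as e:
        # Reject the edit and tell the agent exactly what went wrong.
        return buffer, f"Edit rejected: syntax error on line {e.lineno}: {e.msg}"
    return new_content, "Edit applied successfully."

buf = "print('hello')\n"
buf, msg = edit_file(buf, "def f(:\n    pass\n")        # malformed edit
buf, msg2 = edit_file(buf, "def f():\n    return 1\n")  # corrected edit
```

Returning the old buffer on failure is the design choice that matters: the agent never ends up holding a syntactically broken file, so every feedback message is something it can act on in the next turn.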
And that's pretty cool.

Harrison [00:43:51]: Besides that A/B testing, what are other processes that people can use to think about this in a good way?

Swyx [00:43:57]: That's a great question.

Shunyu [00:43:58]: And I think SWE-agent is an initial work. What we do is kind of the naive approach, right? You just try some interface, you see what's going wrong, and then you try to fix that. We do this kind of iterative fixing. But I think what's really interesting is that there will be a lot of future directions that are very promising if we can apply some of the HCI principles more systematically to interface design. I think that would be a very cool interdisciplinary research opportunity.

Harrison [00:44:26]: You talked a lot about agent-computer interfaces and interactions. What about human-to-agent UX patterns? Curious for any thoughts there that you might have.

Swyx [00:44:38]: That's a great question.

Shunyu [00:44:39]: In some sense, I feel like prompt engineering is about the human-to-agent interface. But I think there can be a lot of interesting research done about... So prompting is about how humans can better communicate with the agent. But I think there could be interesting research on how agents can better communicate with humans, right? When to ask questions, how to ask questions, what the frequency of asking questions should be. And I think those kinds of things could be very cool research.

Harrison [00:45:07]: Yeah, I think some of the most interesting stuff that I saw here was also related to coding, with Devin from Cognition. They had the three or four different panels, where you had the chat, the browser, the terminal, and I guess the code editor as well.

Swyx [00:45:19]: There's more now.

Harrison [00:45:19]: There's more. Okay, I'm not up to date. Yeah, I think they also did a good job on ACI.

Swyx [00:45:25]: I think that's the main learning I have from Devin. They cracked that. Actually, there was no foundational planning breakthrough.
The planner is actually pretty simple, but it's the ACI that they broke through on.
Shunyu [00:45:35]: I think making the tool good and reliable is probably like 90% of the whole agent. Once the tool is actually good, then the agent design can be much, much simpler. On the other hand, if the tool is bad, then no matter how much you put into the agent design, planning or search or whatever, it's still going to be trash.
Harrison [00:45:53]: Yeah, I'd argue the same. Same with context and instructions. They go hand in hand.
Alessio [00:46:00]: On the tool, how do you think about that tension, for both of you? I mean, you're building a library, so even more for you: the tension between making a language or a library that is easy for the agent to grasp and write versus one that is easy for the human to grasp and write. Because, you know, the trend is that more and more code gets written by the agent. So why wouldn't you optimize the framework to be as easy as possible for the model versus for the person?
Shunyu [00:46:24]: I think it's possible to design an interface that's both friendly to humans and agents. But what do you think?
Harrison [00:46:29]: We haven't thought about it from that perspective; we're not trying to design LangChain or LangGraph to be friendly for agents to write. But I mean, I think we see this with, like… I saw some paper that used TypeScript notation instead of JSON notation for tool calling, and it got a lot better performance. So it's definitely a thing. I haven't really heard of anyone designing a syntax or a language explicitly for agents, but there are clearly syntaxes that are better.
Shunyu [00:46:59]: I think function calling is a good example where it's a good interface for both human programmers and for agents, right?
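The TypeScript-versus-JSON notation result Harrison recalls could be illustrated like this. The tool name and fields below are made up, and this is not the paper's exact format; it just shows the same signature rendered both ways.

```python
# Illustrative comparison of two renderings of one tool signature.
import json

# Verbose JSON Schema notation, as commonly used for tool calling.
json_notation = json.dumps({
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["C", "F"]},
        },
        "required": ["city"],
    },
}, indent=2)

# Compact TypeScript-style notation for the same tool.
typescript_notation = 'get_weather(city: string, unit?: "C" | "F"): string'

# The TypeScript rendering carries the same information in far fewer tokens,
# which is one plausible reason models handled it better.
```

The cost difference compounds quickly when dozens of tools are injected into every prompt, which is one reason notation choice can measurably move tool-calling performance.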
Like, for developers, it's actually a very friendly interface, because it's very concrete and you don't have to do prompt engineering anymore. You can be very systematic. And for models, it's also pretty good, right? Like, it can use all the existing coding content. So I think we need more of those kinds of designs.
Swyx [00:47:21]: I will mostly agree, and I'll slightly disagree, on whether designing for humans also overlaps with designing for AI. So Malte Ubl, who's the CTO of Vercel, which is creating basically JavaScript's competitor to LangChain: they're observing that, basically, if the API is easy to understand for humans, it's actually much easier to understand for LLMs, for example, because the functions aren't overloaded. They don't behave differently under different contexts. They do one thing and they always work the same way. It's easy for humans, it's easy for LLMs. And that makes a lot of sense. And obviously adding types is another one. Type annotations only help give extra context, which is really great. So that's the agreement. And then the disagreement is that when I use structured output to do my chain of thought, I have found that I change my field names to hint to the LLM what the field is supposed to do. So instead of saying topics, I'll say candidate topics. And that gives me a better result, because the LLM goes, ah, this is just a draft thing I can use for chain of thought. And instead of summaries, I'll say topic summaries, to link the previous field to the current field. So little stuff like that: I find myself optimizing for the LLM in ways that I, as a human, would never do. Interesting.
Shunyu [00:48:32]: It's kind of like how the way you optimize the prompt might be different for humans and for machines.
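The field-naming trick Swyx describes could be sketched like this. The schema below is hypothetical and not tied to any particular structured-output library; the point is only that the names themselves hint at each field's role.

```python
# A sketch of LLM-oriented field naming in a structured-output schema.
from dataclasses import dataclass, fields

@dataclass
class TopicReport:
    # "candidate_topics" (rather than "topics") signals a draft field the
    # model can use as scratch space for chain of thought.
    candidate_topics: list[str]
    # "topic_summaries" (rather than "summaries") links this field back to
    # the previous one, so the model summarizes those candidates.
    topic_summaries: list[str]

field_names = [f.name for f in fields(TopicReport)]
# → ['candidate_topics', 'topic_summaries']
```

A human reader would likely prefer the shorter names; the longer ones exist purely to steer the model, which is exactly the divergence being discussed.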
You can have a common ground that's both clear for humans and agents, but improving the human performance versus improving the agent performance might move in different directions.
Swyx [00:48:48]: Might move in different directions. There's a lot more use of metadata as well: descriptions, comments, code comments, annotations, and stuff like that. Yeah.
Harrison [00:48:56]: I would argue that's just you communicating to the agent what it should do. And maybe you need to communicate a little bit more than to humans, because models aren't quite good enough yet. But, like, I don't think that's crazy. It's not crazy.
Swyx [00:49:09]: I will bring this in because it just happened to me yesterday. I was at the Cursor office. They held their first user meetup, and I was telling them about the LLM OS concept and why basically every interface, every tool is being redesigned for AIs to use rather than humans. And they're like, why? Like, can't we just use Bing and Google for LLM search? Why must I use Exa? Or what's the other one that you guys work with?
Harrison [00:49:32]: Tavily.
Swyx [00:49:33]: Tavily. A web search API dedicated to LLMs. What's the difference?
Shunyu [00:49:36]: Exactly. Compared to the Bing API.
Swyx [00:49:38]: Exactly.
Harrison [00:49:38]: There weren't great APIs for search. Like, the best one, the one that we used initially in LangChain, was SerpAPI, which is, like, maybe illegal. I'm not sure. And now there are, like, venture-backed companies.
Swyx [00:49:53]: Shout out to DuckDuckGo, which is free.
Harrison [00:49:55]: Yes, yes.
Swyx [00:49:56]: Yeah.
Harrison [00:49:56]: I do think there are some differences, though. I think generally these APIs try to return small amounts of text information in clear, legible fields. It's not a massive JSON blob. And I think that matters.
I think when you talk about designing tools, it's the interface in its entirety that really matters: not only the inputs, but also the outputs. And so I think they try to make the outputs better.
Shunyu [00:50:18]: They're doing ACI.
Swyx [00:50:19]: Yeah, yeah, absolutely.
Harrison [00:50:20]: Really?
Swyx [00:50:21]: Like, there's a whole set of industries that are just being redone for ACI. It's weird. And so my simple answer to them was: the error messages. When you give error messages, they should basically be prompts for the LLM to take and then self-correct. So your error messages get more verbose, actually, than they normally would be for a human. Stuff like that. Honestly, it's not that big. Again, is this worth a venture-backed industry? Unless you can tell us. But I think Code Interpreter, I think, is a new thing. I hope so.
Alessio [00:50:52]: We invested in it to be so.
Shunyu [00:50:53]: I think that's a very interesting point. If you're trying to optimize to the extreme, then obviously they're going to be different. For example, the error…
Swyx [00:51:00]: Because we take it very seriously. Right.
Shunyu [00:51:01]: The error message for a language model: the longer, the better. But for humans, that will make them very nervous and very tired, right? But I guess the point is more that maybe we should try to find a co-optimized common ground as much as possible, and then if we have divergence, we should try to diverge. But it's more philosophical now.
Alessio [00:51:19]: But I think part of it is how you use it. So Google invented PageRank because ideally you only click on one link; the top three should have the answer. But with models, it's like, well, you can get 20. So those searches are more like semantic grouping, in a way. It's like: for this query, I'll return you 20 or 30 things that are kind of good, you know?
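The point about error messages doubling as self-correction prompts could be sketched like this. The tool and its wording are hypothetical; the contrast is between a terse human-facing error and a verbose agent-facing one.

```python
# A sketch of an agent-facing tool whose error message is itself a prompt
# telling the model how to fix its call.
def search_tool(query: str) -> str:
    if not query.strip():
        # A human dashboard might just show "400 Bad Request". An agent
        # benefits from being told exactly what went wrong and what to do.
        return (
            "Error: the search query was empty. Retry with a non-empty, "
            "specific natural-language query, e.g. "
            "search_tool('DuckDuckGo API pricing')."
        )
    return f"Results for: {query}"

message = search_tool("")  # the error reads like an instruction, not a code
```

As Shunyu notes right after, this is exactly where human and agent interfaces diverge: the verbosity that helps a model self-correct would exhaust a human reader.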
So it's less about ranking and more about grouping.
Shunyu [00:51:42]: Another fundamental thing about HCI is the difference between human and machine memory limits, right? So I think what's really interesting about this concept of HCI versus ACI, interfaces optimized for each of them, is that you can kind of understand some of the fundamental differences between humans and machines, right? Why, you know, if you look at find or whatever terminal command, you can only look at one thing at a time: that's because we have a very small working memory. You can only deal with one thing at a time. You can only look at one paragraph of text at the same time. So the interface for us is, by design, a small piece of information but more temporal steps. But for machines, it should be the opposite, right? You should just give them a hundred different results and let them decide in context what's the most relevant stuff, trading off context for temporal steps. That's actually also better for language models, because the cost is smaller or whatever. So it's interesting to connect those interfaces to the fundamental differences between those systems.
Harrison [00:52:43]: When you said earlier that we should try to design these to be as similar as possible and diverge if we need to: I actually don't have a problem with them diverging now, and seeing venture-backed startups emerging now, because we are different from machines. And it's just so early on. They may still look kind of similar, and there may still be only small differences, but it's still just so early, and I think we'll only discover more ways that they differ. So I'm totally fine with them kind of diverging early and optimizing for the…
Swyx [00:53:11]: I agree.
Shunyu [00:53:14]: I think it's more like, you know, we should obviously try to optimize the human interface just for humans.
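The context-versus-steps tradeoff Shunyu describes could be sketched like this. Function names are illustrative: a human-facing tool paginates because of our small working memory, while an agent-facing tool can return everything in one observation and let the model filter in context.

```python
# Two presentations of the same search results, one per audience.
def page_for_humans(results: list[str], page: int, size: int = 3) -> list[str]:
    # Small chunks, many temporal steps: suits human working memory.
    return results[page * size:(page + 1) * size]

def batch_for_agents(results: list[str]) -> str:
    # One large observation, fewer round trips: the model ranks in context.
    return "\n".join(f"[{i}] {r}" for i, r in enumerate(results))

hits = [f"result {i}" for i in range(10)]
# page_for_humans(hits, 0) → ['result 0', 'result 1', 'result 2']
```

Fewer round trips also means fewer model calls, which is the cost argument made above: trading a longer context window for fewer temporal steps.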
We've already been doing that for 50 years. We should optimize the agent interface just for agents, but we might also try to co-optimize both and see how far we can get. There are enough people to try all three directions. Yeah.
Swyx [00:53:31]: There's a thesis I sometimes push, which is the sour lesson, as opposed to the bitter lesson: we're always inspired by human development, but actually AI develops along its own path.
Shunyu [00:53:40]: Right. We need to understand better, you know, what the fundamental differences between those creatures are.
Swyx [00:53:45]: It's funny, really early on this pod, you were like, how much grounding do you have in cognitive development and human brain stuff? And I'm like…
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
You.com, an AI-powered search engine, has raised $50 million in a Series B funding round, bringing its total funding to $99 million. The company is focused on accuracy and safety, and aims to provide a tool that can autonomously complete tasks for users. It offers a wide range of AI models and allows users to switch between them. The investors in this round include Salesforce Ventures, Nvidia, and DuckDuckGo. The conversation also touches on the concept of agents and the future direction of AI. Get on the AI Box Waitlist: https://AIBox.ai/ Conor's AI Course: https://www.ai-mindset.ai/courses Jaeden's Podcast Course: https://podcaststudio.com/courses/ Conor's AI Newsletter: https://www.ai-mindset.ai/ Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about 00:00 Introduction to You.com 02:01 Focus on Agents 02:59 Wide Range of AI Models 06:19 Investors in You.com
The US Department of Justice is considering breaking up tech giant Google, according to media reports. That news comes after a court ruling earlier this month that the company, which controls 90 percent of the search engine market, violated antitrust laws. “Google is a monopolist, and it has acted as one to maintain its monopoly,” U.S. District Judge Amit Mehta wrote in his decision. But some experts think a breakup is unlikely, and Google says it will appeal. We'll talk about what the case could mean for consumers, the company, and the future of the internet. Guests: Rebecca Haw Allensworth, associate dean for research and professor of law, Vanderbilt Law School Leah Nylen, antitrust reporter, Bloomberg News Kamyl Bazbaz, senior vice president for public affairs, DuckDuckGo
Welcome back to another episode of the Niche Pursuits News podcast! It's been a very big week in the news, with Google taking center stage for several different reasons. This week, Jared and Morgan break it all down for us, and they also share the progress they're making on their side hustles, and some weird niche sites! Let's jump into it! The first order of business is Google's announcement about its August core update, which the company has been teasing for some time. Moving along, they talk about an article in the New York Times about how the US Justice Department is considering several scenarios after deciding that Google is a monopoly, including possibly breaking up the company. Jared shares the timeline and the opinions of DuckDuckGo, and Morgan talks about what happened in the EU when Google was forced to give users a choice when installing their web browsers. It's pretty surprising what she reveals! They also talk about the ways this ruling could change the internet and which of the scenarios they'd prefer to see. Is Google really resting on its laurels, as Morgan says? The next topic up for discussion is that Reddit investors are concerned about what all the traffic is doing to its stock price, as company shares have plummeted by 26%. Why are they complaining about the traffic? Why are "logged in" users more valuable than "logged out" ones? How does Reddit's current situation contrast with its attitude just a few weeks ago? Then Jared and Morgan tie the Reddit news to Google's announcement about its current core update. What do they predict might happen? Listen to the full episode to hear what they say! When it's time to talk about their side hustles, Jared comments briefly on the course he recently launched on the Amazon Influencer Program, but then dives into the newsletter he recently acquired, The Slice. He shares his stats, such as open rates, ad earnings, and affiliate income, and his future plans for the side hustle.
He also talks about Creator Connections in the Influencer Program and shares his latest earnings. Morgan then talks about her Amazon Influencer side hustle and shares her experience creating more polished videos for the program and the impact on her video clicks. She also shares a strategy she's going to experiment with in the coming weeks. Jared's weird niche site comes with its own personal story this week: San Diego Sandcastles, which he discovered while at the beach with his family. When it's Morgan's turn, she shares This Person Does Not Exist, which generates AI images of people who don't exist. This site has been around since 2021, and she shares some of its stats. Ready to join a niche publishing mastermind and hear from industry experts each week? Join the Niche Pursuits Community here: https://community.nichepursuits.com Be sure to get more content like this in the Niche Pursuits Newsletter Right Here: https://www.nichepursuits.com/newsletter Want a Faster and Easier Way to Build Internal Links? Get $15 off Link Whisper with Discount Code "Podcast" on the Checkout Screen: https://www.nichepursuits.com/linkwhisper Get SEO Consulting from the Niche Pursuits Podcast Host, Jared Bauman: https://www.nichepursuits.com/201creative
This week, we're talking about a topic that often gets overlooked but is crucial for your blog's success: the Sidebar. But before we jump into optimizing that little widget on the right-hand side of your blog, let's talk briefly about some big news in the digital world. Google recently lost a significant lawsuit declaring it a monopoly, which could bring about major changes in the online landscape over the next few years. While the exact impact on our blog traffic remains to be seen, this shakeup could create new opportunities for other search engines like Search GPT, Bing, and DuckDuckGo to gain traction. This could mean more eyeballs on your posts and more chances to be discovered. Why Your Sidebar Matters More Than You Think Now, let's shift our focus to something more within our control: your blog's Sidebar. If you're like me, you might not have paid much attention to it since you first installed your latest theme. However, the Sidebar is a critical part of your blog's user experience and can greatly impact your site's performance. I used to have an author box, an email opt-in, and even my Pinterest feed over there, thinking it was a good way to engage readers. But after learning from experts like Brandon Gaille of the Blogging Millionaire Podcast, I realized I was making some big mistakes. The Sidebar should be optimized for performance and conversions. First, make sure your Sidebar is on the right-hand side of your blog; this is crucial for user navigation and site speed. Avoid cluttering it with social media plugins that slow down your page. Instead, focus on your lowest-hanging fruit: your email opt-in or freebie offer, followed by a widget displaying your latest 10 to 15 posts. This setup not only improves user engagement but also helps with internal linking, which can boost your Google rankings.
Key Takeaways for a Clean and Effective Sidebar To sum it up, your Sidebar should be clean, concise, and strategically designed to promote your content and products. Start with your most important opt-ins at the top, followed by recent posts, and finish with higher-ticket items like courses or memberships. If you don't have your own products yet, consider placing affiliate links—but remember to keep images minimal and optimized to avoid slowing down your site. A well-optimized Sidebar can make a huge difference in how visitors interact with your blog and how search engines rank your site. I hope these tips help you rethink and revamp your Sidebar. It's a small part of your blog, but when done right, it can have a big impact. https://creativesonfirepodcast.com/episode156 Links and resources mentioned during this episode: 155 | Search GPT: Boost Your Blog's Visibility with These Essential Tips Brandon Gaille's Blogging Millionaire Podcast: Link to Podcast Start a Blog: Blog Kickstart 2024 Content Planner Creatives On Fire™️ Content Planner SUBSCRIBE AND REVIEW I am honored to share a new Blogging Creative on Fire each week on the podcast to bring you inspiration, behind-the-scenes secrets, and quality tips. I hope it is truly helpful for you. One of the best ways you can bless me in return is to subscribe to the show and leave a review. By subscribing, you allow each episode to be downloaded straight to your phone which helps the download numbers and ensures you never miss an episode. And when you leave a review, you help show others the value of what we provide! You can GO HERE to subscribe and review
A federal judge ruled Monday that Google has violated antitrust laws with monopolistic behavior over online search. The Justice Department said Google's exclusive contracts helped block rivals such as Microsoft's Bing and DuckDuckGo. A future trial will decide what penalties the tech giant may face. Debby weakened into a tropical storm after making landfall as a Category 1 hurricane in Florida's Big Bend. The storm brought severe flooding and dangerous storm surges and killed at least four people. Vice President Kamala Harris will announce her running mate Tuesday at a rally in Philadelphia. Three candidates are on her shortlist: Sen. Mark Kelly of Arizona, Gov. Josh Shapiro of Pennsylvania, and Gov. Tim Walz of Minnesota. Bangladesh's Prime Minister Sheikh Hasina resigned and fled the country as protesters stormed her residence. Bangladesh's military said it plans to form an interim government. Israel said its military is ready for a swift transition to offense as the nation braces for an upcoming retaliation from Iran. Meanwhile, the United Nations fired nine staffers from its main agency for Palestinian humanitarian relief, UNRWA. A probe found that they may have been involved in the Oct. 7, 2023, Hamas terror attack. ⭕️Watch in-depth videos based on Truth & Tradition at Epoch TV
The Verge's Nilay Patel, Alex Cranz, and David Pierce discuss announcements from Microsoft Build, OpenAI's trouble with Scarlett Johansson, new Sonos headphones, and more. Further reading:
Microsoft's big bet on building a new type of AI computer
Recall is Microsoft's key to unlocking the future of PCs
Here's the eight-inch Snapdragon PC for your Windows on Arm experiments
How does the Microsoft Surface Laptop stack up to the MacBook Air?
Microsoft Build 2024: everything announced
Windows now has AI-powered copy and paste
Microsoft is making File Explorer more powerful with version control and 7z compression
Microsoft Edge will translate and dub YouTube videos as you're watching them
Microsoft brings out a small language model that can look at pictures
Microsoft's new Copilot AI agents act like virtual employees to automate tasks
Microsoft outage took down Copilot, DuckDuckGo, and ChatGPT search features
OpenAI is 'in conversations' with Scarlett Johansson over the ChatGPT voice that sounds just like her
OpenAI pulls its Scarlett Johansson-like voice for ChatGPT
Lawyers say OpenAI could be in real trouble with Scarlett Johansson
Scarlett Johansson told OpenAI not to use her voice, and she's not happy they might have anyway
OpenAI didn't copy Scarlett Johansson's voice for ChatGPT, records show
OpenAI Just Gave Away the Entire Game
OpenAI's News Corp deal licenses content from WSJ, New York Post, and more
OpenAI strikes Reddit deal to train its AI on your posts
The US government is trying to break up Live Nation-Ticketmaster
The Sonos Ace headphones are here, and they're damn impressive
Sonos CEO Patrick Spence addresses the company's divisive app redesign
Here's an electric salt spoon that adds umami flavor
Apple needs to explain that bug that resurfaced deleted photos
Humane is looking for a buyer after the AI Pin's underwhelming
debut.
Email us at vergecast@theverge.com or call us at 866-VERGE11; we love hearing from you. Learn more about your ad choices. Visit podcastchoices.com/adchoices.