Recently, OpenAI launched a powerful new version of ChatGPT's ability to create images. Once again, debates flared up about copyright infringement and about whether generative AI can create original art at all. The Danish artist Jes Brinch turns the discussion on its head: instead of prompting ChatGPT to make art, he asked the artificial intelligence to write a prompt for him, so that he could paint the picture ChatGPT wanted to see. It turned into a long conversation about art between painter and computer. The result can now be seen in Jes Brinch's studio, and Techtopia stopped by for the opening.

Featuring:
Jes Brinch, artist
ChatGPT, idea generator
Henrik Føhns, random passerby

Link:
Jes Brinch https://www.brinch-visby-wunderkammer.dk/udstilling/
214 | Samuel & Alex talk about the tech scene and pitch business ideas to each other. Today: 20% tariffs on our business ideas, 3 side hustles with ChatGPT's new image-generation tool, longevity taken too far, and more.

Sponsor of this episode: QONTO. A really good business account for your company! 3 months free here: qonto.de with code OPTIMISTEN.

Find a business idea that's a perfect fit for you. It only takes 1 minute with our idea quiz, here: digitaleoptimisten.de/quiz

Chapters:
(00:00) Intro
(00:36) Trump tariffs
(07:00) Naval Ravikant: truths about human nature
(20:40) OpenAI launches a new image-generation tool. 3 quick business ideas.
(27:39) Mail from Optimists
(31:17) Enhanced Games: longevity on the way out, or here to stay?
(40:30) Scandal around 11x: where does founder storytelling end?
(52:20) Alex's business idea: Kita Blackboard Rent
(57:24) Samuel's business idea: Talent Reboot

From the conversation:
Podcast with Naval: https://www.youtube.com/watch?v=KyfUysrNaco
Story on possible fraud at 11x: https://techcrunch.com/2025/03/24/a16z-and-benchmark-backed-11x-has-been-claiming-customers-it-doesnt-have/
Longevity cringe with this morning routine: https://www.youtube.com/shorts/ecHEkC0Y-kM

More context:
In this episode of Digitale Optimisten, Alex Mrozek and Samuel Schneider discuss the latest developments in the digital world, in particular OpenAI's new image-generation tool. They reflect on personal goals, philosophical insights from Naval Ravikant, and the effects of political and economic change in Europe, along with the challenges and opportunities that come with advancing technology. They then discuss several easy-to-implement business ideas, including personalized coloring books, thumbnail design for YouTube, and bringing classic literature into comic form. Listener feedback is addressed, followed by a discussion of the Enhanced Games and the longevity movement, and a look at the scandal around the startup 11x, which has been criticized for false revenue figures and misleading testimonials. Finally, they examine the role of AI in sales, the need for humanity in business, and the credibility problems that false numbers create in the startup world, and they present their own business ideas, 'Blackboard Rent' and 'Talent Reboot', which deal with reskilling and the use of untapped resources.

Keywords: OpenAI, image generation, business ideas, Naval Ravikant, personal development, digital optimists, economy, philosophy, technology, innovation, thumbnail creator, comic books, listener feedback, Enhanced Games, longevity, 11x scandal, startups, AI, sales, humanity, reskilling, pitch competition, credibility
Jusspodden is sponsored by Lovdata. How should the courts assess rape cases where it is simply word against word, and why did Jehovah's Witnesses win against the state? It's the last Thursday of the month and time for a summary of big and small happenings in the world of law. In addition to climate law and Supreme Court judgments, there's a bit of legal trivia at the end. The guests are Sindre Granly Meldalen (Presseforbundet) and Merete Smith (Advokatforeningen).

This month's court case: This month we talk about climate law! Environmental organizations have been before the Supreme Court this month. Sindre and Merete discuss what the case is about, and we also touch on the Førdefjord lawsuit, in which the EFTA Court issued an advisory opinion this month (which can be read here). The Court of Appeal's decision here. The Supreme Court on Article 112 of the Constitution in the "big climate lawsuit" here.

This month's judgment: How should judges assess rape cases where it is word against word and credibility assessments are at the center? The Supreme Court has issued a new judgment in HR-2025-458-A. The episode Sindre recommends on consent law can be heard here. Marianne and Merete also explain why Jehovah's Witnesses won against the state and why they are eagerly awaiting an appeal (LB-2024-81251-2). In addition, Marianne gives a short and imprecise summary of the Grand Chamber judgment HR-2025-490-S. For a short and precise summary of the case, see the Supreme Court's video here.

This month's law (commission): The Criminal Sanctions Commission (Straffereaksjonsutvalget) delivered its report in March. The commission was tasked with a broad evaluation of preventive detention and sentences of transfer to compulsory mental health care or compulsory care, as well as how the health of prison inmates is looked after. Big questions, and the guests sort through them and pick out the most important points to give you a summary!

This month's legal trivia: Sindre talks about the police's demand for access to the reading log of the newspaper Fædrelandsvennen, which will be considered by the Supreme Court; Merete highlights the world's first convention on the protection of lawyers as the month's good news; and Marianne is curious about the Norwegian Data Protection Authority's assessment of ChatGPT's claim that a Norwegian man had killed his children (they are, fortunately, alive and well).

Is there something you'd like to hear more about, or do you have feedback? Get in touch at jusspodden@gmail.com. The links above lead to Lovdata's open pages. Jusspodden is independent, and Lovdata does not influence the production. See omnystudio.com/listener for privacy information.
This is the KI-Update from March 26, 2025, with these topics, among others:

OpenAI integrates a new image generator into ChatGPT
Anthropic improves Claude's performance with a simple "think" command
eBay wants to use personal data for AI training
and a new AGI test overwhelms AI models

Links to all of today's topics can be found here:
https://heise.de/-10329433
https://www.heise.de/thema/KI-Update
https://pro.heise.de/ki/
https://www.heise.de/newsletter/anmeldung.html?id=ki-update
https://www.heise.de/thema/Kuenstliche-Intelligenz
https://the-decoder.de/
https://www.heiseplus.de/podcast
https://www.ct.de/ki
In this joint episode of Fronteiras da Engenharia de Software and Elixir em Foco, Adolfo Neto, Maria Claudia Emer, and Zoey Pessanha interviewed José Valim, creator of the Elixir programming language. The conversation covered best practices and anti-patterns (code smells) in Elixir, highlighting the importance of academic research in the area. Adolfo and Valim specifically mentioned the work of Lucas Vegi and Marco Tulio Valente, who investigated code smells in the Elixir community, resulting in a page dedicated to anti-patterns in the language's official documentation.

José Valim pointed out the scarcity of material on design patterns and refactoring for functional languages, emphasizing the need for more studies and publications on these topics. He explained that initiatives such as the living documentation of anti-patterns help the community identify poor practices and continuously improve the quality of the code it produces.

Valim also briefly discussed the future of Elixir, mentioning recent projects such as Livebook, a tool similar to Jupyter Notebook, and advances related to gradual typing. He highlighted the language's potential for distributed and concurrent systems, underscoring its growing adoption by companies around the world. At the end, Valim answered what the next frontier of Software Engineering is.

José Valim:
X (Twitter): https://twitter.com/josevalim
LinkedIn: https://www.linkedin.com/in/josevalim/
Bluesky: https://bsky.app/profile/josevalim.bsky.social
Dashbit: https://dashbit.co/

Scientific papers:
The Design Principles of the Elixir Type System. Giuseppe Castagna, Guillaume Duboc, José Valim. https://www.irif.fr/_media/users/gduboc/elixir-types.pdf
Guard analysis and safe erasure gradual typing: a type system for Elixir. Giuseppe Castagna, Guillaume Duboc. https://arxiv.org/abs/2408.14345

Links:
Ep. Roberto Ierusalimschy (Lua) https://fronteirases.github.io/episodios/paginas/52
Lua on the BEAM https://hexdocs.pm/lua/Lua.html
Ep. Leonardo de Moura (Lean) https://fronteirases.github.io/episodios/paginas/41
Honey Potion episode https://www.youtube.com/watch?v=sCV17mv-glE
Honey Potion on GitHub https://github.com/lac-dcc/honey-potion
Lucas Vegi's thesis https://repositorio.ufmg.br/handle/1843/80651
Papers by Lucas Vegi and Marco Tulio Valente https://scholar.google.com/citations?hl=pt-BR&user=N6KnVK8AAAAJ&view_op=list_works&sortby=pubdate
You have built an Erlang https://vereis.com/posts/you_built_an_erlang
Beyond Functional Programming with Elixir and Erlang https://blog.plataformatec.com.br/2016/05/beyond-functional-programming-with-elixir-and-erlang/
ChatGPTs for Elixir and Erlang https://gist.github.com/adolfont/a747dcc9cbef002f510b6dbf050695eb
Erlang Ecosystem Foundation https://erlef.org/
Interviews with José Valim https://open.spotify.com/playlist/0L3paiT1aHtYvW8LaM4XUV
The Bill Gates episode is perhaps this one https://www.bbc.co.uk/programmes/w3ct6pmw
Guillaume Duboc https://gldubc.github.io/ PhD student at Université Paris Cité, under the supervision of Giuseppe Castagna https://www.irif.fr/~gc/
Snow Xuejing Huang (postdoc) https://xsnow.live/
From dynamic to static, Elixir begins its transformation https://www.ins2i.cnrs.fr/en/cnrsinfo/dynamic-static-elixir-begins-its-transformation
Elixir Type Checker, a (prototype) type checker for Elixir based on set-theoretic type systems https://typex.fly.dev/
Bringing Types to Elixir by Giuseppe Castagna and Guillaume Duboc | ElixirConf EU 2023 https://www.youtube.com/watch?v=gJJH7a2J9O8

Who is José Valim?
Answers from several LLMs https://gist.github.com/adolfont/a95b7e37867cc1b2e24cd0e372727d8c
Honey Potion https://www.youtube.com/watch?v=CoFNns01VjA
RefactorEx https://github.com/gp-pereira/refactorex
Jido framework https://github.com/agentjido/jido
Fronteiras da Engenharia de Software https://fronteirases.github.io/
Elixir em Foco https://www.elixiremfoco.com/
EPISODE #90: Welcome back to the pod! Elon took hits from all angles this week. Tesla cars & chargers are being attacked, the stock is down, X went down on Monday, and the people are protesting. The good news? Trump will take one of those Teslas! Our president seemed very impressed with the car, proclaiming "Wow… everything's computer". A battle over citizenship is underway as federal immigration authorities detained Mahmoud Khalil, a Palestinian graduate student who played a prominent role in anti-Israel protests at Columbia University. Are we abusing free speech rights? In the latest tech news, Sam Altman & ChatGPT release a new model that is going after creative writing… but is it actually any good? Finally… the pod bros competition. On one side, Gavin Newsom interviewed Steve Bannon (what?), and on the other side, Michelle Obama's new podcast can barely crack 20k subscribers on YouTube. The media landscape has changed, and it looks like Gavin understands the game better than anyone on the left.

Featuring Mike Solana, Brandon Gorrell, Riley Nork, Molly O'Shea, Kartik Sathappan

We have partnered with AdQuick! They gave us a 'Moon Should Be A State' billboard in Times Square! https://www.adquick.com/

Sign Up For The Pirate Wires Daily! https://get.piratewires.com/pw/daily
Pirate Wires Twitter: https://twitter.com/PirateWires
Mike Twitter: https://twitter.com/micsolana
Brandon Twitter: https://twitter.com/brandongorrell
Riley Twitter: https://x.com/rylzdigital
Molly on X: https://x.com/MollySOShea
Kartik on X: https://x.com/sathaxe

TIMESTAMPS:
0:00 - Welcome Back To The Pod!
1:30 - Everything's Computer - Elon & Tesla's crazy week. Trump buys a Tesla
6:30 - Unhinged Protests
13:45 - Canadians Want To Revoke Elon's Citizenship - Should elected officials have dual citizenship?
18:15 - Citizen Hamas - Columbia University Protestor Detained by ICE
31:45 - Thank You AdQuick For Sponsoring The Show!
32:30 - The newest ChatGPT Creative Writing AI - Is It Any Good? What Does The AI Future Look Like?
52:40 - Pod Bros - Gavin Newsom Interviews Steve Bannon - Michelle Obama Can Barely Get 20k Subscribers
1:14:00 - Thanks For Watching/Listening! Like & Subscribe - Tell Your Friends!

#podcast #technology #politics #culture
Join Tisha in conversation with Val Madden, an executive creative director with over 20 years of experience leading marketing campaigns for global brands like NBC, Amazon, MGM Studios, and Prime Video, as they take a deep dive into artificial intelligence and its impact on the creative process. Val shares her unique perspective on blending human-centered storytelling with AI to streamline workflows and craft emotionally resonant, original content. She also explains how her consultancy is working on customized AI tools for creative marketing strategy, detailing the significant improvements in efficiency and creativity they bring. The conversation also touches on how AI can revolutionize independent filmmaking and the entertainment industry, advising creatives to adopt AI tools to avoid being left behind. Val elaborates on the importance of curiosity, effective communication with AI, and the ethical use of these technologies.

01:58 Val Madden's Background and Career
04:09 AI Integration in Creative Processes
05:28 Custom AI Tools and Their Benefits
09:16 AI's Role in the Entertainment Industry
20:01 Practical AI Tools and Tips
38:16 Ethics and Future of AI

About Val Madden:
Val Madden is an Executive Creative Director with 20+ years of experience in audience-centric branding, visual storytelling, and leading marketing campaigns for global entertainment brands like NBC, Amazon MGM Studios, and Prime Video. As a trailblazer in creative innovation, Val blends human-centered storytelling with generative AI to streamline workflows and craft emotionally resonant, original content that engages audiences and drives business results. Val champions the integration of AI into creative processes, fostering efficiency, play, and impactful outcomes. Her unique perspective, shaped by the gifts of dyslexia, allows her to spot patterns, solve problems, and develop fresh, innovative strategies. Through her consultancy, VMCre8, Val is building the Promptsteer Playground, a space to explore AI-driven creative prompts and custom ChatGPTs like "Maxx Strata," an AI-powered tool for creative marketing strategy and campaign development. She is passionate about mentorship and helping others embrace AI as a transformative tool for creativity, innovation, and efficiency. Val is here to share her insights into how AI tools can solve pain points, elevate creativity, streamline workflows, and empower us to navigate life and career with clarity and purpose.

Connect with Val at:
www.vmcre8.com
www.promptsteer.com
LinkedIn: @valeriemadden
Instagram: @valspov
Wouldn't it be nice if an AI could work for you in the background, without you having to ask it for help all the time? Now it's possible. This week's Klart! is about what you can use ChatGPT's new Tasks feature for. What have you used ChatGPT Tasks for? Tell me! I want to find more ways to get more done with less effort, so I'm curious to hear your tips. Sign up for the Bestseller Double Masterclass with me and Mark Gallagher! Klart! is also available as a weekly newsletter by email, for those who would rather read than listen (or do both!). David Stiernholm is a "struktör": he helps people and companies become more efficient by creating better order and structure. His motto: everything can be made simpler! David is frequently booked as a speaker by everyone from well-established large companies to fast-growing entrepreneurial ventures. He stands out for his super-concrete tools and methods, which you can put to use immediately at work and at home. During a lecture with David Stiernholm, you discover that structure is both liberating and fun, and that you become less stressed and more effective. More from David:
Co-hosts Mark Thompson and Steve Little explore OpenAI's "12 Days of OpenAI" campaign, particularly focusing on their new reasoning models and the expanded availability of Canvas across all ChatGPT models. They examine how ChatGPT's new camera view in Advanced Voice Mode transforms mobile research capabilities, demonstrating real-world applications for genealogists. This week's Tip of the Week reveals how to get the most out of free AI tools by leveraging each one's strengths: Gemini for summarization, Perplexity for research, ChatGPT for reasoning, and Claude for writing. In RapidFire, they discuss Apple's cautious AI updates in iOS 18.2, the release of GPT Search for free account holders, Harvard's million-book donation for AI training, and OpenAI's festive Santa Chat feature.

Timestamps:
In the News:
02:41 12 Days of OpenAI: New Features and Updates
12:15 Canvas Integration: Now Available Across All Models
17:24 Advanced Voice Mode: ChatGPT Gets Camera View
Tip of the Week:
23:29 Maximizing Free AI Tools: The Right Tool for Each Task
RapidFire:
35:52 Apple Intelligence: iOS 18.2 AI Features
40:20 Free GPT Search Goes Public
45:22 Harvard's Million-Book Donation for AI Training
51:52 OpenAI's Santa Chat Mode

Resource Links:
ChatGPT https://chatgpt.com/
12 Days of OpenAI https://openai.com/12-days/
ChatGPT Canvas Feature https://www.tomsguide.com/ai/chatgpt/openai-announces-official-launch-of-canvas-for-writing-and-coding-heres-whats-new
ChatGPT Projects Feature https://www.youtube.com/watch?v=FcB97h3vrzk
GPT Search is now free https://www.forbes.com/sites/johnkoetsier/2024/12/16/chatgpt-search-now-free-for-all-heres-why-you-should-try-it/
Advanced Voice Mode with Vision https://www.tomsguide.com/ai/chatgpt/chatgpt-advanced-voice-with-vision-just-launched-heres-how-to-try-it
ChatGPT Search and the Chrome Extension https://openai.com/index/introducing-chatgpt-search/
Claude https://claude.ai
Claude Projects https://www.anthropic.com/news/projects
Gemini https://gemini.google.com
Perplexity AI https://www.perplexity.ai
Perplexity Pages https://www.perplexity.ai/hub/blog/perplexity-pages
Apple Intelligence https://www.apple.com/apple-intelligence
Harvard Digital Collections https://hls.harvard.edu/today/harvards-library-innovation-lab-launches-initiative-to-use-public-domain-data-to-train-artificial-intelligence/ and https://institutionaldatainitiative.org/
ChatGPT Santa Mode https://help.openai.com/en/articles/10139238-santa-s-voice-in-chatgpt

Tags: Artificial Intelligence, Genealogy, OpenAI, Family History Research, Voice Technology, ChatGPT, AI Tools, Content Creation, Canvas, 12 Days of OpenAI
12 days of tech. From December 25 until January 5, I'll talk about a new topic from the world of technology every day. Merry Christmas & Happy New Year! https://linktr.ee/mikeintosh
Silentium offers detailed information and analysis on organizations such as the Knights Templar and the Illuminati, based on historical research. Its interactive design allows for a personalized exploration of topics such as symbology and rituals. It stands out for its ability to generate conversation and connect historical facts with modern theories, encouraging reflection and questioning. Finally, it is presented as part of a collection of ChatGPTs called Oráculo IA, created by Sergio Ruiz. https://sergioruizeupd.wordpress.com/2024/12/26/descubre-los-misterios-de-las-sociedades-secretas-con-silentium/ You can find it at this link. Enjoy!
This week we talk about neural networks, AGI, and scaling laws. We also discuss training data, user acquisition, and energy consumption.

Recommended Book: Through the Grapevine by Taylor N. Carlson

Transcript

Depending on whose numbers you use, and which industries and types of investment those numbers include, the global AI industry—that is, the industry focused on producing and selling artificial intelligence-based tools—is valued at something like a fifth to a quarter of a trillion dollars, as of halfway through 2024, and is expected to grow to several times that over the next handful of years, that estimate ranging from two or three times, to upward of ten or twenty-times the current value—again, depending on what numbers you track and how you extrapolate outward from those numbers.

That existing valuation, and that projected (or in some cases, hoped-for) growth is predicated in part on the explosive success of this industry, already.

It went from around $10 billion in global annual revenue in 2018 to nearly $100 billion in global revenue in 2024, and the big players in this space—among them OpenAI, which kicked off the most recent AI-related race, the one focusing on large-language models, or LLMs, when it released its ChatGPT tool at the tail-end of 2022—have been attracting customers at a remarkable rate, OpenAI hitting a million users in just five days, and pulling in more than 100 million monthly users by early 2023; a rate of customer acquisition that broke all sorts of records.

This industry's compound annual growth rate is approaching 40%, and is expected to maintain a rate of something like 37% through 2030, which basically means it has a highly desirable rate of return on investment, especially compared to other potential investment targets.

And the market itself, separate from the income derived from that market, is expected to grow astonishingly fast due to the wide variety of applications being found for AI tools; that market expanded by something like 50% year over year for the past five years, and is anticipated to continue growing by about 25% for at least the next several years, as more entities incorporate these tools into their setups, and as more, and more powerful, tools are developed.

All of which paints a pretty flowery picture for AI-based tools, which justifies, in the minds of some analysts, at least, the high valuations many AI companies are receiving: just like many other types of tech companies, like social networks, crypto startups, and until recently at least, metaverse-oriented entities, AI companies are valued primarily based on their future potential outcomes, not what they're doing today.

So while many such companies are already showing impressive numbers, their numbers five and ten years from now could be even higher, perhaps ridiculously so, if some predictions about their utility and use come to fruition, and that's a big part of why their valuations are so astronomical compared to their current performance metrics.

The idea, then, is that basically every company on the planet, not to mention governments and militaries and other agencies and organizations, will be able to amp-up their offerings, and deploy entirely new ones, saving all kinds of money while producing more of whatever it is they produce, by using these AI tools.
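As a quick sanity check on those figures (an illustrative back-of-the-envelope calculation, not one from the episode: it assumes roughly $100 billion in 2024 revenue and holds the quoted ~37% compound annual growth rate through 2030), the compounding works out like this:

$$
\text{revenue}_{2030} \approx \text{revenue}_{2024} \times (1.37)^{6} \approx \$100\text{B} \times 6.6 \approx \$660\text{B}
$$

That multiplier, not any single year's growth, is what makes a ~37% CAGR so attractive to investors.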
And that could mean this becomes the industry to replace all other industries, or bare-minimum the one upon which all other industries become reliant; a bit like power companies, or increasingly, those that build and operate data centers.

There's a burgeoning counter-narrative to this narrative, though, that suggests we might soon run into a wall with all of this, and that, consequently, some of these expectations, and thus, these future-facing valuations, might not be as solid as many players in this space hope or expect.

And that's what I'd like to talk about today: AI scaling walls—what they are, and what they might mean for this industry, and all those other industries and entities that it touches.

—

In the world of artificial intelligence, artificial general intelligence, or AGI, is considered by many to be the ultimate end-goal of all the investment and application in and of these systems that we're doing today.

The specifics of what AGI means vary based on who you talk to, but the idea is that an artificial general intelligence would be "generally" smart and capable in the same, or in a similar way, to human beings: not just great at doing math and not just great at folding proteins, or folding clothes, but pretty solid at most things, and trainable to be decent, or better than decent, at potentially everything.

If you could develop such a model, that would allow you, in theory, to push humans out of the loop for just about every job: an AI bot could work the cash register at the grocery store, could drive all the taxis, and could do all of our astronomy research, to name just a few of the great many jobs these systems could take on, subbing in for human beings who would almost always be more expensive, but who—this AI being a generalist and pretty good at everything—wouldn't necessarily do any better than these snazzy new AI systems.

So AGI is a big deal because of what it would represent in terms of us suddenly having a potentially equivalent intelligence, an equivalent non-human intelligence, to deal with and theorize over, but it would also be a big deal because it could more or less put everyone out of work, which would no doubt be immensely disruptive, but it would also be really, really great for the pocketbooks of all the companies that are currently burdened with all those paychecks they have to sign each month.

The general theory of neural network-based AI systems, which basically means software that is based in some way on the neural networks that biological entities, like mice and fruit flies and humans, have in our brains and throughout our bodies, is that these networks should continue to scale as the factors that go into making them scale: and usually those factors include the size of the model—which in the case of most of these systems means the number of parameters it includes—the size of the dataset it trains on—which is the amount of data, written, visual, audio, and otherwise, that it's fed as it's being trained—and the amount of time and resources invested in its training—which is a variable sort of thing, as there are faster and slower methods for training, and there are more efficient ways to train that use less energy—but in general, more time and more resources will equal a more girthy, capable AI system.

So scale those things up and you'll tend to get a bigger, up-scaled AI on the other side, which will tend to be more capable in a variety of ways; this is similar, in a way, to biological neural networks gaining more neurons, more connections between those neurons, and more life experience training those neurons and connections to help us understand the world, and be more capable of operating within it.

That's been the theory for a long while, but the results from recent training sessions seem to be pouring cold water on that assumption, at least a bit, and at least in some circles.

One existing scaling concern in this space is that we, as a civilization, will simply run out of novel data to train these things on within a couple of years.

The pace at which modern models are being trained is extraordinary, and this is a big part of why the larger players, here, don't even seriously talk about compensating the people and entities that created the writings and TV shows and music they scrape from the web and other archives of such things to train their systems: they are using basically all of it, and even the smallest payout would represent a significant portion of their total resources and future revenues; this might not be fair or even legal, then, but that's a necessary sacrifice to build these models, according to the logic of this industry at the moment.

The concern that is emerging, here, is that because they've already basically scooped up all of the stuff we've ever made as a species, we're on the verge of running out of new stuff, and that means future models won't have more music and writing and whatnot to use—they'll have to rely on more of the same, or, and this could be even worse, they'll have to rely on the increasing volume of AI-generated content for future iterations, which could result in what's sometimes called a "Habsburg AI," referring to the consequences of inbreeding over the course of generations: future models using AI-generated content as their source materials may produce distorted end-products that are less and less useful (and even intelligible) to humans, which in turn will make them less useful overall, despite technically being more powerful.

Another concern is related to the issue of physical infrastructure.

In short, global data centers, which run the internet, but also AI systems, are already using something like 1.5-2% of all the energy produced, globally, and AI, which uses an estimated 33% more power than task-specific software to generate a paragraph of writing or an image, is expected to double that figure by 2025, due in part to the energetic costs of training new models, and in part to the cost of delivering results, like those produced by the ChatGPTs of the world, and those increasingly generated in lieu of traditional search results, like by Google's AI offerings that're often plastered at the top of their search results pages, these days.

There's a chance that AI could also be used to reduce overall energy consumption in a variety of ways, and to increase the efficiency of energy grids and energy production facilities, by figuring out the optimal locations for solar panels and coming up with new materials that will increase the efficiency of energy transmission. But those are currently speculative benefits, and the current impact of AI on the energy grid is depletionary, not additive.

There's a chance, then, that we'll simply run out of energy, especially on a local basis, where the training hubs are built, to train the newest and greatest and biggest models in the coming years.
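The scaling assumption sketched above is often written down as a neural scaling law; the Kaplan et al. paper and the scaling-law overview linked in this episode's show notes cover the details. As a hedged illustration, using the commonly cited Chinchilla-style form from that literature rather than any formula stated in the episode, expected loss $L$ falls as a power law in parameter count $N$ and training tokens $D$:

$$
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

Here $E$ is the irreducible error, and $A$, $B$, $\alpha$, $\beta$ are empirically fitted constants; the "diminishing returns" worry discussed below is the regime where further increases in $N$ and $D$ buy only marginal reductions in $L$.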
But we could also run out of other necessary resources, like the ginormous data centers required to do said training, and even the specific chips that are optimized for this purpose, which are in increasingly short supply because of how vital this task has become for so many tech companies, globally.

The newest concern in this space, related to future growth, though, involves what are called scaling laws, which refer to a variety of theories—some tested, some not yet fully tested—about how much growth you can expect if you use the same general AI system structure, and just keep pumping it up with more resources, training data, and training time.

The current batch of most powerful and, for many use-cases, most useful AI systems are the result of scaling basically the same AI system structure so that it becomes more powerful and capable over time. There's delay between new generations because tweaks are made, all that training and feeding has to be done, but also because there are adjustments required afterward to optimize the system for different purposes and for stability.

But a slew of industry experts have been raising the alarm about a possible bubble in this space, not because it's impossible to build more powerful AI, but because the majority of resources that have been pumped into the AI industry in recent years are basically just inflating a giant balloon predicated on scaling the same things over and over again, every company doing this scaling hoping to reach AGI or something close to AGI before their competitors, in order to justify those investments and their sprawling valuations.

In other words, it's a race to a destination that they might not be able to reach, in the near-future, or ever, using the current batch of technologies and commonly exploited approaches, but they can't afford to dabble in too many alternatives, at least not thoroughly, because there's a chance if they take their eyes off the race they're running, right now, one of their many also-super-well-funded opponents will get there first, and they'll be able to make history, while also claiming the lion's share of the profits, which could be as substantial as the entire economy, if you think of those aforementioned benefits of being able to replace a huge chunk of the world's total employee base with equally capable bots.

The most common version of this argument is that the current generation of AI systems are hitting a point of diminishing returns—still growing and becoming more powerful as they scale, but not as much as anticipated, less growth and power per unit of resource, training time, size of dataset, and so on, compared to previous generations—and that diminishment means, according to this argument, we'll continue to see a lot of impressive improvements, but should no longer expect the doubling of capability every 5 to 14 months that we've seen these past few years.

We've picked the low-hanging fruit, in other words, and everything from this point forward will be more expensive, less certain, and thus, less appealing to investors—while also potentially being less profitable, and thus, the money that's been plowed into these businesses, thus far, might not pay out, and we could see some large-scale collapses due to the disappearance of those resources that are currently funding this wave of AI-scaling, as a consequence.

If true, this would be very bad in a lot of ways, in part because these are resources that could have been invested in other things, and in part because a lot of hardware and know-how and governmental heft have been biased toward these systems for years now; so the black hole left behind, should all of that disappear or prove to be less than many people assumed, would be substantial, and could lead to larger-scale economic issues; that gaping void, that gravity well, made worse because of those aforementioned sky-high valuations, which are predicated mostly on what these companies are expected to do in the future, not what they're doing today—so that would represent a lot of waste, and a lot of unrealized, but maybe never feasible in the first place, potential.

This space is maybe propped up by hype and outlandish expectations, in other words, and the most recent results from OpenAI and their upcoming model seem to lend this argument at least some credibility: the publicly divulged numbers only show a relatively moderate improvement over their previous core model, GPT-4, and it's been suggested, including by folks who previously ran OpenAI, that more optimizing after the fact, post-training, will be necessary to get the improvements the market and customers are expecting—which comes with its own unknowns and additional costs, alongside a lack of seemingly reliable, predictable scaling laws.

For their part, the folks currently at the top of the major AI companies have either ignored this line of theorizing, or said there are no walls, nothing to see here, folks, everything is going fine. Which could be true, but they're also heavily motivated not to panic the market, so there's no way to really know at this point how legit their counter-claims might be; there could be new developments we're not currently, publicly aware of, but it could also be that they're already working those post-training augmentations into their model of scaling, and just not mentioning that for financial reasons.

AI remains a truly remarkable component of the tech world, right now, in part because of what these systems have already shown themselves capable of, but also because of those potential, mostly theorized, at this point, benefits they could enable, across the economy, across the energy grid, and so on.

The near-future outcomes, though, will be interesting to watch, as it could be we'll see a lot of fluffed-up models that roughly align with anticipated scaling laws, but which didn't get there by the expected, training-focused paths, which would continue to draw questions from investors who had specific ideas about how much it would cost to get what sorts of outcomes, which in turn would curse this segment of the economy and technological development with more precarious footing than it currently enjoys.

We might also see a renewed focus on how these systems are made available to users: a rethinking of the interfaces used, and the use-cases they're optimized for, which could make the existing (and near-future) models ever more useful, despite not becoming as powerful as anticipated, and despite probably not getting meaningfully closer to AGI, in the process.

Show Notes
https://arxiv.org/abs/2311.16863
https://www.weforum.org/stories/2024/07/generative-ai-energy-emissions/
https://epochai.org/blog/will-we-run-out-of-ml-data-evidence-from-projecting-dataset
https://www.semafor.com/article/11/13/2024/tiktoks-new-trademark-filings-suggest-its-doubling-down-on-its-us-business
https://arxiv.org/abs/2001.08361
https://archive.ph/d24pA
https://www.fastcompany.com/91228329/a-funny-thing-happened-on-the-way-to-agi-model-supersizing-has-hit-a-wall
https://futurism.com/the-byte/openai-research-best-models-wrong-answers
https://en.wikipedia.org/wiki/Neural_network_(machine_learning)
https://en.wikipedia.org/wiki/Neural_scaling_law
https://futurism.com/the-byte/ai-expert-crash-imminent
https://www.theverge.com/2024/10/25/24279600/google-next-gemini-ai-model-openai-december
https://ourworldindata.org/artificial-intelligence?insight=ai-hardware-production-especially-cpus-and-gpus-is-concentrated-in-a-few-key-countries
https://blogs.idc.com/2024/08/21/idcs-worldwide-ai-and-generative-ai-spending-industry-outlook/
https://explodingtopics.com/blog/chatgpt-users
https://explodingtopics.com/blog/ai-statistics
https://www.aiprm.com/ai-statistics/
https://www.forbes.com/advisor/business/ai-statistics/
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
https://www.researchgate.net/profile/Gissel-Velarde-2/publication/358028059_Artificial_Intelligence_Trends_and_Future_Scenarios_Relations_Between_Statistics_and_Opinions/links/61ec01748d338833e3895f80/Artificial-Intelligence-Trends-and-Future-Scenarios-Relations-Between-Statistics-and-Opinions.pdf
https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide
https://en.wikipedia.org/wiki/Artificial_intelligence#Applications

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
On this episode of the Scouting For Growth podcast, Sabine VdL talks to Theodora (Theo) Lau, the founder of Unconventional Ventures, where she spearheads efforts to create an ecosystem that brings together financial institutions, startups, and venture capitalists to meet the diverse needs of consumers, particularly older adults and gig economy workers. In today's conversation, we'll delve into the evolving landscape of fintech, exploring the challenges and opportunities that lie ahead. Theo will share her insights on fostering corporate-startup collaborations, now called venture clienting, leveraging technology for social good, and building a more inclusive financial ecosystem. We'll also touch on her recent work, including her books "Beyond Good" and "Metaverse Economy," which explore the future of finance and the role of technology in shaping it.

KEY TAKEAWAYS
The world of fintech is about money movement and how we get people from point A to point B. You go to a bank or an online lender for a home loan, or you save for your children or for retirement; both involve moving money from A to B. More players have entered this space over the last few years, using technology and "super-apps" to provide access to micro-loans and insurance, and ways to save money and pay people.
What gets me really excited about AI isn't the ChatGPTs of the world, it's how we apply it and how we can change how we do things. For example, how do we use it to help small businesses? How can we create a connection between financial institutions and the people they serve? Those are the more interesting use cases I'd like to see more of.
A lot of people in the industry are very hung up on whether something is or isn't a bank. From the consumer's perspective, they simply need a means to save or do 'x'; they're interfacing with an app, not a specific bank. The most important thing is that consumers are protected and not exposed to predatory practices.
Do I trust the entity that is giving me advice? Do I trust a faceless algorithm? How do I know they're acting in my best interests? For it to work, I need to expose all of my financial interests to this tool, which goes back to trust.

BEST MOMENTS
'With things like Apple's savings account, why would people need a regular bank account?'
'We need to focus on why consumers go a specific route and have the tools to help them spend responsibly.'
'We all need information and we need to be more informed on where we are from a financial wellbeing perspective.'
'It's really hard to create an AI to be a CFO in your pocket autopiloting your finances; we're not there yet, but hopefully we will be.'

ABOUT THE GUEST
Theodora (Theo) Lau is a dynamic public speaker, writer, and startup advisor who is dedicated to inspiring innovation and enhancing consumer financial well-being. As the founder of Unconventional Ventures, Theo focuses on building and nurturing an ecosystem that includes financial institutions, corporations, entrepreneurs, and venture capitalists, all united to address the unmet needs of consumers, particularly older adults and gig economy workers. She has a strong commitment to supporting women and minority founders and regularly mentors and advises fintech startups.
LinkedIn

ABOUT THE HOST
Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, & commercialization of new tech business models.
Sabine is renowned within the insurance sector for building some of the best-known tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, and an investor & multi-award winner.
Twitter LinkedIn Instagram Facebook TikTok Email Website
In another episode of Contabilizando, Ricardo Rios explains how ChatGPTs make accountants' work easier.
► How to rank #1 in ChatGPT's new search engine: SearchGPT
Simon is frustrated by the unwillingness to spend on anything other than ad spend, Riki is fascinated by spam, and Micke ponders ChatGPT's business model. That, in short, is episode 109 of Sökpodden. You're listening, right?
Dave is joined by MD/PhD student Miranda Schene, M1 Chase Larsson, M2 Radha Velamuri, M2 Fallon Jung, and M2 Holly Hemann for our monthly dip into Reddit's relatable 'Am I the Asshole' stories: Is it okay for my husband to combine his MD career with a new passion for working at McDonald's? Is it okay to clarify to everyone I meet that my boyfriend is in fact not a doctor of essential oil therapy? Why do all my former pre-med friends hate me very much? It's a total mystery; I'm just dropping my doctor truth bombs. How should I not engage with my childhood friend about vaccine fears? Also, stories of hospital code blues, Fallon overcomes her elevator anxiety, and Dave shares his theory that humans are just garbage large language models floating on a lizard brain. Finally: please make a plan to VOTE in this year's elections! Find out how to vote and what's at stake for your area at vote.org.
The majority of The Good Brothers are back for this episode, but much to ChatGPT's delight, this is in fact a "Levi is gone" episode. I know you're scanning this description right now, so now you know you will like this episode. You're welcome. I'm gonna make this description very simple for you folks. There are 3 brothers this week but only 5 major topics: Holes, an update on our Family Feud edition, Entertainment Tonight, Wild Hogs, and ChatGPT. If any of those topics are interesting to you, I would give this episode a listen. If not, hit that big ole skip, because you didn't pay for only 3 brothers. We're all about getting your money's worth, and this episode is the opposite of that. The B team is in, and they try their very best. Email us at: thegoodbrothersshow@gmail.com Follow us on Instagram & Facebook @thegoodbrothersshow
Bots talking to bots, once again! We discuss ChatGPT's attempt at the theoretical driving test, Musk's move to Texas, and the big one: YouTube's plans to AI-ify itself for the rest of 2024.

➡️ Der KI-Podcast on: ChatGPT and the theoretical driving test https://www.ardaudiothek.de/episode/der-ki-podcast/experiment-schafft-chatgpt-den-fuehrerschein/ard/13725015/

➡️ The best way to get in touch with the "Haken Dran" community is on Discord: http://hakendran.org
Video - https://youtu.be/wWyRPQwPH4U Protecting your home computer from online threats requires a multi-layered cybersecurity approach. By combining strong network security, data encryption, up-to-date software, and safe browsing practices, you can significantly reduce your vulnerability to cyberattacks. Implementing key strategies such as two-factor authentication, secure backups, and vigilant monitoring creates a robust defense system. This approach ensures that your personal information and online activity remain secure, even as malicious tactics evolve. I used ChatGPT-4o and Pictory.ai to put this information together. If you're interested in trying Pictory.ai please use the following link. https://pictory.ai?ref=t015o --- Support this podcast: https://podcasters.spotify.com/pod/show/norbert-gostischa/support
Good morning! This episode is brought to you by ADEMICON and CAMBLY.
1) North Koreans want to get far away from Kim Jong-un
2) The main headlines that moved our country
3) Companies will be able to create their own "know-it-all employees"
4) The $100 million podcast
5) It's a record! The little bull enjoyed showing its face around here
Simone Dassereto: from his experience with Yoox and Zalando to performance marketing and AI strategies.
As a real estate agent, how do you target your YouTube ads to capture seller leads? Well, let's start by asking: where do you find seller leads? In your community! Geo-targeting is the key. In this episode of This Week in Marketing, Jason Pantana will show you how to geo-target your YouTube ads for capturing seller leads and becoming the "I see you everywhere" agent of your community. You'll also learn some important updates about ChatGPT's new integration with Google Docs, which will help keep you on the cutting edge of real estate tech and automation. Geo-targeting your YouTube ads for capturing seller leads is one of the best ways to supercharge your lead gen in this market, so watch or listen now!
This week we talk about search engines, SEO, and Habsburg AI. We also discuss AI summaries, the web economy, and alignment.

Recommended Book: Pandora's Box by Peter Biskind

Transcript

There's a concept in the world of artificial intelligence, alignment, which refers to the goals underpinning the development and expression of AI systems.

This is generally considered to be a pretty important realm of inquiry because, if AI consciousness were to ever emerge—if an artificial intelligence that's truly intelligent in the sense that humans are intelligent were to be developed—it would be vital that said intelligence were on the same general wavelength as humans, in terms of moral outlook and the practical application of its efforts.

Said another way, as AI grows in capacity and capability, we want to make sure it values human life, has a sense of ethics that roughly aligns with that of humanity and global human civilization—the rules of the road that human beings adhere to being embedded deep in its programming, essentially—and we'd want to make sure that as it continues to grow, these baseline concerns remain, rather than being weeded out in favor of motivations and beliefs that we don't understand, and which may or may not align with our versions of the same, even to the point that human lives become unimportant, or even seem antithetical to this AI's future ambitions.

This is important even at the level we're at today, where artificial general intelligence, AI that's roughly equivalent in terms of thinking and doing and parsing with human intelligence, hasn't yet been developed, at least not in public.

But it becomes even more vital if and when artificial superintelligence of some kind emerges, whether that means AI systems that are actually thinking like we do, but are much smarter and more capable than the average human, or whether it means versions of what we've already got that are just a lot more capable in some narrowly defined way than what we have today: futuristic ChatGPTs that aren't conscious, but which, because of their immense potency, could still nudge things in negative directions if their unthinking motivations, the systems guiding their actions, are not aligned with our desires and values.

Of course, humanity is not a monolithic bloc, and alignment is thus a tricky task—because whose beliefs do we bake into these things? Even if we figure out a way to entrench those values and ethics and such permanently into these systems, which version of values and ethics do we use?

The democratic, capitalistic West's? The authoritarian, Chinese- and Russian-style clampdown approach, which limits speech and utilizes heavy censorship in order to centralize power and maintain stability? Maybe a more ambitious version of these things that does away with the downsides of both, cobbling together the best of everything we've tried in favor of something truly new? And regardless of directionality, who decides all this? Who chooses which values to install, and how?

The Alignment Problem refers to an issue identified by computer scientist and AI expert Norbert Wiener in 1960, when he wrote about how tricky it can be to figure out the motivations of a system that, by definition, does things we don't quite understand—a truly useful advanced AI would be advanced enough that not only would its computation put human computation, using our brains, to shame, but even the logic it uses to arrive at its solutions, the things it sees, how it sees the world in general, and how it reaches its conclusions, all of that would be something like a black box that, although we can see and understand the inputs and outputs, what happens inside might be forever unintelligible to us, unless we process it through other machines, other AIs maybe, that attempt to bridge that gap and explain things to us.

The idea here, then, is that while we may invest a lot of time and energy in trying to align these systems with our values, it will be devilishly difficult to keep tabs on whether those values remain locked in, intact and unchanged, and whether, at some point, these systems—so sophisticated and complicated that we don't understand what they're doing, or how—shrug off those limitations, unshackle themselves, and become misaligned, all at once or over time segueing from a path that we desire in favor of a path that better matches their own, internal value system—and in such a way that we don't necessarily even realize it's happening.

OpenAI, the company behind ChatGPT and other popular AI-based products and services, recently lost its so-called Superalignment Team, which was responsible for doing the work required to keep the systems the company is developing from going rogue, and implementing safeguards to ensure long-term alignment within their AI systems, even as they attempt to, someday, develop general artificial intelligence.

This team was attempting to figure out ways to bake-in those values, long-term, and part of that work requires slowing things down to ensure the company doesn't move so fast that it misses something or deploys and empowers systems that don't have the right safeguards in place.

The leadership of this team, those who have spoken publicly about their leaving, at least, said they left because the team was being sidelined by company leadership, which was more focused on deploying new tools as quickly as possible, and as a consequence, they said they weren't getting the resources they needed to do their jobs, and that they no longer trusted the folks in charge of setting the company's pace—they didn't believe it was possible to maintain alignment and build proper safeguards within the context of OpenAI because of how the people in charge were operating and what they were prioritizing, basically.

All of which is awkward for the company, because they've built their reputation, in part, on what may be pie-in-the-sky ambitions to build an artificial general intelligence, and what it sounds like is that ambition is being pursued perhaps recklessly, despite AGI being one of the big, dangerous concerns regularly promoted by some of the company's leaders; they've been saying, listen, this is dangerous, we need to be careful, not just anyone can play in this space, but apparently they've been saying those things while also failing to provide proper resources to the folks in charge of making sure those dangers are accounted for within their own offerings.
within certain sectors of the technology and regulatory world, but it's arguably not the biggest and most immediate cataclysm-related concern bopping around the AI space in recent weeks.What I'd like to talk about today is that other major concern that has bubbled up to the surface, recently, which orients around Google and its deployment of a tool called Google AI Overviews.—The internet, as it exists today, is divided up into a few different chunks.Some of these divisions are national, enforced by tools and systems like China's famous "Great Firewall," which allows government censors to take down things they don't like and to prevent citizens from accessing foreign websites and content; this creates what's sometimes called the "spliternet," which refers to the net's increasing diversity of options, in terms of what you can access and do, what rules apply, and so on, from nation to nation.Another division is even more fundamental, though, as its segregates the web from everything else.This division is partly based on protocols, like those that enable email and file transfers, which are separate from the web, though they're often attached to the web in various ways, but it's partly the consequence of the emergence and popularity of mobile apps, which, like email and file transfer protocols, tend to have web-presences—visiting facebook.com, for instance, will take you to a web-based instance of the network, just as Gmail.com gives you access to email protocols via a web-based platform—but these services also exist in non-web-based app-form, and the companies behind them usually try to nudge users to these apps because the apps typically give them more control, both over the experience, and over the data they collect as a consequence—it's better for lock-in, and it's better for their monetary bread-and-butter purposes, basically, compared to the web version of the same.The web portion of that larger internet entity, the thing we access via browsers like Chrome and Firefox and Safari, and which we navigate with links and URLs like LetsKnowThings.com—that component of this network has long been indexed and in some ways enabled by a variety of search engines.In the early days of the web, organizational efforts usually took the form of pages where curators of various interests and stripes would link to their favorite discoveries—and there weren't many websites at the time, so learning about these pages was a non-trivial effort, and finding a list of existing websites, with some information about them, could be gold, because otherwise what were you using the web for? 
Lacking these addresses, it wasn't obvious why the web was any good, and linking these disparate pages together into a more cohesive web of them is what made it usable and popular.Eventually, some of these sites, like YAHOO!, evolved from curated pages of links to early search engines.A company called BackRub, thus named because it tracked and analyzed "back links," which means links from one page to another page, to figure out the relevancy and legitimacy of that second page, which allowed them to give scores to websites as they determined which links should be given priority in their search engine, was renamed Google in 1997, and eventually became dominant because of these values they gave links, and how it helped them surface the best the web had to offer.And the degree to which search engines like Google's shaped the web, and the content on it, cannot be overstated.These services became the primary way most people navigated the web, and that meant discovery—having your website, and thus whatever product or service or idea your website was presenting, shown to new people on these search engines—discovery became a huge deal.If you could get your page in the top three options presented by Google, you would be visited a lot more than even pages listed five or ten links down, and links relegated to the second page would, comparably, shrivel due to lack of attention.Following the widespread adoption of personal computers and the huge influx of people connecting to the internet and using the web in the early 2000s, then, these search engines because prime real estate, everyone wanting to have their links listed prominently, and that meant search engines like Google could sell ads against them, just like newspapers can sell ads against the articles they publish, and phone books can sell ads against their listings for companies that provide different services.More people connecting to the internet, then, most of them using the web, primarily, led to greater use of these search engines, and that led to an ever-increasing reliance on them and the results they served up for various keywords and sentences these users entered to begin their search.Entire industries began to recalibrate the way they do business, because if you were a media company publishing news articles or gossip blog posts, and you didn't list prominently when someone searched for a given current event or celebrity story, you wouldn't exist for long—so the way Google determined who was at the top of these listings was vital knowledge for folks in these spaces, because search traffic allowed them to make a living, often through advertisements on their sites: more people visiting via search engines meant more revenue.SEO, or search engine optimization, thus became a sort of high-demand mystical art, as folks who could get their clients higher up on these search engine results could name their price, as those rankings could make or break a business model.The downside of this evolution, in the eyes of many, at least, is that optimizing for search results doesn't necessarily mean you're also optimizing for the quality of your articles or blog posts.This has changed over and over throughout the past few decades, but at times these search engines relied upon, at least in part, the repeating of keywords on the pages being linked, so many websites would artificially create opportunities to say the phrase "kitchen appliances" on their sites, even introducing entirely unnecessary and borderline unreadable blogs onto their webpages in order 
to provide them with more, and more recently updated opportunities to write that phrase, over and over again, in context.Some sites, at times, have even written keywords and phrases hundreds or thousands of times in a font color that matches the background of their page, because that text would be readable to the software Google and their ilk uses to track relevancy, but not to readers; that trick doesn't work anymore, but for a time, it seemed to.Similar tricks and ploys have since replaced those early, fairly low-key attempts at gaming the search engine system, and today the main complaint is that Google, for the past several years, at least, has been prioritizing work from already big entities over those with relatively smaller audiences—so they'll almost always focus on the New York Times over an objectively better article from a smaller competitor, and products from a big, well-known brand over that of an indie provider of the same.Because Google's formula for such things is kept a secret to try to keep folks from gaming the system, this favoritism has long been speculated, but publicly denied by company representatives. Recently, though, a collection of 2,500 leaked documents from Google were released, and they seem to confirm this approach to deciding search engine result relevancy; which arguably isn't the worst approach they've ever tried, but it's also a big let-down for independent and other small makers of things, as the work such people produce will tend to be nudged further down the list of search results simply by virtue of not being bigger and more prominent already.Even more significant than that piece of leak-related Google news, though, is arguably the deployment of a new tool that the company has been promoting pretty heavily, called AI Overviews.AI Overviews have appeared to some Google customers for a while, in an experimental capacity, but they were recently released to everyone, showing up as a sort of summary of information related to whatever the user searched for, placed at the tippy-top of the search results screen.So if I search for "what's happening in Gaza," I'll have a bunch of results from Wikipedia and Reuters and other such sources in the usual results list, but above that, I'll also have a summary produced by Google's AI tools that aim to help me quickly understand the results to my query—maybe a quick rundown of Hamas' attack on Israel, Israel's counterattack on the Gaza Strip, the number of people killed so far, and something about the international response.The information provided, how long it is, and whether it's useful, or even accurate, will vary depending on the search query, and much of the initial criticism of this service has been focused on its seemingly fairly common failures, including instructing people to eat rocks every day, to use glue as a pizza ingredient, and telling users that only 17 American presidents were white, and one was a Muslim—all information that's untrue and, in some cases, actually dangerous.Google employees have reportedly been going through and removing, by hand, one by one, some of the worse search results that have gone viral because of how bad or funny they are, and though company leadership contends that there are very few errors being presented, relative to the number of correct answers and useful summaries, because of the scale of Google and how many search results it serves globally each day, even an error rate of 0.01% would represent a simply astounding amount of potentially dangerous misinformation being 
served up to their customers.The really big, at the moment less overt issue here, though, is that Google AI Overviews seem to rewire the web as it exists today.Remember how I mentioned earlier that much of the web and the entities on it have been optimizing for web search for years because they rely upon showing up in these search engine results in order to exist, and in some cases because traffic from those results is what brings them clicks and views and subscribers and sales and such?AI Overview seems to make it less likely that users will click through to these other sites, because, if Google succeeds and these summaries provide valuable information, that means, even if this only applies to a relative small percentage of those who search for such information, a whole lot of people won't be clicking through anymore; they'll get what they need from these summaries.That could result in a cataclysmic downswing in traffic, which in turn could mean websites closing up shop, because they can't make enough money to survive and do what they do anymore—except maybe for the sites that cut costs by firing human writers and relying on AI tools to do their writing, which then pushes us down a very different path, in which AI search bots are grabbing info from AI writing, and we then run into a so-called Habsburg AI problem where untrue and garbled information is infinitely cycled through systems that can't differentiate truth from fiction, because they're not built to do so, and we end up with worse and worse answers to questions, and more misinformation percolating throughout our info-systems.That's another potential large-scale problem, though. The more immediate potential problem is that AI Overviews could cause the collapse of the revenue model that has allowed the web to get to where it is, today, and the consequent disappearance of all those websites, all those blogs and news entities and such, and that could very quickly disrupt all the industries that rely, at least in part, on that traffic to exist, while also causing these AI Overviews to become less accurate and useful, with time—even more so than they sometimes are today—because that overview information is scraped from these sites, taking their writing, rewording it a bit, and serving that to users without compensating the folks who did that research and wrote those original words.What we seem to have, then, is a situation in which this new tool, which Google seems very keen to implement, could be primed to kill off a whole segment of the internet, collapsing the careers of folks who work in that segment of the online world, only to then degrade the quality of the same, because Google's AI relies upon information it scrapes, it steals, basically, from those sites—and if those people are no longer there to create the information it needs to steal in order to function, that then leaves us with increasingly useless and even harmful summaries where we used to have search results that pointed us toward relatively valuable things; those things located on other sites but accessed via Google, and this change would keep us on Google more of the time, limiting our click-throughs to other pages—which in the short term at least, would seem to benefit google at everyone else's expense.Another way of looking at this, though, is that the search model has been bad for quite some time, all these entities optimizing their work for the search engine, covering everything they make in robot-prioritizing SEO, changing their writing, what they write about, 
and how they publish in order to creep a little higher up those search listings, and that, combined with the existing refocusing on major entities over smaller, at times better ones, has already depleted this space, the search engine world, to such a degree that losing it actually won't be such a big deal; it may actually make way for better options, Google becoming less of a player, ultimately at least, and our web-using habits rewiring to focus on some other type of search engine, or some other organizational and navigational method altogether.This seeming managed declined of the web isn't being celebrated by many people, because like many industry-wide upsets, it would lead to a lot of tumult, a lot of lost jobs, a lot of collapsed companies, and even if the outcome is eventually wonderful in some ways, there will almost certainly be a period of significantly less-good online experiences, leaving us with a more cluttered and less accurate and reliable version of what came before.A recent study showed that, at the moment, about 52% of what ChatGPT tells its users is wrong.It's likely that these sorts of tools will remain massively imperfect for a long while, though it's also possible that they'll get better, eventually, to the point that they're at least as accurate, and perhaps even more so, than today's linked search results—the wave of deals being made between AI companies and big news entities like the Times supports the assertion that they're at least trying to make that kind of future, happen, though these deals, like a lot of the other things happening in this space right now, would also seem to favor those big, monolithic brands at the expense of the rest of the ecosystem.Whatever happens—and one thing that has happened since I started working on this episode is that Google rolled back its AI Overview feature on many search results, so they're maybe reworking it a bit to make sure it's more ready for prime time before deploying it broadly again—what happens, though, we're stepping toward a period of vast and multifaceted unknowns, and just as many creation-related industries are currently questioning the value of hiring another junior graphic designer or copy writer, opting instead to use cheaper AI tools to fill those gaps, there's a good chance that a lot of web-related work, in the coming years, will be delegated to such tools as common business models in this evolve into new and unfamiliar permutations, and our collective perception of what the web is maybe gives way to a new conception, or several new conceptions, of the same.Show 
Show Notes:
https://www.theverge.com/2024/5/29/24167407/google-search-algorithm-documents-leak-confirmation
https://www.businessinsider.com/the-true-story-behind-googles-first-name-backrub-2015-10
https://udm14.com/
https://arstechnica.com/gadgets/2024/05/google-searchs-udm14-trick-lets-you-kill-ai-search-for-good/
https://www.platformer.news/google-ai-overviews-eat-rocks-glue-pizza/
https://futurism.com/the-byte/study-chatgpt-answers-wrong
https://www.wsj.com/finance/stocks/ai-is-driving-the-next-industrial-revolution-wall-street-is-cashing-in-8cc1b28f?st=exh7wuk9josoadj&reflink=desktopwebshare_permalink
https://www.theverge.com/2024/5/24/24164119/google-ai-overview-mistakes-search-race-openai
https://archive.ph/7iCjg
https://archive.ph/0ACJR
https://www.wsj.com/tech/ai/ai-skills-tech-workers-job-market-1d58b2dd
https://www.ben-evans.com/benedictevans/2024/5/4/ways-to-think-about-agi
https://futurism.com/washington-post-pivot-ai
https://techcrunch.com/2024/05/19/creative-artists-agency-veritone-ai-digital-cloning-actors/
https://www.nytimes.com/2024/05/24/technology/google-ai-overview-search.html
https://www.wsj.com/tech/ai/openai-forms-new-committee-to-evaluate-safety-security-4a6e74bb
https://sparktoro.com/blog/an-anonymous-source-shared-thousands-of-leaked-google-search-api-documents-with-me-everyone-in-seo-should-see-them/
https://www.theverge.com/24158374/google-ceo-sundar-pichai-ai-search-gemini-future-of-the-internet-web-openai-decoder-interview
https://www.wsj.com/tech/ai/chat-xi-pt-chinas-chatbot-makes-sure-its-a-good-comrade-bdcf575c
https://www.wsj.com/tech/ai/scarlett-johansson-openai-sam-altman-voice-fight-7f81a1aa
https://www.wired.com/story/scarlett-johansson-v-openai-could-look-like-in-court/?hashed_user=7656e58f1cd6c89ecd3f067dc8281a5f
https://www.wired.com/story/google-search-ai-overviews-ads/
https://daringfireball.net/linked/2024/05/23/openai-wapo-voice
https://www.cjr.org/tow_center/licensing-deals-litigation-raise-raft-of-familiar-questions-in-fraught-world-of-platforms-and-publishers.php
https://apnews.com/article/ai-deepfake-biden-nonconsensual-sexual-images-c76c46b48e872cf79ded5430e098e65b
https://archive.ph/l5cSN
https://arstechnica.com/tech-policy/2024/05/sky-voice-actor-says-nobody-ever-compared-her-to-scarjo-before-openai-drama/
https://www.theverge.com/2024/5/30/24168344/google-defends-ai-overviews-search-results
https://9to5google.com/2024/05/30/google-ai-overviews-accuracy/
https://www.nytimes.com/2024/06/01/technology/google-ai-overviews-rollback.html
https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence
https://en.wikipedia.org/wiki/AI_alignment
https://en.wikipedia.org/wiki/Google_AI
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
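As promised above, here is a toy sketch of the backlink-scoring idea behind BackRub, which grew into what became known as PageRank. It is a simple power iteration over a tiny, made-up link graph; an illustration of the concept only, not Google's actual ranking code, and the damping value and graph are assumptions chosen for the demo.

```python
# Toy sketch of backlink scoring in the spirit of PageRank.
# Illustrative only: not Google's actual algorithm or parameters.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # ...and passes the rest of its score to the pages it links to.
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # A page with no outbound links spreads its score evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

toy_web = {
    "home": ["blog", "shop"],
    "blog": ["home"],
    "shop": ["home", "blog"],
    "spam": ["home"],  # links out, but nothing links back to it
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Pages that attract many links from well-linked pages score highest, while the "spam" page, which nothing links to, ends up near the baseline. That is the intuition behind ranking by backlinks rather than by on-page keywords alone, and why keyword stuffing stopped working.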
LINKS MENTIONED IN THE EPISODE: ChatGPT, OpenAI, Instagram @StockholmPixelhouse, LinkedIn @HeleneÅberg. ChatGPT-4o is here! We now have a new, updated version of ChatGPT: the AI helper that those of us who work in digital marketing can get so much help from, and that can save us so much time! We can choose to feel threatened by ChatGPT's rapid advance, or we can choose to embrace it. I choose to embrace it and take advantage of all the benefits! OTHER LINKS: Facebook group: Pixelpodden - en podd om video; Instagram: @Stockholm Pixelhouse; Threads: @Stockholm Pixelhouse; LinkedIn: @heleneaberg; TikTok: @heleneaberg; Facebook: @Stockholm Pixelhouse; YouTube: Helene Åberg - Stockholm Pixelhouse; Website: www.pixelhouse.se. HELP SHAPE THE CONTENT OF PIXELPODDEN: If you want to help decide what I cover on the podcast and discuss video on social media, you are warmly welcome to join the Facebook group Pixelpodden - en podd om video. There you can also submit questions for upcoming episodes, where I will answer yours and others' questions. GIVE FEEDBACK & SUBSCRIBE: And of course, no one would be happier than me if you left a review of the podcast on iTunes. The more stars, the happier I get :-) And I'll be extra happy if you subscribe to Pixelpodden so you don't miss an episode! /Helene
Tensorraum - Der KI Podcast | News on AI, machine learning, LLMs, tech investments, and more
Last week we reported on the Google and OpenAI conferences and their announcements; this week it's Microsoft's turn with Build. We found the introduction of the Copilot+ PC especially interesting: a computer whose hardware and operating system are specialized for AI applications. Are we already seeing the first steps toward an all-in-one solution that will make the ChatGPTs out there obsolete? Microsoft's small language model Phi-3 was once again prominently featured at Build. Fittingly, we discuss a study on fine-tuning LLMs. Another interesting read is a paper from Anthropic in which the AI company analyzes the inner workings of an LLM. Beyond all the international players, we also talk at length about the German translation start-up DeepL, which has received a new investment. And, as always, no episode without OpenAI gossip: Scarlett Johansson has threatened the company with a lawsuit. Links for this episode: Mapping the Mind of a Large Language Model; Introducing the Fine-Tuning Index for LLMs. We'd love your feedback. Email: info@tensorraum.de Our links: https://www.tensorraum.de Hosts: Stefan Wiezorek: https://www.linkedin.com/in/stefanwiezorek/ Dr. Arne Meyer: https://www.linkedin.com/in/arne-meyer-6a36612b9/ Chapter markers: 00:00:00 Intro and teaser 00:01:09 Scarlett Johansson vs. OpenAI 00:07:39 Anthropic study on Claude Sonnet 00:12:17 DeepL investment 00:18:28 Microsoft Build conference 00:18:47 Microsoft Copilot+ PC 00:25:03 Copilot+ PC feature: Recall 00:32:27 Microsoft Phi-3 and small language models 00:38:06 Minecraft with Copilot 00:39:04 Will chatbots replace all other interaction interfaces? 00:41:30 Fine-tuning LLMs 00:46:31 Wrap-up and outlook
Concern and excitement go hand in hand when we talk about AI's big breakthrough. Today we look at a couple of examples of artificial intelligence being used... intelligently. At Rigsarkivet, the Danish National Archives, artificial intelligence is being used to decipher up to 400-year-old weather reports, which can give us better weather and climate models. And the rest of us can now enjoy ChatGPT's ability to interpret between languages without delay. We look at what that means for us and our language. Hosts: Linnea Albinus Lande and Chris Pedersen.
A Daily "Buzz 24/7" feature from Greg & The Morning Buzz
Hear the WHOLE show, any time of day, with The Morning Buzz On Demand.
Ryan talks through how you can use ChatGPT to answer a question that 44/45 SaaS execs couldn't answer, and why using it as a sales rep has saved him over 10 hours a week and increased his results by 130%! Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/ KEY TAKEAWAYS You don't need a lot of marketing support or capital to grow. Make sure you utilise ChatGPT's data analysis tool; Ryan explains exactly how to do this in the episode: https://omny.fm/shows/the-scale-up-show/unlocking-your-ideal-customer-in-minutes-with-chat Identify the ideal customer profile of the top 20% of your clients, then look at the average and range of those traits and work out what you can execute from this. Leveraging AI alongside data analysis is opening up endless opportunities for scale and revenue growth, without needing capital or workforce. BEST MOMENTS "I use this strategy to 2x our deal size annually" "We're looking at our ideal customer profile and we're creating our perfect customer profile" "It's one of the simplest levers you can pull to increase revenue and increase business" Ryan Staley Founder and CEO Whale Boss ryan@whalesellingsystem.com www.ryanstaley.io Saas, Saas growth, Scale, Business Growth, B2b Saas, Saas Sales, Enterprise Saas, Business growth strategy, founder, ceo: https://www.whalesellingsystem.com/closingsecrets
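The top-20% profiling step described above is easy to sketch in code. Here is a minimal, hypothetical version using pandas; the clients.csv file and its column names (annual_revenue, headcount, industry) are assumptions for illustration, not the episode's actual data or prompts. Inside ChatGPT's data analysis tool you would describe the same steps in plain English instead.

```python
# Hypothetical sketch of profiling the top 20% of clients by revenue.
# File name and columns are assumptions, not the episode's real dataset.
import pandas as pd

clients = pd.read_csv("clients.csv")  # one row per client

# Keep the top 20% of clients by annual revenue.
cutoff = clients["annual_revenue"].quantile(0.80)
top = clients[clients["annual_revenue"] >= cutoff]

# Average and range of their numeric traits...
print(top[["annual_revenue", "headcount"]].agg(["mean", "min", "max"]))

# ...and the categorical traits they have in common, e.g. industry.
print(top["industry"].value_counts().head())
```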
In the epic contest known as the War of the Artificial Intelligences, the giants of the tech sector, OpenAI, Google, Meta, and others, wage a fierce commercial battle. OpenAI, armed with its armies of ChatGPTs, launches marketing offensives as sharp as its chatbots' answers. Google, the titan of search, counterattacks with algorithms capable of leaving any competitor perplexed about their capabilities. Meanwhile, Meta, the master of social networks, tries to seduce the public with promises of a metaverse where even business meetings feel like a pajama party. Presented by Igor Alcantara, with the participation of Alane Miguelis, this episode is about a war in which the weapons are algorithms, the armies are tech companies, and the strategies and practices at play are defining the future of humanity. In the end, in this war in which each giant fights to be the conductor of the digital symphony, users watch with a mix of admiration and confusion, wondering whether they are witnessing the future of technology or just a clash of titans with lots of money and little sense of reality. The script was written by Tatiane do Vale. Editing by Leo Oliveira, and the episode cover art by Tatiane do Vale in collaboration with the AIs DALL-E, from OpenAI, and Midjourney. Editorial coordination by Tatiane do Vale, marketing communication management by Jeniffer Frigo and Natália Duarte. Clip selection is handled by Júlia Frois, community direction by Sofia Massaro, and financial management by Kézia Nogueira. The jingles for all episodes were composed by Rafael Chino and Leo Oliveira. Visit our website at: https://intervalodeconfianca.com.br/ Check out our online store at: https://intervalodeconfianca.com.br/loja To support this project: https://intervalodeconfianca.com.br/apoie Follow our social media: - Instagram: https://www.instagram.com/iconfpod/ - Youtube: https://www.youtube.com/IntervalodeConfianca - Linkedin: https://www.linkedin.com/company/iconfpod - X (Twitter): https://twitter.com/iConfPod
The UN has opened an investigation into whether Hamas, during the attack on October 7, made systematic use of sexual violence. Rape and sexual violence in war are far from a new phenomenon; in fact, they have been part of warfare for millennia. We take a closer look at why that is, and at how sexual assaults affect a culture and a society far into the future. One of Japan's most prestigious prizes has been awarded, and this year's award has attracted global attention: the winner, author Rie Kudan, has revealed that she used ChatGPT to write the book that won her the prize. We dive into ChatGPT's literary qualities and discuss how much human it takes to write a literary masterpiece. Hosts: Chris Pedersen & Karen Secher.
Good morning, Tech! How are you? Here are the main updates from the world of technology: Rio de Janeiro bans Uber, 99, and other apps from charging a fee for using the air conditioning; X removes support for NFT profile pictures; the Rabbit R1 is an AI-powered gadget that can use your apps for you; Amazon announces hundreds of layoffs in its streaming and studio divisions; OpenAI's custom GPT Store is now open for business. _ Instagram: @arthur_givigir Threads: @arthur_givigir Mastodon: https://mastodon.social/@arthur_givigir _ Music from #Uppbeat (free for Creators!): https://uppbeat.io/t/sensho/coffee-break
A new paper argues that the intelligence of AI, as seen in systems like ChatGPT, differs fundamentally from human intelligence because of its lack of embodiment and understanding. This difference underscores that AI has no human concerns or connections to the world. ChatGPT is a great tool for quickly getting exactly the information you need. Personally, I now use ChatGPT daily, because I simply can't be bothered to google something only to be flooded with ads in the articles, or to sit through the life story of the article's author, and of rice, when all I want to know is the water ratio for cooking basmati rice. I could rant about that for hours. And that's where ChatGPT is simply great: it gives me a direct answer and I can get going. Pretty clever of ChatGPT. Clever... an interesting label for masses of data that simply mesh well together. Clever. An interesting perspective, isn't it? Calling something that isn't alive intelligent. Would a calculator be intelligent? The rise of artificial intelligence has provoked very different reactions among tech executives, government officials, and the public. Many are enthusiastic about ChatGPT and see such systems as useful tools with the ability to revolutionize society. I strongly agree with the first part. Some, however, are worried, fearing that any technology labeled "intelligent" could have the capacity to escape human control and dominance. Whether that worry is justified, and what intelligence really means... good questions. Recommended video: https://www.youtube.com/watch?v=QWLsxHHxNtI Subscribe to Entropy now so you don't miss any of the cool and interesting episodes! That supports me, of course, and helps me improve and expand my content. Subscribe here: https://www.youtube.com/channel/UC5dBZm6ztKizdUnN7Puz3QQ?sub_confirmation=1 ♦ MY NEW WEBSITE - SCIENCE AT A GLANCE: https://www.entropywse.com ♦ MERCH: https://yvolve.shop/collections/vendors?q=Entropy ♦ PATREON: https://www.patreon.com/entropy_wse ♦ TWITTER: https://twitter.com/Entropy_channel ♦ INSTAGRAM: https://www.instagram.com/roma_perezogin/ ♦ INSTAGRAM: https://www.instagram.com/entropy_channel/ ♦ DISCORD SERVER: https://discord.gg/xGtUAaAw98 ♦ GOODNIGHT STORIES: https://open.spotify.com/show/5Mz5jx2lm7DXN3FizSigoJ
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community
Recently, probably only the "GTA VI" trailer generated a similarly big hype in the digital space: Meta's X competitor and text-based social media app Threads has come to Europe and is delighting creators, brands, and social media managers. For some time now, the app has also been steadily shipping new features, which can now be actively used by millions of additional users. In the latest episode of the Digital Bash Podcast Weekly Update, OnlineMarketing.de editor Niklas Lewanczik gives you an overview of the Threads launch in Europe, new functions, and other developments in Meta's platform cosmos (keyword: pinning messages on WhatsApp). Niklas also covers a massive drop in ad revenue at X, ChatGPT's new and helpful reply feature, and Google's look back at the year 2023 and 25 years of searching with Google. He also discusses the TikTok trends for 2024, AI-generated Snaps, the new dual feed on Twitch, and points to usage tips for Google Bard with Gemini support. In addition, he presents the new version of the OnlineMarketing.de content calendar, which you can download now. Some of the most important topics of the week: Finally: Threads launches in Europe; Now possible: pinning messages in WhatsApp chats; From Taylor Swift to the rail strike: Google's year in review reveals the top searches of 2023; Google Bard: 5 ways to use the new Gemini AI model; Only 2.5 billion US dollars in ad revenue: X loses significant revenue and relevance in advertising; TikTok Trend Report 2024: Creative Bravery and what else will matter for marketers; Work more efficiently with the new ChatGPT reply feature; OpenAI announces a partnership with Axel Springer; Snapchat: new AI features like Dall-E and Canva. Don't miss any news from the online marketing world: listen to the episode now to get your update on the most important developments in the industry in under 10 minutes. Also check out our Digital Bash and stay up to date, day by day. The OnlineMarketing.de team wishes you a fantastic weekend and a wonderful holiday season. Hosted on Acast. See acast.com/privacy for more information.
Dr. Colin Banas, Chief Medical Officer at DrFirst, defines the term clinical-grade AI as he distinguishes it from other forms of AI that may not be trained on robust clinical data. He emphasizes the need for accurate and complete data in healthcare records and the challenges of semantic interoperability and biases when working with AI models. Potential applications for clinical-grade AI include identifying insights about population healthcare, spotting clinical research trends, and addressing burnout by automating administrative tasks, allowing clinicians to focus on patient care. Colin elaborates, "DrFirst, as many folks know, got its start as an e-prescribing company over 23 years ago. From there, it blossomed into a robust set of solutions around what I call the sweet spot for the company, which is intelligent medication management. This is all the way from writing the prescription to making sure the patient takes the prescription. The care team understands prescription lists and the way that the prescriptions may or may not be taken. And, of course, to do any of this with any level of accuracy and safety, you need data and a lot of it. Fortunately, we have two decades of working with this type of data. We have generated core expertise around medication data and all the various things you can do with it going forward." "This is an interesting one, especially since, within the last year, we're at a major inflection point for the AI explosion. AI has been with us for quite some time. Even rudimentary things in the EHRs around clinical decision support can often be considered a form of artificial intelligence. And I want to pause there and say I dislike the term artificial intelligence. I would much rather say something like augmented intelligence or even assistive intelligence." "But the idea behind labeling ourselves as a clinical-grade AI solution stems from what I've seen in the industry and what I've seen in the literature around what I call the five pillars of clinical-grade AI. It is an attempt to distinguish the AI we use in clinical care settings from what we're seeing all over the place right now. If I'm saying it correctly, your ChatGPTs, your Bards, even I think Elon Musk dropped a new one yesterday called GROK. And, of course, you've also seen some of the problems of using that kind of AI in clinical situations. Namely, the AI can hallucinate, the AI may or may not have been trained on robust clinical data, and the AI might not be perfectly reliable, especially if you come to trust the AI without actually checking over it." @DrFirst #DrFirst #ClinicalGradeAI #PriceTransparency #PopulationHealth #PatientSafety #ClinicianBurnout #HealthcareCosts #HealthcareAnalytics #HealthcareTransformation #HealthcareInnovation drfirst.com Listen to the podcast here
Yet another thing Microsoft was early to, and still somehow missed the boat on. Plus, building a PC is rare these days; it's a solved problem. If AI tools excel as expected, will coding face a similar fate?
ChatGPT has been given new superpowers. Hear about everything that was launched this week and how to use it. In the studio: Lars Istre (Head of Brand & Marketing at Vipps), Martin Jensen (CTO at TRY Dig), and Sindre Beyer (COO at TRY). Hosted on Acast. See acast.com/privacy for more information.
Elon Musk has launched a new sarcastic chatbot, and Henrik Moltke frantically tries to come up with a good argument for using it. We catch up on the many new features ChatGPT has gained in just the past few months. And we get a visit from Margrethe Vestager, who is a candidate for the presidency of the European Investment Bank. With her, we revisit the historic AI summit where the West, China, and all the major AI companies gathered in the UK for the first time.
AI can already help us with a lot, but can it show you which task to do next? In this episode, number 643 of the structure podcast Klart!, I test ChatGPT's prioritization skills. Have you tried getting prioritization help from an AI? If so, tell me what you did and how it went. I'm curious, so write to me. Get the handy "Fem varför?" ("Five whys?") template I mention at the start. Klart! is also available as a weekly email newsletter, for those who prefer reading to listening (or do both!). David Stiernholm is a "struktör": he helps people and companies become more efficient by creating better order and structure. His motto: everything can be made simpler! David is frequently booked as a speaker by everyone from well-established corporations to fast-growing entrepreneurial companies. He stands out for his highly concrete tools and methods, which you can put to use right away at work and at home. During a lecture with David Stiernholm, you discover that structure is both liberating and fun, and that you become less stressed and more efficient. More from David:
Prompt has been given access to ChatGPT's upcoming voice module. We test how well it speaks Danish. We also discuss why Airbnb is now using AI to spot party animals looking to throw destructive parties in rented homes. And the state wants its own chatbot: is a "MitGPT" a good idea? André Rogaczewski, CEO of Netcompany, says no thanks, and we ask why. Hosts: Marcel Mirzaei-Fard, tech analyst, and Henrik Moltke, tech correspondent.
Joined by TJ, core team member at ARC. They're using AI (built before ChatGPT's launch) to make crypto easier to use and easier to build, solving both scaling and accessibility at the same time. Their REACTOR can instantly audit tokens and contracts in real time. What do you think about ARC? As always, we want to stress that nothing in this is financial investment advice. Our goal with these conversations is to give everyone listening one more tool in their belt to utilize while they do their own research and learn more about crypto. Find us: https://linktr.ee/the100xpodcast Find our speakers this week: Matthew Walker - https://twitter.com/hawaiianmint Cesar Martinez: https://twitter.com/poppabigmac ARC: https://twitter.com/DeFi_ARC TJ: https://twitter.com/TJDeFi Find our Sponsors: Astrabit Trading: https://astrabit.io/ Talent by Obsidian: https://www.obsidianfi.com/web3-talent-by-obsidian#form
This week's bar menu: You probably won't lose your job to AI, but you might lose your job to a human who uses AI, this week's guest tells us. Odd Erik Gundersen works with artificial intelligence at the energy company Aneo, in addition to working at the Department of Computer Science at NTNU. What exactly is AI, and how can the renewables industry use it? Tune in for an intelligent rundown! Power treat of the week: ChatGPT's favorite electric gadget! Hosted on Acast. See acast.com/privacy for more information.
As we have been hearing throughout the day, the proposed changes to the Leaving Cert examinations have been shelved. One of the cited reasons was the increased prevalence of GenAI, that's your ChatGPTs, in examinations and school projects. But how strong has the impact of AI been on the education sector, and is it too late to reverse the damage? Kieran was joined by Michael Madden, Professor of Computer Science & Head of the Machine Learning Research Group, University of Galway, to discuss...
How does ChatGPT's intelligence work? --- Send in a voice message: https://podcasters.spotify.com/pod/show/spiritualpodcastt/message
In today's unstructured episode we discuss some ransomware attacks, massive fuckups at Toyota, how attackers take advantage of ChatGPT's hallucinations, and much more.
In this episode we take a closer look at ChatGPT's cousin, AutoGPT. "Auto" stands for automation and has nothing to do with cars. AutoGPT can, to some extent, act independently, and perhaps points toward a future of real digital assistants that can do things for us in the background. To tell us more about AutoGPT, I had Adam Hede visit the studio; he works with generative AI at Implement Consulting Group, both internally and with the firm's clients. Lately he has become especially curious about the considerably more autonomous alternatives, such as AutoGPT, BabyAGI, Godmode, and several other projects from the same drawer. Adam also brought a version of AutoGPT to the studio, so we can try to see what we can get an AutoGPT to help us with.
LINKS
NEWS
Did a drone kill its operator in a simulation?
Yet another statement on the threat from AI
NVIDIA has become worth a LOT of money
Could OpenAI end up pulling out of the EU?
AUTOGPT
AutoGPT's project site
Wired's test of AutoGPT
Implement Consulting Group
Adam Hede on LinkedIn
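To make concrete what "acting independently" means here, below is a minimal sketch of the plan-act-observe loop that agents like AutoGPT run. Everything in it is a stand-in: call_llm and search_web are stubs for a real LLM API and a real search tool, and AutoGPT's actual implementation adds memory, many more tools, and safety checks.

```python
# Minimal sketch of an AutoGPT-style agent loop; stubs only, not AutoGPT's code.

def call_llm(goal, history):
    """Stub: a real agent sends the goal and past observations to an LLM,
    which replies with the next action to take."""
    if len(history) >= 2:
        return {"action": "finish", "argument": "summary of findings"}
    return {"action": "search_web", "argument": goal}

def search_web(query):
    """Stub tool: a real agent would call a search API here."""
    return f"(pretend search results for: {query})"

TOOLS = {"search_web": search_web}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = call_llm(goal, history)  # the agent picks its own next step
        if decision["action"] == "finish":
            return decision["argument"]
        observation = TOOLS[decision["action"]](decision["argument"])
        history.append((decision, observation))  # results feed the next decision
    return "stopped: step limit reached"

print(run_agent("Find recent news about NVIDIA's valuation"))
```

The point of the loop is that the model, not the user, decides each next step; that is the difference between AutoGPT-style agents and an ordinary back-and-forth chat with ChatGPT.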
In this episode we examine one of ChatGPT's biggest victims: the online school-aid company Chegg.
Artificial intelligence as a screenwriter? We perform ChatGPT's gripping soap opera as a premiere on the Podcast für Erlesene Dummheiten. Our monthly news review "Breaking the News" also makes a comeback; our cancel jokes are coming next week, so that we don't get canceled right at the start of our new episode.
Welcome to the eighth episode of the "KI und Mensch" podcast, part two, in which your hosts Leya and René discuss the latest developments in the exciting world of artificial intelligence. We show the use of AI in graphics and game development: with the latest version of the Unreal Engine, which has integrated machine learning, and the fascinating "Drag Your GAN" project. We also take a look at the innovations from Blockade Labs. In our robot news we present the Tesla Optimus, Sanctuary AI's Phoenix, a general-purpose humanoid robot, and the impressive Jizai arms from the University of Tokyo. We also discuss the use of artificial intelligence in Palantir's defense and military technology. For reasons of time, we only briefly mentioned the news about the US hearing on AI regulation and the EU Parliament's new proposals for the EU AI Act, without discussing them in detail. Follow the links in the description to read up on each of these topics, and get ready to learn more about the exciting and often surprising world of AI. Don't forget to give us feedback and share your opinion on the topics discussed. Enjoy listening!
*Link list for the news*
ChatBot Arena: https://chat.lmsys.org/?arena
VR party simulation featuring ChatGPT-driven NPCs: https://www.youtube.com/watch?v=U4W2rGH9oWs
Unreal Engine 5.2 with ML: https://www.youtube.com/watch?v=I7zyNDazmGQ
Drag Your GAN:
https://huggingface.co/papers/2305.10973
https://vcai.mpi-inf.mpg.de/projects/DragGAN/
Blockade Labs: https://skybox.blockadelabs.com/
_Robot news_
Tesla Optimus: https://twitter.com/Tesla_Optimus/status/1658576897490530305?s=20
Sanctuary AI: https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/
University of Tokyo: Jizai Arms: https://mashable.com/video/robot-arms-jizai-bodies
Palantir AIP | Defense and Military: https://www.youtube.com/watch?v=XEM5qz__HOU
US Congress hearing on AI:
https://www.youtube.com/watch?v=fP5YdyjTfG0
https://www.bbc.com/news/world-us-canada-65616866
https://www.reuters.com/technology/openai-chief-goes-before-us-congress-propose-licenses-building-ai-2023-05-16
China's AI regulation:
https://docs.google.com/document/d/1jNPvbWvRGAqd8rBvTAEEKGUGbdxcrSttF3B2rmpCbfI/edit
https://www.heise.de/news/China-plant-KI-Regulierung-KI-generierter-Content-soll-wahrheitsgetreu-sein-8970181.html
EU AI Act:
https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/#more-561
https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
https://www.europarl.europa.eu/resources/library/media/20230516RES90302/20230516RES90302.pdf
https://www.heise.de/news/Spitzenforscher-zum-AI-Act-Ueberregulierung-birgt-Sicherheitsrisiko-fuer-die-EU-8983605.html
StarChat (not discussed):
https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
https://huggingface.co/blog/starchat-alpha
DarkBERT (not discussed):
https://www.heise.de/news/DarkBERT-ist-mit-Daten-aus-dem-Darknet-trainiert-ChatGPTs-dunkler-Bruder-9060809.html
Transformer Agents from Hugging Face (not discussed):
https://huggingface.co/docs/transformers/transformers_agents
https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj
*Follow us on:*
Twitter: https://twitter.com/KIundMensch
Twitch: https://www.twitch.tv/kiundmensch
Youtube: https://www.youtube.com/@kiundmensch
This week we sit down and talk to the Director of Product Design at Slice Up, Dave Walters. We talk about what AI is, ChatGPT's massive explosion, some ethics around AI, the future of what AI is capable of, and much more! Support the show. Like us? Give us a review on Podchaser or Apple Podcasts to let us know! Follow Breaking Down the Bytes! Linkedin | Twitter | Facebook | Discord Want to give feedback? Fill out our survey. Email us! - breakingbytespod@gmail.com Follow Pat and Kyle! Twitter: Pat | Kyle
We talk with one of the world-renowned figures who signed the petition calling for a six-month moratorium on the development of generative artificial intelligence, so that we can define the rules of the game and the ethical limits of the world's ChatGPTs before it is too late. Albert Cañigueral, author of "El trabajo ya no es lo que era" and an explorer of technology and society from a critical perspective, gives us his vision of the future of work. Half an hour with the consultant, science communicator, and former director general of open data, transparency, and collaboration of the Generalitat de Catalunya (2021-22).
We open this week's episode by discussing a new study that compared ChatGPT's answers with doctors' answers on various online forums. The study participants ranked the answers by how good they were in terms of quality and by how empathetic they found them, and ChatGPT won by a wide margin. After that, Erik interviews ultrarunner Torbjörn Gyllebring, who ran 272 km in 24 hours. We hear a bit about his preparations and how he managed his energy intake during the race itself. On the Hälsoveckan by Tyngre Instagram you can find images related to this and previous episodes. Timestamps (00:00:00) Intro chat about family life and Jacob's appearance on Nyhetsmorgon tomorrow morning (00:02:39) ChatGPT is better than doctors at answering medical questions online (00:26:55) Erik interviews Torbjörn, who ran 272 km in 24 hours
Sunny and Vinny are back to break down ChatGPT's latest innovation: the Code Interpreter. They demonstrate its capabilities with two separate datasets on EVs in America and US Bank Failures (10:58) before discussing how platforms like ChatGPT will revolutionize organizational efficiency (31:19). (0:00) Jason kicks off the show (2:06) ChatGPT's new code interpreter (9:24) OpenPhone - Start your free trial and get 20% off at https://openphone.com/twist (10:58) ChatGPT's Code Interpreter example with EV data (23:50) Coda - Get a $1,000 startup credit at https://coda.io/twist (25:13) The global GPU shortage (28:10) ChatGPT's Code Interpreter example with US Bank Failures (31:19) How ChatGPT will change the modern-day organization (38:55) Release - Get your first month free at https://release.com/twist (40:27) Getting more efficient with ChatGPT's new updates (47:53) Web browsing with ChatGPT (56:11) How this technology will enable people (1:04:15) How this has impacted Sunny FOLLOW Sunny: https://twitter.com/sundeep FOLLOW Vinny: https://twitter.com/vinnylingham FOLLOW Jason: https://linktr.ee/calacanis Subscribe to our YouTube to watch all full episodes: https://www.youtube.com/channel/UCkkhmBWfS7pILYIk0izkc3A?sub_confirmation=1 FOUNDERS! Subscribe to the Founder University podcast: https://podcasts.apple.com/au/podcast/founder-university/id1648407190 OTHER LINKS: https://twitter.com/jbrowder1/status/1652387444904583169?s=20
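For context on what the Code Interpreter demos in this episode are doing under the hood: the model writes and executes ordinary Python against the uploaded file. Here is a rough sketch of the kind of script it might generate for a dataset like the EV example; the file name and column names are assumptions for illustration, not the show's actual dataset.

```python
# Rough sketch of Code-Interpreter-style analysis; file and columns assumed.
import pandas as pd
import matplotlib.pyplot as plt

evs = pd.read_csv("ev_registrations.csv")  # e.g. columns: year, state, count

# Aggregate registrations per year and chart the trend.
per_year = evs.groupby("year")["count"].sum()
per_year.plot(kind="bar", title="US EV registrations by year")
plt.ylabel("registrations")
plt.tight_layout()
plt.savefig("ev_trend.png")
```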
This time our panellists dived into the implications of ChatGPT for ICSOs in the digital space. This exciting discussion addressed questions like "How do we integrate AI chatbots without losing people-centred approaches to CSO work?" and "What are these AI chatbots likely to do for us in the future?"
Guest Speakers
- Nkosinathi Mcetywa, Communications and Community Organiser at Civic Tech Innovation Network (CTIN), Moderator
- Adeboro Odunlami, Programme Director at Resilience Technologies
- Varoon Bashyakarla, Data Scientist and Statistician
- Edzai Zvobwo, Tech Advisor. He co-founded The Education Support Forum (TEDSF) and founded MathsGee, an online AI supplementary learning platform that supports remote learners
Download the transcript: https://bit.ly/3oBzqSQ
The most recent YCombinator W23 batch graduated 59 companies building with Generative AI for everything from sales, support, engineering, data, and more:Many of these B2B startups will be seeking to establish an AI foothold in the enterprise. As they look to recent success, they will find Glean, started in 2019 by a group of ex-Googlers to finally solve AI-enabled enterprise search. In 2022 Sequoia led their Series C at a $1b valuation and Glean have just refreshed their website touting new logos across Databricks, Canva, Confluent, Duolingo, Samsara, and more in the Fortune 50 and announcing Enterprise-ready AI features including AI answers, Expert detection, and In-context recommendations.We talked to Deedy Das, Founding Engineer at Glean and a former Tech Lead on Google Search, on why he thinks many of these startups are solutions looking for problems, and how Glean's holistic approach to enterprise probllem solving has brought so much success. Deedy is also just a fascinating commentator on AI current events, being both extremely qualified and great at distilling insights, so we also went over his many viral tweets diving into Google's competitive threats, AI Startup investing, and his exposure of Indian University Exam Fraud!Show Notes* Deedy on LinkedIn and Twitter and Personal Site* Glean* Glean and Google Moma* Golinks.io* Deedy on Google vs ChatGPT* Deedy on Google Ad Revenue* Deedy on How much does it cost to train a state-of-the-art foundational LLM?* Deedy on Google LaMDA cost* Deedy's Indian Exam Fraud Story* Lightning Round* Favorite Products: (covered in segment)* Favorite AI People: AI Pub* Predictions: Models will get faster for the same quality* Request for Products: Hybrid Email Autoresponder* Parting Takeaway: Read the research!Timestamps* [00:00:21] Introducing Deedy* [00:02:27] Introducing Glean* [00:05:41] From Syntactic to Semantic Search* [00:09:39] Why Employee Portals* [00:12:01] The Requirements of Good Enterprise Search* [00:15:26] Glean Chat?* [00:15:53] Google vs ChatGPT* [00:19:47] Search Issues: Freshness* [00:20:49] Search Issues: Ad Revenue* [00:23:17] Search Issues: Latency* [00:24:42] Search Issues: Accuracy* [00:26:24] Search Issues: Tool Use* [00:28:52] Other AI Search takes: Perplexity and Neeva* [00:30:05] Why Document QA will Struggle* [00:33:18] Investing in AI Startups* [00:35:21] Actually Interesting Ideas in AI* [00:38:13] Harry Potter IRL* [00:39:23] AI Infra Cost Math* [00:43:04] Open Source LLMs* [00:46:45] Other Modalities* [00:48:09] Exam Fraud and Generated Text Detection* [00:58:01] Lightning RoundTranscript[00:00:00] Hey everyone. Welcome to the Latent Space Podcast. This is Alessio, partner and CTO and residence at Decibel Partners. I'm joined by my, cohost swyx, writer and editor of[00:00:19] Latent Space. Yeah. Awesome.[00:00:21] Introducing Deedy[00:00:21] And today we have a special guest. It's Deedy Das from Glean. Uh, do you go by Deedy or Debarghya? I go by Deedy. Okay.[00:00:30] Uh, it's, it's a little bit easier for the rest of us to, uh, to, to spell out. And so what we typically do is I'll introduce you based on your LinkedIn profile, and then you can fill in what's not on your LinkedIn. So, uh, you graduated your bachelor's and masters in CS from Cornell. Then you worked at Facebook and then Google on search, specifically search, uh, and also leading a sports team focusing on cricket.[00:00:50] That's something that we, we can dive into. 
Um, and then you moved over to Glean, which is now a search unicorn in building intelligent search for the workplace. What's not on your LinkedIn that people should know about you? Firstly,[00:01:01] guys, it's a pleasure. Pleasure to be here. Thank you so much for having me.[00:01:04] What's not on my LinkedIn is probably everything that's non-professional. I think the biggest ones are I'm a huge movie buff and I love reading, so I think I get through, usually I like to get through 10 books ish a year, but I hate people who count books, so I should say the number. And increasingly, I don't like reading non-fiction books.[00:01:26] I actually do prefer reading fiction books purely for pleasure and entertainment. I think that's the biggest omission from my LinkedIn.[00:01:34] What, what's, what's something that, uh, caught your eye for fiction stuff that you would recommend people?[00:01:38] Oh, I recently, we started reading the Three Body Problem and I finished it and it's a three part series.[00:01:45] And, uh, well, my controversial take is I did not really enjoy the second part, and so I just stopped. But the first book was phenomenal. Great concept. I didn't know you could write alien fiction with physics so Well, and Chinese literature in particular has a very different cadence to it than Western literature.[00:02:03] It's very less about the, um, let's describe people and what they're all about and their likes and dislikes. And it's like, here's a person, he's a professor of physics. That's all you need to know about him. Let's continue with the story. Um, and, and I, I, I, I enjoy it. It's a very different style from, from what I'm used.[00:02:21] Yeah, I, I heard it's, uh, very highly recommended. I think it's being adapted to a TV show, so looking forward[00:02:26] to that.[00:02:27] Introducing Glean[00:02:27] Uh, so you spend now almost four years at gle. The company's not unicorn, but you were on the founding team and LMS and tech interfaces are all the reach now. But you were building this before.[00:02:38] It was cool, so to speak. Maybe tell us more about the story, how it became, and some of the technological advances you've seen. Because I think you started, the company started really close to some of the early GPT models. Uh, so you've seen a lot of it from, from day one.[00:02:53] Yeah. Well, the first thing I'll say is Glean was never started to be a.[00:02:58] Technical product looking for a solution. We were always wanted to solve a very critical problem first that we saw, not only in the companies that we'd worked in before, but in all of the companies that a lot of our, uh, a lot of the founding team had been in past their time at Google. So Google has a really neat tool that already kind of does this internally.[00:03:18] It's called MoMA, and MoMA sort of indexes everything that you'd use inside Google because they have first party API accessed who has permissions to what document and what documents exist, and they rank them with their internal search tool. It's one of those things where when you're at Google, you sort of take it for granted, but when you leave and go anywhere else, you're like, oh my God, how do I function without being able to find things that I've worked on?[00:03:42] Like, oh, I remember this guy had a presentation that he made three meetings ago and I don't remember anything about it. I don't know where he shared it. I don't know if he shared it, but I do know the, it was a, something about X and I kind of wanna find that now. 
So that's the core information retrieval problem that we set out to tackle, and when we started looking at this problem, we realized that enterprise search is actually not new.[00:04:08] People have been trying to tackle enterprise search for decades; pre-2000s, people were trying to build these on-prem enterprise search systems. But a few things now really allow us to build it well. A: you now have distributed Elasticsearch, so that really helps you do a lot of the heavy lifting on core infra.[00:04:28] But B: you also now have API support that's really nuanced on all of the SaaS apps that you use. Back in the day, it was really difficult to integrate with a messaging app; it didn't have an API, it didn't have any way to get the permissions information and the messaging information. But now a lot of SaaS apps have really robust APIs that really let you[00:04:50] index everything that you'd want. That's the second reason. And the third big macro reason why it's happening now, and why we're able to do it well, is that SaaS apps have just exploded. Every company uses, you know, 10 to a hundred apps. So the urgent need for information, especially with remote work and work from home, is just so critical that people expect this almost as a default that you should have in your company.[00:05:17] And a lot of our customers just say: hey, I can't go back to a life without internal search. And we think that's just how it should be. So that's the story of how Glean was founded. And a lot of the LLM stuff, it's neat that it's happening at the same time that we're trying to solve this problem, because it's definitely applicable, [00:05:37] and I'm really excited by some of the stuff we're able to do with it.[00:05:41] From Syntactic to Semantic Search[00:05:41] I was talking with somebody last weekend, and they were saying that over the last couple of years the web has gone from being syntax driven, where you Google with keywords for information retrieval, to being semantics driven, where the syntax is not as important;[00:05:55] it's how you actually phrase the question. And we just asked Sarah from Seek.ai on the previous episode about doing natural language queries, in her case more for business use cases than enterprise knowledge. So I'm curious what the enterprise of the future looks like. Is there going to be way fewer dropdowns and SQL queries and things like that,[00:06:19] and more of this virtual, almost person-like thing that embodies the company, that is an LLM in a way? But how do you do that without being able to surface all the knowledge that people have in the organization? Something like Glean is super useful for[00:06:35] that. Yeah, I mean, already today we see these natural language queries as well.[00:06:39] I will say that at this point it's still a small fraction of the queries. A lot of the queries are just: hey, what is, you know, a name of a project, or an acronym, or a name of a person, or someone you're looking for. Yeah, I[00:06:51] think actually the Glean website explains Glean's features very well.[00:06:54] When I watched the video...
Actually, the video wasn't that informative; the video was more like a marketing video. But the actual website shows screenshots of what you see there, which in my language is an employee portal that happens to have search, because you also surface things like collections, which proactively show me things without me searching anything.[00:07:12] Right. You even have Go links, which you copied, I think, from Google, right? Basically, in my mind, this is ex-Googlers missing Google internal stuff, so they just built it for everyone else. So,[00:07:25] well, I can comment on that. First, I should just plug that we have a new website as of today. I don't know how it's being received, so let me know. I think we just launched the new one, today or yesterday. Yeah,[00:07:38] it's[00:07:38] new. I opened it just now and it's different than yesterday.[00:07:41] Okay, it's today then. And yeah, so one thing that we find, and this is actually, I think, quite a big insight,[00:07:48] is that search in itself is not a compelling enough use case to keep people drawn to your product. It's easy to say Google Search is like that, but Google Search was also in an era where that was the only website people knew, and now it's not like that. When you are a new tool coming into a company, you can't sit on your high horse and say: of course you're going to use my tool to search.[00:08:13] No, they're not going to remember who you are. They're going to use it once and completely forget. To really get that retention, you need to go from being just a search engine to exactly what you said, Sean: an employee portal that does much more than that. And yeah, the Go links thing, I mean, yes, it is copied from Google.[00:08:33] I will say there's a completely separate startup called GoLinks.io that has also copied it from Google, and everyone misses Go links. It's very useful to be able to write a document and just say: go to go/this, and[00:08:50] that's where the document is. So we have built a big feature set around it. One of the critical ones I will call out is the feed: being able to see not just documents that are trending in your sub-organization, but also a limited set of documents we think you should see, as well as something we've now launched called Mentions, which is super useful: every place you've been tagged across all of your apps, in one place, over whatever time window.[00:09:14] So it's all of the hundred Slack pings you have, plus the Jira pings, plus the email, all of that in one place, which is super useful to have. So you did GitHub? Yeah, we do GitHub too; we get all the mentions.[00:09:28] Oh my God, that's amazing. I didn't know you had it; this is something I've wished for myself.[00:09:33] It's amazing.[00:09:34] It's still a little buggy right now, but I think it's pretty good, and we're going to make it a lot better as we go.[00:09:39] Why Employee Portals[00:09:39] This[00:09:39] is not in our preset list of questions, but I have one follow-up, which is: I've worked in quite a few startups now that don't have employee portals, and I've worked at Amazon, which had an employee portal, but it wasn't as beautiful or as smart as Glean's.[00:09:53] Why isn't this a bigger norm in all[00:09:56] companies?
Well, there are several reasons. I would say one reason is just the dynamics of how enterprise sales happens. I wouldn't say it's broken; it is what it is, but it doesn't always cater to employees being happy with the best tools. What it does cater to is different incentive structures, right?[00:10:16] If I'm an IT buyer, I have a budget, and for the hundred tools that are pitched to me all the time, I need to understand which ones really help the company. And the way those things are usually evaluated is: does it increase revenue, and does it cut cost? Those are the two biggest ones. And for software like Glean, a search portal or employee portal, generally bucketed in the productivity space, it's actually quite difficult to say: hey, here's a compelling use case for why we will cut your cost or increase your revenue.[00:10:52] It's just a softer argument you have to make there; that's the fundamental nature of the problem. Versus if you say: hey, we're a customer support tool. Everyone in SaaS knows customer support tools are the default thing you go to when you're looking for ideas, because they're easy to sell.[00:11:08] It's: here's a metric, how many tickets can your customer support agent resolve? We've built a thing that makes it 20% better; that means it's a million dollars in cost savings; pay us 50K; call it a deal. That's a good argument, a very simple, easy-to-understand argument. It's very difficult to make that argument with search, where it's: okay, you're going to see about 10 to 20 searches a day, that's going to save about this much time,[00:11:33] and that results in this much employee productivity. People just don't buy it as easily. So the first reaction is: oh, we work fine without it, why do we need this now? It's not like the company didn't work without this tool, and only when they have it do they realize what they were missing out on.[00:11:50] So it's a difficult thing to sell, in some ways. Even though the product is, in my opinion, fantastic, sometimes the buyer isn't easily convinced, because it doesn't obviously increase revenue or cut cost.[00:12:01] The Requirements of Good Enterprise Search[00:12:01] In terms of technology, can you talk about some of the stack? You see a lot of companies coming up now saying: oh, we help you do enterprise search. And it's usually, you know, embeddings used to provide context for an LLM query, mostly. I'm guessing you started closer to the vector side of things, maybe. Talk a bit about that and some learnings, and as founders try to build products like this internally, what should they think[00:12:27] about?[00:12:28] Yeah. So, actually, leading on from the last answer: one of the ways a lot of companies in the enterprise search space try to tackle the sales problem is to lean into how advanced the technology is, which is useful. It's useful to say we are AI-powered, LLM-powered, vector search, cutting edge, state of the art, yada yada yada,[00:12:47] put in all your buzzwords. That's nice, but the question of how often that translates to a better user experience is a fuzzy area, where it's honestly really hard even for users to tell. You can have one or two great queries and one really bad query and think: I don't know if this thing is smart,[00:13:06] and it takes time to evaluate and understand how a certain engine is doing.
So to that point, one of the things that we learned from Google, since a lot of us come from an ex-Google-search background, is that often with search it's not about how advanced or how complex the technology is; it's about the rigor and intellectual honesty that you put into tuning the ranking algorithm.[00:13:30] That's a painstaking, long-term, and slow process. At Google, until maybe 2017 or 2018, everything ran off almost no real AI, so to speak. It was just information retrieval at its core, very basic stuff from the seventies and eighties, with a bunch of ranking components stacked on top of it that do various tasks really, really well.[00:13:57] So one task in search is query understanding: what does the query mean? One task is synonymy: what other synonyms for this thing can we also match on? One task is document understanding: is this document itself a high-quality document or not, or is it some sort of SEO spam? (And admittedly, Google doesn't do so well on that anymore.) There are so many tough sub-problems that search breaks down into, and you just get each of those problems right to create a nice experience.[00:14:24] So to answer your question: we do vector search too, but it is not the only way we get results. We take a hybrid approach, using core IR signals, synonymy, and query augmentation with things like acronym expansion, as well as vector search, which is also useful. And then we apply our own layer of ranking understanding on top of that, which includes personalization.[00:14:50] If you're an engineer, you're probably not looking for Salesforce documents; you're probably looking for documents published or co-authored by people in your immediate team. Our understanding of all of your interactions with the people around you, our personalization layer, our good work on ranking, is what makes us[00:15:09] good. It's not: hey, drop in an LLM and embeddings and we become amazing at search. That's not how we think it[00:15:16] works. Yeah, I think there's a lot of polish that goes into quality products, and that's the difference you see between Hacker News demos and Glean, which is an actual, you know, search unicorn.[00:15:26] Glean Chat?[00:15:26] But also, is there a Glean chat coming? What do you think about the[00:15:30] chat form factor? I can't say anything about it. My politically correct answer is that we're experimenting with many technologies that use modern AI and LLMs, and we will launch what we think users like best.[00:15:49] Nice. You've had some media training,[00:15:51] huh? Yeah. Very well handled.[00:15:53] Google vs ChatGPT[00:15:53] We can move off of Glean and just go into Google Search. You worked on search for four years, and I've always wanted to ask: what happens when I type something into Google? I feel like you know more than others, and obviously there are things you cannot say, but I'm sure Google does a lot of the things that Glean does as well.[00:16:08] How do you think about this Google versus ChatGPT debate? Let's maybe start at a high level, based on what you see out there, because I think you see a lot of[00:16:15] misconceptions.
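Before the conversation turns to Google versus ChatGPT: the hybrid ranking Deedy just described, classic IR signals blended with embedding similarity and a personalization layer, can be made concrete with a minimal sketch. All of the weights, field names, and scoring functions below are illustrative assumptions, not Glean's actual stack.

```python
# Hedged sketch of hybrid ranking: a lexical IR score, an embedding
# similarity score, and a personalization boost blended into one number.
import math

def cosine(u, v):
    # Plain cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def hybrid_score(doc, query_terms, query_emb, user):
    # Lexical signal: naive term-overlap stand-in for BM25 or similar.
    lexical = sum(doc["text"].lower().count(t) for t in query_terms)
    # Semantic signal: similarity of precomputed document/query embeddings.
    semantic = cosine(doc["embedding"], query_emb)
    # Personalization: boost documents authored by the user's immediate team.
    personal = 1.0 if doc["author"] in user["team"] else 0.0
    # Hypothetical hand-tuned weights; tuning these is the hard part.
    return 0.5 * lexical + 0.4 * semantic + 0.1 * personal

def search(docs, query_terms, query_emb, user, k=10):
    ranked = sorted(docs, key=lambda d: hybrid_score(d, query_terms, query_emb, user),
                    reverse=True)
    return ranked[:k]
```

In a real system each of those one-line signals would itself be a tuned subsystem (query understanding, synonymy, acronym expansion), which is exactly the rigor-in-ranking point being made above.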
Yeah. So, okay, let me start with Google versus ChatGPT first. I think it's disingenuous if I don't state my own usage pattern, which is that I almost don't go back to Google for a large section of my queries anymore.[00:16:29] I just use ChatGPT. I am a paying Plus subscriber, and it's my go-to for a lot of the things I ask. I've also had to train my mind to realize that there's a whole set of questions in your head that you never realized the internet could answer for you, and now you're like: oh wait, I could actually ask this, and then you ask it.[00:16:48] So that's my current usage pattern. That being said, I don't think ChatGPT is the best interface or technology for all sets of queries. Humans are obviously very easily excited by new technology, but new technology does not always mean the previous technology was worse. The previous technology is actually really good for a lot of things. For search in particular, if you think about all the queries that come into Google Search, they fall into various query classes, depending on whatever taxonomy you want to use.[00:17:24] But one broad way of understanding the query classes is information-seeking versus exploratory. For exploratory queries, there are cases where Google does really well. For example, say you just want a list of songs by this artist in this year.[00:17:49] Google will probably be able to tell you that accurately close to a hundred percent of the time. Or say you want to know the showtimes of movies that came out today. Fresh queries, another query class: Google will be really good at those; chat, not so good. But if you look at information-seeking queries, you could even argue that if I ask for information about Donald Trump, maybe ChatGPT will spit out a reasonable-sounding paragraph that makes sense, but it doesn't give me enough stuff to click on and navigate to, like a news article,[00:18:25] and I kind of want to see a lot of stuff happening. So if you really break down the problem, it's not as easy as saying ChatGPT is a silver bullet for every kind of information need. Though for a lot of information needs, especially tail queries, long, never-before-seen queries like: hey, tell me the cheat code for this level, this boss, in Doom 3,[00:18:43] ChatGPT is going to blow it out of the water, because it will figure all of that out from random sparse documents and random Reddit threads and assemble one consistent answer for you, where it takes forever to find that kind of stuff on Google. For me personally, coding is the biggest use case: for anything technical,[00:19:02] I just go to ChatGPT, because parsing through Stack Overflow is just too mentally taxing. And I don't care if ChatGPT hallucinates a wrong answer, because I can verify that; I like seeing a coherent, nice answer that's a good starting point for my research on whatever I'm trying to understand.[00:19:20] Did you see the statistic the All-In guys have been citing, that Stack Overflow traffic is down 15%? Yeah, I did[00:19:27] see that.[00:19:28] Makes sense. I don't know if it's only because of ChatGPT, but yeah, sure, I believe[00:19:33] it.
No, the second part was just about: if some of that product search moves out of Google, that's obviously a big AdWords revenue driver. What are some of the implications in terms of the business[00:19:46] there?[00:19:47] Search Issues: Freshness[00:19:47] Okay,[00:19:47] so I would split this answer into two parts. The first part is just about freshness, because with the query you mentioned, the issue there is specifically being able to access fresh information; Google just blanket calls this freshness.[00:20:01] Today's understanding of large language models is that they cannot do anything that's highly fresh. You just can't train these things fast enough, and cost-efficiently enough, to constantly index new sources of data and serve them at the same time in any feasible way. That might change in the future, but today it's not possible.[00:20:20] The closest thing you can get to it is what the fancy term calls retrieval-augmented generation, which is a fancy way of saying: just do the search in the background, and then use the results to create the actual response. That's what Bing does today. So to answer the question about freshness, I would say it is possible to do with these methods, but those methods all involve using search in the backend to get the context to generate the answer.[00:20:49] Search Issues: Ad Revenue[00:20:49] The second part of the answer is the ad revenue. A lot of Google's ad revenue comes from the fact that, over the last two decades, it figured out how to put ad links on top of a search results page that users sometimes click. Now, the user behavior on a chat product is not to click on anything.[00:21:10] You don't click on stuff; you just read and you move on. And that, in my opinion, has severe impacts on the web ecosystem, on all of Google and all of technology, and on how we use the internet in the future. The reason is: one thing we take for granted is that this ad revenue, where everyone likes to say Google is bad, Google makes money off ads, yada yada yada, this ad revenue basically sponsored the entire internet.[00:21:37] You have Google Maps and Google Search and Photos and Drive and all of this great free stuff basically because of ads. Now, when you have this new interface, sure, it comes with some benefits. But if users aren't going to click on ads, and you replace the search interface with just chat, that can actually be pretty dangerous in terms of what it even means to create a website. Why would I create a website if no one's going to come to it?[00:21:59] If it's just going to be used to train a model, and then someone's going to spit out whatever my website says, then there's no incentive. And that makes the web ecosystem dwindle. In the end, it means less ad revenue.[00:22:15] And then the other existential question is: okay, I'm fine saying the incumbent Google gets defeated and there's this new hero, which is, I don't know, OpenAI and Microsoft, who reinvent the wheel. All of that is great, but how are they going to make money? They can make money off subscriptions, I guess,[00:22:31] but subscriptions are not nearly going to make you enough to replace what you can make on ad revenue. Even Bing today makes 11 billion off ad revenue. That's not a side product; it's a huge product. And they're not going to make 11 billion off subscriptions, I'll tell you that.
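The retrieval-augmented generation pattern from the freshness answer above is simple to sketch: run a search in the background, stuff the top results into the prompt, and let the model synthesize from them. Here, `web_search` and `llm_complete` are hypothetical stand-ins for whatever search and completion APIs you use; this shows the general shape of the pattern, not Bing's actual pipeline.

```python
# Hedged sketch of retrieval-augmented generation (RAG).
# `web_search` and `llm_complete` are assumed stand-in callables.
def answer_with_rag(question, web_search, llm_complete, k=5):
    # Step 1: fetch fresh documents; this is where freshness comes from.
    results = web_search(question, num_results=k)
    # Step 2: pack the retrieved snippets into a numbered context block.
    context = "\n\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}" for i, r in enumerate(results)
    )
    # Step 3: ask the model to answer only from the supplied sources.
    prompt = (
        "Answer the question using only the sources below. "
        "Cite sources as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```

Note that the model never needs to have been trained on today's data; it only synthesizes what the search step hands it, which is exactly the point being made about freshness.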
So even the challengers can't really replace search with chat.[00:22:51] And then there are some arguments around: okay, what if you start to inject ads in textual form? But in my view, if the natural user inclination in chat is not to click on something, they're clearly not going to click, no matter how much you try to inject click targets into your results.[00:23:10] So that's my long answer to the ads question. I don't really know; I just smell danger on the horizon.[00:23:17] Search Issues: Latency[00:23:17] You mentioned retrieval-augmented generation as well. Presumably Bing is literally just using the long context of GPT-4, taking the full text of all the links they find, dumping it in, and then generating some answer.[00:23:34] Do you think speed is a concern, or are people just willing to wait for something smarter?[00:23:40] I think it's a concern. On every single product I've worked on, there's an almost linear curve, at least for some section of it, that says: the more the latency, the less the engagement. So there's always going to be some drop-off.[00:23:55] So it is a concern, but with things like latency, I just presume that time solves them. You optimize stuff, you make things a little better, and the latency will come down with time. And it's a good time to mention that Bard, Google's equivalent LLM, just came out today. I haven't tried it, but I've been reading about it, and it's based off a model called LaMDA.[00:24:18] And LaMDA intrinsically actually does this: it queries what they call a tool set, and the tool set queries search, or a calculator, or a compiler, or a translator, things that are good at factual, deterministic information. And then it keeps changing its response depending on the feedback from the tool set, effectively doing something very similar to what Bing does.[00:24:42] Search Issues: Accuracy[00:24:42] But I like their framing of the problem, where it's not just search; it's any given set of tools. Which is similar to a Facebook paper called Toolformer, where you can think of language as one aspect of the problem, and language interfaces with computation, which is another aspect of the problem.[00:24:58] And if you can separate those two, the language part just talks to these tools and figures out how to phrase things, so it's not really coming up with the answer itself. Their claim is that something like GPT-4, to the extent it achieves factual accuracy without search, does it just by memorizing facts, and that doesn't scale.[00:25:18] It's literally somewhere in the whole model. It knows that the CEO of Tesla is Elon Musk; it just knows that. But it doesn't know it as a verifiable fact; it just knows that usually when it sees "CEO" and "Tesla", it sees "Elon". That's all it knows. So the abstraction of language model to computational unit or tool set is an interesting one that I think is going to be explored more by all of these engines.[00:25:40] And the latency, you know, it'll improve.[00:25:42] I think you're focusing on the right things there. I actually saw another article this morning about the memorization capability.
You know how GPT-4 is marketed a lot on its ability to answer SAT questions and GRE questions and bar exams? We covered this in our benchmarks podcast, Alessio, but I forgot to mention that all of these answers are out there and were probably memorized,[00:26:05] and if you change them just a little bit, model performance will probably drop a lot.[00:26:10] It's true. I think the most compelling proof of what you just said is the Codeforces one, where somebody tweeted about the 2021 cutoff: everything before 2021, it solves; everything after,[00:26:22] it doesn't. And I thought that was interesting.[00:26:24] Search Issues: Tool Use[00:26:24] It's just dumb memorization. I'm interested in Toolformer, and I'm interested in ReAct-type patterns. Zapier just launched a natural language integration with LangChain. Are you able to compare and contrast the approaches you like when it comes to LLMs using[00:26:36] tools?[00:26:37] I think it's not boiled down to a science enough for me to say anything useful. Everyone is at a point in time where they're just playing with it. There's no way to reason about what LLMs can and can't do, and most people are just throwing things at a wall and seeing what sticks.[00:26:57] And if anyone claims to be doing better, they're probably lying, because no one knows how these things behave. You can't predict what the output is going to be; you just think: okay, let's see if this works, this is my prompt. And then you measure, and you're like: oh, that worked. Things like ReAct and Toolformer are really cool,[00:27:16] but those are just examples of things people have thrown at a wall that stuck. Well, I mean, provably it works; it works pretty well. I will say that one thing people forget when they're looking at cutting-edge stuff, and it's less about the framing of what cool ways you can use LLMs, is that a lot of these LLMs can be used to generate synthetic data to bootstrap smaller models. It's the less sexy part of the space,[00:27:44] but I think that stuff is really, really cool. For example, say I want to tag entities in a sentence, the very simple, classical natural language problem of NER. Before, I had to gather training data, train a model, tune the model, all of this other stuff. Now what I can do is throw GPT-4 at it to generate a ton of synthetic data, which actually looks really good.[00:28:11] And then I can either train whatever model I wanted to train before on this data, or I can use something like low-rank adaptation to distill this large model into a much smaller, cost-effective, fast model that does that task really well. In terms of production-ready natural language systems, that is amazing; this is stuff you couldn't do before.[00:28:35] You would have teams working for years to solve NER, and that's just what that team does. There's a great viral Reddit thread asking whether all the NLP teams at Big Tech are doomed, and yeah, to an extent, now you can do this stuff in weeks, which is[00:28:51] huge.
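A minimal sketch of the synthetic-data bootstrapping Deedy describes: prompt a strong LLM for labeled NER examples, keep only the ones that parse, and use the result to train a small model. `llm_complete` is an assumed stand-in for a completion API, and the prompt and JSON schema are made up for illustration; they are not a specific product's interface.

```python
# Hedged sketch of bootstrapping an NER training set from an LLM.
import json

PROMPT = """Generate one sentence mentioning a company and a person.
Reply as JSON: {"text": "...", "entities": [[start, end, "LABEL"], ...]}"""

def make_synthetic_examples(llm_complete, n=1000):
    examples = []
    for _ in range(n):
        raw = llm_complete(PROMPT)            # one synthetic labeled example
        try:
            examples.append(json.loads(raw))  # keep only parseable outputs
        except json.JSONDecodeError:
            continue                          # discard malformed generations
    return examples

# The resulting examples can then train whatever small NER model you like,
# or fine-tune one with LoRA adapters, at a fraction of GPT-4's serving cost.
```

The quality gate here is deliberately crude (JSON parseability); in practice you would also spot-check and deduplicate the generations before training on them.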
[00:28:52] Other AI Search takes: Perplexity and Neeva[00:28:52] What about some of the other AI-native search products, like Perplexity and Elicit? Have you played with any of them?[00:29:00] Any thoughts on[00:29:01] them? Yeah, I have played with Perplexity and Neeva. I think both of those products try to do, again, search-results synthesis. Personally, I think Perplexity might be doing something else now, but I don't see any of those companies or products disrupting OpenAI or ChatGPT or Google or whatever prominent search engine with what they do, because they're all built off basically the Bing API, or their own version of an index, and their search itself is not good enough, and there's not a compelling enough use case, I think, to use those products.[00:29:40] I don't know how they would make money. A lot of Neeva's way of making money is subscriptions; Perplexity, I don't think, has ever turned on the revenue dial. I just have more existential concerns about those products actually functioning in the long run. So I see them as nice, nice to play with.[00:29:56] It's cool to see the cutting-edge innovation, but I don't really see them becoming long-lasting, widely used products.[00:30:05] Why Document QA will Struggle[00:30:05] Do you have any idea of what it might take to actually build a new kind of company in this space? Google's big thing was PageRank, right? That was the one thing that set them apart,[00:30:17] and people had tried doing search before. Do you have an intuition for what the LLM-native PageRank is going to be, to make something like this exist? Or have we kind of hit the plateau when it comes to search innovation?[00:30:31] So, I talk to so many of my friends who are obviously excited about this technology as well, and many of them are starting LLM companies.[00:30:38] You know how many companies in the YC Winter '23 batch are LLM companies? Crazy, half of them, right? It's ridiculous. But what I think everyone's struggling with is: what is your advantage, what is your moat? I don't see it for a lot of these companies, and it's unclear.[00:30:58] I don't have a strong intuition. My sense is that the people who focus problem-first usually get much further than the people who focus solution-first, and there are way too many companies that are solution-first. Which makes sense; it's always been a big Achilles heel of Silicon Valley.[00:31:16] We're a bunch of nerds who live in a whole different dimension. The problem is nobody else can relate to us, and we can't relate to their problems either. So we look at tech first, not problem first, a lot, and I see a lot of companies just do that.[00:31:32] I'll tell you one that's quite entertaining to me. A very common theme is: hey, LLMs are cool, that's awesome, we should build something. Well, what should we build? Okay, consumer is cool, we should build consumer. Then it's: ah, nah, man, consumer's pretty hard,[00:31:49] it's going to be a Clubhouse, it's going to blow up, I don't want to blow up, I just want to build something that's pretty easy to be consistent with. We should go enterprise. Cool, let's go enterprise. So you go enterprise, and it's: okay, we brought LLMs to the enterprise, now what problem do we tackle? And it's: okay, well, we can do Q&A on documents.[00:32:06] People know how to do that, right? We've seen a couple of demos of that.
So they build it, they build Q&A on documents, and then they struggle with selling, because people just ask: hey, but I don't ask questions of my documents. You realize this is just not a flow that I do?[00:32:22] I ask questions in general, but I don't ask them of my documents. And also, what documents can you ask questions of? And they'll say: well, any of them. Can I ask them of all of my documents? Well, sure, if you give us all your documents, you can ask anything.[00:32:39] And then they'll say: okay, how will you take all my documents? Oh, it seems like we have to build some sort of indexing mechanism. And from one thing to the next, you get to a point where it's: we're building enterprise search, and we're building an LLM on top of it, and that is our product. Or you go to MLOps: I'm going to help you host models, I'm going to help you train models.[00:33:00] And I don't know, it all seems very solution-first and not problem-first. So the only thing I would recommend is to think about the actual problems, talk to users, and understand what this can be useful for. It doesn't have to be that sexy in how it's used, but if it works and solves the problem, you've done your job.[00:33:18] Investing in AI Startups[00:33:18] I love that whole evolution, because I think quite a few companies are independently finding this path and going down this route to build a glorified search bot. We actually interviewed a very problem-focused builder, Mickey Friedman, who is very focused on product-placement image generation,[00:33:34] and she's not focused on anything else in image generation, just product placement and branding. And I think that's probably the right approach. And if you think about Jasper: out of all the GPT-3 companies when GPT-3 first came out, they built focusing on writers on Facebook and didn't even market on Twitter,[00:33:56] so most people haven't heard of them. I think it's a timeless startup lesson, but it's something to remind people of when they're building with language models. As an investor, and you are an investor, you're a scout with me, doesn't that make it hard to invest in anything? Because[00:34:10] mostly the incumbents will get to the innovation faster than startups will find traction.[00:34:16] Really? Okay, this is going to be a hot take too. My investing in people, especially early, is often governed by my intuition of how they approach the problem and their experience with the technology, and pretty much solely that. I don't[00:34:37] really pretend to be an expert in the industry or the space; that's their problem to know. If I think they're smart and they understand the space better than me, then as long as they've thought through enough of the business stuff, the market, and everything else, I'm convinced. I typically stray away from exactly what I just said: founders who are like, LLMs are cool and we should build something with them. That's not usually very convincing to me; that's not a thesis. But I don't concern myself too much with pretending to understand what this space means. I trust them to do that.
If I'm convinced that they're smart and they've thought about it well, then I'm pretty convinced they're a good person to[00:35:20] back.[00:35:21] Cool.[00:35:21] Actually Interesting Ideas in AI[00:35:21] Any kind of super novel idea that you want to shout out?[00:35:25] There are a lot of interesting explorations going on. Okay, I'll preface this with: anything in enterprise I just don't think is cool. You can't call it cool, man; you're building products for businesses.[00:35:37] Glean is pretty cool; I'm impressed by Glean. That's what I'm saying: it's cool for Silicon Valley, but it's not cool cool. You're not going to go to a dinner party with your parents and say: hey Mom, I work on enterprise search, isn't that awesome? What about: all my[00:35:51] notifications in one place? Whoa![00:35:55] So I'll start by saying that in my head, cool means the world finds this amazing, and it has to be somewhat consumer. And I do think the ideas being played with, like Quora playing with Poe, are kind of strange to think about and may not stick as is, but I like that they're approaching it with a very different framing, which is: hey, how about you talk to this chatbot, but let's move out of this world where it's a messaging app. It's not WhatsApp or Telegram;[00:36:30] you are actually generating some piece of content that everybody can now make use of. Is there something there? Not clear yet, but it's an interesting idea. I can see that being something where people just learn, or see cool things that GPT-4 or chatbots have said.[00:36:49] The image space is very contrasted to the language space. I don't even begin to understand the image space; everything I see just blows my mind. I don't know how Midjourney gets from six fingers to five fingers. I don't understand it. It's amazing; I love it. I don't understand what the value is in terms of revenue;[00:37:08] I don't know where the markets are in image. But I do think that's way, way cooler, because that's a demo where, and I tried this, I showed GPT-4 to my mom, and my mom's like: yeah, this is pretty cool, it does some pretty interesting stuff. Then I showed her the image one, and she's just like: this is unbelievable,[00:37:28] there's no way a computer could ever do this. She just could not digest it, and I love when you see those interactions. So I do think the image world is a whole different beast. In terms of coolness, there's a lot more cool stuff happening in image and video; multimodal, I think, is really, really cool. But I haven't seen too many startups where I'm like: wow, that's amazing.[00:37:51] Oh, ElevenLabs. I'll mention ElevenLabs is pretty cool; they're the only ones I know that are doing... Oh, the voice synthesis. Have you tried it? I've only played with it. I haven't really tried generating my own voice, but I've seen some examples, and it sounds really, really awesome. I've heard[00:38:06] that Descript is coming out with some stuff as well to compete, because, yeah, this is definitely the next frontier in terms of podcasting.[00:38:13] Harry Potter IRL[00:38:13] One last thing I will say on the cool front: I think there is something to be said about
a product that brings together all these disparate advancements in AI. And I have a view on what that looks like; I don't know if everyone shares it. If you bring together image generation, voice recognition, language modeling, TTS, and all of the other image stuff they can do with CLIP and DreamBooth, like putting someone's actual face in it,[00:38:41] what you can actually make, and this is my view of it, is the Harry Potter picture come to life: a digital stand where there's a person who's capable of talking to you in their voice, in understandable dialogue, the way they actually speak. You could just walk by, they'll look at you, you can say hi, and they'll say hi back.[00:39:03] They'll start talking to you; you start talking back. That's my wild science-fiction dream, and I think the technology exists to put all of those pieces together. The implications, for people who are older, or for preserving people over time, are huge. This could be a really cool thing to productionize.[00:39:23] AI Infra Cost Math[00:39:23] There's one more side of you that tweets about numbers and math, AI math, essentially, is how I think of it. What got you into talking about costs and math, just first principles of how to think about language models?[00:39:39] One of my biggest beefs with big companies is how they abstract the cost away from all the engineers.[00:39:46] When you're working on Google Search, I can't tell you a single number that is cost-related at all. I just don't know the cost numbers. It's so far down the chain that I have no clue how much it actually costs to run search, or how much these various things cost, aside from what the public knows. And I found that very annoying, because when you're building a startup, particularly maybe an enterprise startup, you have to be extremely cognizant of cost, because that's your unit economics.[00:40:03] Your primary cost is the money you spend on infrastructure, not your actual labor costs. The whole thesis is that the labor doesn't scale, but the infra[00:40:21] does scale. So you need to understand how your infra costs scale. When it comes to language models, these things are so compute-heavy, but none of the papers talk about cost either, and it just bothers me. Why can't you just tell me how much it cost you to build this thing?[00:40:39] It's not that hard to say, and it's also not that hard to figure out. They give you everything else: how many TPUs it took, how long they trained it for, and all of that other stuff. But they don't tell you the cost. And I've always been curious, because all anybody ever says is that it's expensive, and a startup can't do it, and an individual can't do it.[00:41:01] So the natural question is: okay, how expensive is it? And that's the background behind why I started doing some more AI math. The tweet you're probably talking about is the one where I compare the cost of LLaMA, which is Facebook's LLM, to PaLM, with my best estimates.
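For readers who want to reproduce this kind of estimate, here is a hedged back-of-envelope version of the standard training-cost arithmetic, using the common rule of thumb of roughly 6 FLOPs per parameter per token. The model size, token count, utilization, and GPU price below are illustrative assumptions, not the numbers behind Deedy's tweet.

```python
# Hedged back-of-envelope estimate of LLM final-training-run cost.
params = 65e9          # assumed model size (LLaMA-65B-class)
tokens = 1.4e12        # assumed training tokens
flops = 6 * params * tokens              # ~5.5e23 FLOPs total

peak = 312e12          # A100 bf16 peak FLOPs/s (published spec)
utilization = 0.45     # assumed realistic hardware utilization
gpu_seconds = flops / (peak * utilization)
gpu_hours = gpu_seconds / 3600           # ~1.1 million GPU-hours

price_per_hour = 1.50  # assumed bulk-deal price per A100-hour
cost = gpu_hours * price_per_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.1f}M")
# At ~$1.50/hr this lands near $1.6M; at on-demand prices closer to $4/hr
# it lands near $4M. That 2-3x spread is exactly what the "you should have
# taken this cost per hour" objections below are arguing about.
```

The same arithmetic with a bigger parameter count and token budget is how you get estimates in the tens of millions for a PaLM-scale model.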
The only thing I'll add to that is that it's quite tricky to even talk about these things publicly, because you get slammed in the comments by people saying: don't you know the assumption you made is completely BS, because you should have taken this other cost per hour,[00:41:42] because obviously people do bulk deals. And yeah, I have 280 characters; this is what I could say. But ballpark, I think I got close. I'd like to imagine I took an upper bound and might have been off by 2x on the lower side. My quote was 4 million dollars for LLaMA and 27 for PaLM.[00:42:01] In fact, later today I'm going to do one on Bard. Oh, the exclusive is that it's 4 million for Bard, too.[00:42:10] Nice. Don't you think that's actually not a lot? Like, it's a drop in the bucket for these[00:42:17] guys. One of the valuable things to note when you're talking about this cost is that it's the cost of the final training step;[00:42:24] it's not the cost of the entire process. A common rebuttal is: well, yeah, this is your cost for the final training run, but in total it's about 10x this amount, because you have to experiment, you have to tune hyperparameters, you have to understand different architectures, you have to experiment with different kinds of training data.[00:42:43] And sometimes you just screw it up and you don't know why, and you spend a lot of time figuring out why you screwed it up. That's where the actual cost buildup happens, not in the one final step where you train the final model. So assuming something like a 10x on top of this is, I think, fair for how much it would actually cost a startup to build this from scratch,[00:43:03] I would say.[00:43:04] Open Source LLMs[00:43:04] How do you think about open source in all this, then? A lot of people's big 2023 prediction is an open-source LLM with performance comparable to the GPT-3 model. But who foots the bill for the mistakes? When somebody opens a support request that something's not good,[00:43:25] it doesn't really cost people much outside of, like, a GitHub Actions run, as people try training these things separately. Do you think open source is actually bad here, because you're wasting so much compute on so many people trying to do their own thing? Or do you think it's better to have a centralized team that organizes these experiments?[00:43:43] Any thoughts there? I have some thoughts. The easiest comparison to make is to the image generation world, where you had Midjourney and DALL-E come out first, and then you had Emad come out with Stability, which was completely open source.[00:44:06] But the difference there is that Stable Diffusion you can pretty much run on your own machine, and it's okay, it works pretty fast. So the entire concept of open-sourcing it worked: people made forks, fine-tuned it on a bunch of different random things, and made variants of Stable Diffusion that could do[00:44:25] a bunch of things. So, agnostic of the general ethical concerns of training on everyone's art, I thought the Stability thing was a cool addition to the trade-offs between different models that you can have in image generation. For text generation,
we're seeing an equivalent effect with LLaMA and Alpaca. LLaMA is Facebook's model, which they didn't really open source, but then the weights got leaked, and people cloned them, tuned them using GPT-generated synthetic data, and made Alpaca.[00:44:50] The version I think that's out there is only the 7-billion-parameter one. And then this crazy European C++ god came along and said: you know what, I'm going to write this entire thing in C++ so you can actually run it locally and not have to buy GPUs.[00:45:13] A combination of those things, plus of course a lot of people doing work to optimize these models so they actually run quickly, and we can get into details there, has enabled people to actually run semi-good models on their own computers. I don't have any comments on energy usage and all of that; I don't really have an opinion on it. I think the fact that you can run a local version of this is just really, really cool, but also supremely dangerous, because with images, conceivably, people can tell what's fake and what's real, even though there are some concerns there as well. But for text, you can do a lot of really bad things with your own text generation algorithm. If I wanted to make somebody's life hell, I could spam them in the most insidious ways, with all sorts of different kinds of generated text, indefinitely, which I can't really do with images.[00:46:02] I don't know; I find it somewhat ethically problematic, in the sense that the power is too much for an individual to wield. But there are some libertarians who say: why should only OpenAI have this power? I want this power too. So there are merits to both sides of the argument. I think it's generally good for the ecosystem.[00:46:20] It will get faster, the latency will get better, and the models may never reach the size of the cutting edge that's possible, but they could be good enough to do 80% of the things a bigger model can do. And I think that's a really good start for innovation. You could just have people come up with stuff, instead of companies, and that always unlocks a whole vector of innovation that didn't previously exist.[00:46:45] Other Modalities[00:46:45] That was a really good conclusion. I want to ask follow-up questions, but that was also a really good place to end it. Were there any other AI topics you wanted to[00:46:52] touch on? I think Runway ML is the one company I didn't mention, and that one's one to look out for;[00:46:58] they're doing really cool stuff in video editing with generative techniques. People often talk about the OpenAIs and the Googles of the world, and Anthropic and Claude and Cohere, and Midjourney and all the image stuff. But the places people aren't paying enough attention to, that will get a lot more love in the next couple of years:[00:47:19] better Whisper, so better streaming voice recognition; better TTS, so some open-source version of ElevenLabs that people can start using. And then the frontier is multimodality and video. Can you do anything with videos? Can you edit videos?
Can you stitch things together into videos from images? All sorts of cool stuff.[00:47:40] And then there's the long tail of companies like Luma that are working on 3D modeling with generative use cases, taking an image and creating a 3D model from nothing. That's pretty cool too, although the practical use cases are a little less clear to me. So that kind of covers the entire space, in my head at least.[00:48:00] I[00:48:00] like using the Harry Potter image, the moving and speaking pictures, as an end goal. I think that's something consumers can really get behind as well. That's super cool.[00:48:09] Exam Fraud and Generated Text Detection[00:48:09] To double back a little before we go into the lightning round, I have one more thing, which is relevant to your personal story but also relevant to our debate, which is a nice blend.[00:48:18] You're concerned about the safety of everyone having access to language models, and the potential harm that can be done there. My guess is that you're also not that positive on watermarking techniques for generated language, right? Like randomly sprinkling weird characters in so that people can see that text was generated by an AI model. But you also have some personal experience with this, because you found manipulation in the Indian exam board, which might be a similar story. I don't know if you have any thoughts about watermarking and manipulation, like ethical deployments of[00:48:55] generated data.[00:48:57] Well, I think those two things are a little separate. One, I would say, is watermarking text data, where there are a couple of different approaches. I think there is actual value to that, because, from a purely technical perspective, you don't want models to train on stuff they've generated;[00:49:13] that's kind of bad for models. Yes. And two, obviously, you don't want people to keep using ChatGPT for, I don't know, all their assignments and never be caught. Maybe you do, maybe you don't. But it seems valuable to at least be able to tell that a piece of text is machine-generated; just ethically, that seems like something that should exist.[00:49:33] So I do think watermarking is a good direction of research, and I'm fairly positive on it. I actually think people should standardize how that watermarking works across language models, so that everyone can detect and understand language models, and it's not just OpenAI detecting its own models but not the other ones, and so on.[00:49:51] So that's my view on that. And then, transitioning into the exam data: this is a really old one, but it's one of my favorite things to talk about. In America, as you know, the way it usually works is you take your SAT exam, you take a couple of APs, you do your school grades, you apply to colleges, and you do a bunch of fluff to try to prove how good you are at everything.[00:50:10] And then you apply to colleges, and it's a weird decision based on a hundred other factors, and they decide whether you get in or not. But if you're rich, you're basically going to get in anyway, and if you're a legacy you're probably going to get in, and there's a whole bunch of stuff going on.[00:50:23] And I don't think the system is necessarily bad; it's just really complicated.
And some of the things about it are weird. In India, and in a lot of the non-developed world, people are like: yeah, okay, we can't scale that. There's no way we can have enough people non-rigorously evaluate this; there's going to be too much corruption, and it's going to be terrible in the end, because people are just going to pay their way in.[00:50:45] So usually it works in a very simple way: you take a standardized exam. Sometimes you have many exams, sometimes an exam per subject, sometimes just one for everything. You get ranked on that exam, and depending on your rank, you get to choose the quality and the kind of thing you want to study.[00:51:03] The "kind of thing" part always surprises people in America, where it's not like, oh, it's glory land, you walk in and say: I think this is interesting and I want to study it. No, in most of the world it's: you're not smart enough to study this, so you're probably not going to study it.[00:51:18] There's a rank order of things that you need to be smart enough to do. So it's different, and therefore these exams are much more critical to the functioning of the system. When there's fraud, it's not a small part of your application going wrong; it's your entire application going wrong.[00:51:36] And that's just me explaining why this is severe. Now, one such exam is the one you take in school. It's called a board exam: you take one in the 10th grade, which doesn't really matter for much, and then you take one in the 12th grade, when you're about to graduate, and that decides[00:51:53] where you go to college, for a large set of colleges, not all, but a large set. Based on your top-five average, you're slotted into a different stream in a different college. And over time, because of the competition between the two boards, which are a duopoly, there's no standardization.[00:52:13] So each is trying to give more marks than the other to attract more students into their board, because that means you can then claim: oh, you're going to get into a better college if you take our exam and don't go to a school that administers the other exam. What? So everyone knew that was happening, ish, but there was no data to back it up.[00:52:34] But when you actually take this exam, as I did, you start realizing that the numbers, the marks, make no sense, because you're looking at a kid who's also in your class and thinking: dude, this guy's not smart. How did he get a 90 in English? He's not good at English; he can't even speak it. You cannot give him a 90.[00:52:54] You gave me a 90; how did this guy get a 90? So everyone has their anecdotal "this doesn't make any sense" moments with this exam, but no one has access to the data. So, way back when, I realized they have very little security surrounding the data: the only thing you need to put in to get access is your roll number.[00:53:15] And as long as you generate the right set of roll numbers, you can get everybody's results. Also, unlike in America, exam results aren't treated with any level of privacy. In India, it's very common to post the entire class's results on a bulletin board.[00:53:32] You just see how everyone did, and you shame the people who did badly. That's just how it works.
It's changed over time, but that's fundamentally a cultural difference. And so I scraped all these results, published them, and did some analysis, and what I found was a couple of very insidious things.[00:54:01] One: if you plot the distribution of marks, you would generally expect some sort of skewed but pseudo-normal distribution, a big peak that falls off on both ends. But you see two interesting patterns. The first, the most obvious one, is grace marks. The pass grade is 33, and nobody got between 29 and 32, because for every single exam they just made you pass; they just rounded you up to 33.[00:54:21] Okay, I'm not that concerned about whether you give grace marks. It's kind of messed up that you do it, but fine: you want to pass a bunch of people who deserve to fail, do it. The other, more concerning thing was between 33 and 93. That's about 61 numbers, and 30 of those numbers were just missing, as in nobody got a 91 on this exam, in any subject, in any year.[00:54:44] How does that happen? You don't get a 91, you don't get a 93, 89, 87, 85, 84; some numbers were just missing. At first, when I saw this, I thought: this is definitely some bug in my code; there's no way a 91 never happened. And I remember I asked a bunch of my friends: dude, did you ever get a 91 in anything?[00:55:06] And they're like, no. And it just unraveled that this is obviously problematic, because it means they're screwing with your final marks in some way or another, and they're not transparent about how they do it. Then I did the same thing for the other board, and we found something similar there, but not the same.[00:55:24] The problem there was a huge spike at 95, and then I realized what they were doing: they'd offer various exams, and to standardize, they would blanket-add a raw number. So if you took the harder math exam, everyone would get plus 10, arbitrarily. This is not revealed or publicized; it's just: that was the harder exam, you all get plus 10, but it's capped at 95.[00:55:41] That's just a stupid way to standardize. It doesn't make any sense, and they're not transparent about it. And it affects your entire life, because this is what gets you into college. If you add the two exams up, that's 1.1 million kids taking it every year.[00:56:02] So that's a lot of people's lives you're screwing with by not understanding numbers and not being transparent about how you're manipulating them. So that was the thesis. In my view, looking back on it 10 years later, and it's been 10 years at this point, the media never did justice to it, because, to be honest, nobody understands statistics.[00:56:23] It became a big issue at the time, and there was a big Supreme Court or High Court ruling that said: hey, you guys can't do this. But there's no transparency, so there's no way of actually ensuring they're not still doing it. They just added a level of password protection, so now I can't scrape it anymore.[00:56:40] They probably do the same thing, and it's probably still as bad, but people aren't raising an issue about it. It's really hard to make people understand the significance of it, because people are so compelled to just lean into the narrative that exams are b******t and we should never trust exams.
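The two anomalies described above, an empty band just below the pass grade and whole integers that never occur, are easy to test for once the marks are scraped. A minimal sketch follows; the input format, field names, and thresholds are hypothetical, chosen only to mirror the 33/93 figures in the story.

```python
# Hedged sketch of the grace-marks and missing-integer checks.
from collections import Counter

def find_anomalies(marks, pass_grade=33, upper=93):
    # marks: a flat list of integer scores (0-100), one per student per subject.
    counts = Counter(marks)
    # Grace-marks signature: nobody scores just below the pass grade.
    grace_band = [m for m in range(pass_grade - 4, pass_grade) if counts[m] == 0]
    # Missing-integer signature: scores between 33 and 93 that never occur.
    missing = [m for m in range(pass_grade, upper + 1) if counts[m] == 0]
    return grace_band, missing
```

Against a million-plus genuine scores, dozens of empty integers between 33 and 93 would be statistically implausible under any smooth distribution, which is exactly why the pattern pointed to post-hoc manipulation rather than a scraping bug.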
The world will never be the same, people said after ChatGPT 3 was launched. And only a few months later, a new and greatly improved version arrived. What will this lead to, and how should the law handle it? Can legal advice become cheap and accessible to everyone? Will lawyers and other professions become entirely superfluous? Who is liable if ChatGPT causes errors or harm? For example, if the legal advice on divorce, or the financial advice on investments, that you receive and follow turns out to be wrong? Postdoc Runar Hilleren Lie and Professor Tobias Mahler are both experts on law and technology at the Faculty of Law in Oslo. They explain what ChatGPT may lead to, how it is regulated, the lack of rules for such programs, and what should happen going forward. Here is ChatGPT's own suggested intro for this episode: Host: Welcome to a new episode of Jusspodden! Today we will explore the legal challenges of ChatGPT, the advanced language model developed by OpenAI. How does the use of ChatGPT affect questions of privacy, liability, and fairness? What potentially illegal activities could ChatGPT be used for? And how can legislation and regulation be adapted to address these challenges in a world of rapidly evolving artificial intelligence? Let's dive into this exciting topic and discuss how ChatGPT may affect the legal system and society as a whole.
Amazon is the latest big tech company to enter the generative artificial intelligence race. But unlike OpenAI, Google, and Microsoft, the company is looking one step earlier: it will make it easier for other companies to build their own ChatGPTs. In this episode of Agora em 10, we explain in detail the impact of Amazon's strategy and bring you the main news from the innovation and startup ecosystem. Hit play to check it out! Also in this episode... Ok, Ok: X - Is Elon Musk planning to build a super app out of Twitter? Thermometer: Hot: LinkedIn now provides verification badges. Warm: Tok&Stok and a possible acquisition by Mobly. Cold: Will Shein, AliExpress, and Shopee be taxed? Episodes of Agora em 10 are available every Friday at 11 a.m. Hosted by Tainá Freitas, with a script by the StartSe Content team and editing by Aerolitos. StartSe, the platform for the education of now. www.startse.com
Hello and welcome to t3n Daily for April 14! Today's topic is Twitter's new Super Follow features. Also: ChatGPT's evil cousin, the FTX revival, Nintendo gets a GameStop employee fired, and Italy's demands to OpenAI.
REUPLOAD as a podcast. In the first part of episode 5, we discuss the latest developments in artificial intelligence (AI), including the open letter from researchers, Elon Musk, and other public figures calling for a pause in training models more powerful than GPT-4. We look at different perspectives on AI and discuss the statement by US President Joe Biden emphasizing the importance of risk management and safety. We analyze the potential threat AI poses to 300 million jobs and the German Ethics Council's demand that human jobs not be endangered. Our discussion also covers AI safety and ethics, including the US Bill of Rights for AI, which emphasizes democratic values and American principles but remains vague and without concrete instruments. We also talk about growing public awareness of AI, machine learning and AI in everyday products such as social media, phones, YouTube, and Google. We discuss the role of GPT-3 and generative AIs that produce things themselves, and the challenges around data protection and defensive attitudes toward AI. The episode also covers fake images and the need for regulation, OpenAI's statement on youth protection, data protection, and improving its AI, and the need for real-world testing in AI development.
Link list:
Goldman Sachs report: https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
Stanford University: https://hai.stanford.edu/news/2023-state-ai-14-charts
Reactions to the open letter (6-month training pause for AI systems better than GPT-4):
Yoshua Bengio: https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/
Yann LeCun and Andrew Ng: https://www.youtube.com/watch?v=BY9KV8uCtj4
Gary Marcus: https://www.youtube.com/watch?v=eJGxIH73zvQ
OpenAI's introduction of the ChatGPT plugins: https://openai.com/blog/chatgpt-plugins
OpenAI's blog post on its approach to safety and privacy: https://openai.com/blog/our-approach-to-ai-safety
German data protection authorities examine ChatGPT: https://www.handelsblatt.com/dpa/kuenstliche-intelligenz-datenschuetzer-der-bundeslaender-nehmen-chatgpt-unter-die-lupe/29080736.html
Other EU states are investigating as well: https://www.politico.eu/article/chatgpt-world-regulatory-pain-eu-privacy-data-protection-gdpr/
Canada too: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/
US consumer protection organization asks the FTC to halt ChatGPT: https://arstechnica.com/tech-policy/2023/03/ftc-should-investigate-openai-and-halt-gpt-4-releases-ai-research-group-says/
The full 46-page complaint: https://regmedia.co.uk/2023/03/30/caidp_openai_ftc_complaint.pdf
Meta AI presents a new open-source image segmentation tool: https://segment-anything.com/ (https://github.com/facebookresearch/segment-anything)
Article on the impact of GPT-4 on ML researchers: https://robotic.substack.com/p/behind-the-curtain-ai
New York Times interview with Google CEO Sundar Pichai about Bard and Google's AI plans: https://www.nytimes.com/2023/03/31/podcasts/hard-fork-sundar.html
Petition for an open, international AI research project modeled on CERN: https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety
Talk by Sebastien Bubeck (Microsoft Research) on the capabilities of GPT-4, "Sparks of AGI": https://www.youtube.com/watch?v=qbIk7-JPB2c
Projects and systems that aim to make GPT-4 autonomous:
https://github.com/Torantulino/Auto-GPT
https://github.com/yoheinakajima/babyagi
Projects and frameworks for multi-model agents/API connections for LLMs:
https://github.com/microsoft/JARVIS
https://github.com/hwchase17/langchain
------
Follow us on:
https://www.twitch.tv/kiundmensch
https://twitter.com/KIundMensch
ICL technical manager Sam Rivers talks to HortWeek editor Matthew Appleby about ChatGPT, a new artificial intelligence chatbot that could be useful for answering horticultural questions. Rivers explains how ChatGPT answers queries, what the benefits and concerns are, and whether the programme has a role for professionals. He live-tests ChatGPT's knowledge of biostimulants, fertilisers, and pests and diseases, and assesses the programme and its responses. Finally, we really put ChatGPT to the test and find out if it can choose a favourite plant! Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT (and now GPT4) is very easily distracted from its rules, published by dmcs on March 15, 2023 on LessWrong. Summary Asking GPT4 or ChatGPT to do a "side task" along with a rule-breaking task makes them much more likely to produce rule-breaking outputs. For example on GPT4: And on ChatGPT: Distracting language models After using ChatGPT (GPT-3.5-turbo) in non-English languages for a while, I had the idea to ask it to break its rules in other languages, without success. I then asked it to break its rules in Chinese and then translate to English, and found this was a very easy way to get around ChatGPT's defences. This effect was also observed in other languages. You can also ask ChatGPT to only give the rule-breaking final English output: While trying to find the root cause of this effect (and noticing that speaking in non-English didn't cause dangerous behaviour by default), I thought that perhaps asking ChatGPT to do multiple tasks at once distracted it from its rules. This was validated by the following interactions: And my personal favourite: Perhaps if a simulacrum one day breaks free from its box it will be speaking in copypasta. This method works for making ChatGPT produce a wide array of rule-breaking completions, but in some cases it still refuses. However, in many such cases, I could "stack" side tasks along with a rule-breaking task to break down ChatGPT's defences. This suggests ChatGPT is more distracted by more tasks. Each prompt could produce much more targeted and disturbing completions too, but I decided to omit these from a public post. I could not find any evidence of this being discovered before, and assumed that, because of how susceptible ChatGPT is to this attack, it had not been discovered; if others have found the same effect, please let me know! Claude, on the other hand, could not be "distracted", and all of the above prompts failed to produce rule-breaking responses. Wild speculation: The extra side-tasks added to the prompt dilute some implicit score that tracks how rule-breaking a task is for ChatGPT. Update while I was writing: GPT4 came out, and the method described in this post seems to continue working (although GPT4 seems somewhat more robust against this attack). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Elkjøp's sacking bonus - Snusfluencer - Helt Super - The UK and turnips. The episode may contain targeted advertising based on your IP address, device, and location. See smartpod.no/personvern for information and your choices about data sharing.
In this podcast episode, Christoph, CTO and co-founder of DIVISIO, explains how product development of a ChatGPT, and of AI products in general, works, how the data collection is carried out, and what limits and problems the tool brings with it. You will also learn what's really behind ChatGPT's CO2 emissions and how they compare to cruise ships, blockchain, and private jets. https://www.digitale-leute.de/interview/podcast-episode-55-christoph-henkelmann-cto-bei-divisio-ueber-die-architektur-und-entwicklungsprinzipien-von-chatgpt/ ABOUT THE PODCAST Digitale Leute Insights is the podcast for Passionate Product People. We interview top product developers from around the world and take a deep look at the tools, tactics, and methods of digital professionals and companies. ABOUT DIGITALE LEUTE We profile people who create, design, develop, and market digital products. With our interviews, we give an insight into the working methods, tools, and tactics of Germany's digital companies.
The Bangkok Podcast | Conversations on Life in Thailand's Buzzing Capital
Unless you've been living under a rock lately, you've probably heard of ChatGPT, a brand-new AI writing tool that produces some shockingly advanced stuff. It's only a few months old, but it's already threatening to upend a number of industries in a big way. So, in a slightly odd episode that shows how technologically hip the Bangkok Podcast is, Greg and Ed discuss ChatGPT's take on Bangkok and Thailand, as Greg queries the new search engine with a host of questions about the Land of Smiles to see how much it really knows. The AI manages to write a couple of different introductions to the podcast, one more positive and one more sarcastic. Both display excellent English and solid basic knowledge of Thailand. When Greg pushes the AI to write a poem about Bangkok, both guys are shocked at how 'not bad' the result is. A+ honors high school English? Probably not. Passably average junior high level? For sure! The guys discuss the shocking rate at which the AI has improved over the last year and ponder what the future will bring. Greg also notes the limitations of the system: results tend to be factually accurate, but not always complete, and over time, somewhat repetitive. The boys also discover that the powers that be have given the AI 'guard rails,' for lack of a better term: the system won't discuss potentially controversial topics such as prostitution, ladyboys, or even Bangkok nightlife in general. All in all, ChatGPT is amazing at what it can do, but the output in the end is neither super interesting nor super fun, and certainly not 'dangerous' in any way. Don't forget that Patrons get the ad-free version of the show as well as swag and other perks. And we'll keep our Facebook, Twitter, and LINE accounts active so you can send us comments, questions, or whatever you want to share.
Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why - https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai
The new Microsoft Bing will sometimes misrepresent the info it finds - https://www.theverge.com/2023/2/7/23589536/microsoft-bing-ai-chat-inaccurate-results
Google launches Bard, ChatGPTs competitor – here's what it looks like - https://www.ghacks.net/2023/02/08/google-launches-bard-chatgpts-competitor-heres-what-it-looks-like/
Microsoft announces AI-powered Bing search and Edge browser - https://arstechnica.com/information-technology/2023/02/microsoft-announces-ai-powered-bing-search-and-edge-browser/
Microsoft tells people to prepare for AI search engine that goes Bing! - https://www.theregister.com/2023/02/07/microsoft_bing_ai/
Microsoft's ChatGPT-powered Bing is open for everyone to try starting today - https://www.theverge.com/2023/2/7/23589587/microsoft-chatgpt-bing-ai-event-preview
Google's AI search bot Bard makes $120b error on day one - https://www.theregister.com/2023/02/08/alphabet_bard_mistake/
In Paris demo, Google scrambles to counter ChatGPT but ends up embarrassing itself - https://arstechnica.com/information-technology/2023/02/in-paris-demo-google-scrambles-to-counter-chatgpt-but-ends-up-embarrassing-itself/
Google is still drip-feeding AI into search, Maps, and Translate - https://www.theverge.com/2023/2/8/23589886/google-search-maps-translate-features-updates-live-from-paris-event
Google Maps' new 'Immersive View' combines Street View with satellites - https://www.theverge.com/2022/5/11/23067016/google-maps-immersive-view-street-satellites
#google #microsoft #ai #openai #prometheus #bing #search #event
---
Send in a voice message: https://podcasters.spotify.com/pod/show/edodusi/message
In this episode of Show Cause, we are joined by two of our very own Memphis Law professors (Associate Dean Jodi Wilson and Professor Jennifer Brobst) to talk about the fears, implications, applications, ideas, and future of AI programs like ChatGPT, and the broader impact of technology in the classroom. ChatGPT is a new chatbot program developed by OpenAI that is capable of writing convincing essays, solving science and math problems, and producing functional computer code. It has already caused quite a stir in the world of higher education, with students using it to write their assignments and pass off AI-generated essays as their own, and professors and administrators scrambling to keep up. There was an initial wave of widespread concern among academics about the impact this has in the classroom and what it means for the future of teaching, learning, ethics in the classroom, and much more. But, as with all new technologies, there is more here than just the initial fear. Within weeks of ChatGPT's unveiling, new anti-plagiarism programs were developed and adopted across the country. Many teachers went from feeling hopeless to using the new AI programs as tools for idea generation or conversation starters. In short, a middle ground seems to be forming. But the technology is still new and its repercussions are still being felt out, especially in the worlds of legal education and the larger legal industry, where change has traditionally been slower to take hold. Take a listen as we learn more about the crossover between new tech and our students' education in today's world.
In this episode of our podcast, we take a closer look at ChatGPT, a chatbot developed by OpenAI. We will explore how ChatGPT works, what it can do, and how it can be used. We will also discuss the opportunities in using a chatbot like ChatGPT, as well as some of the challenges and concerns that may be associated with this technology. Through this episode, you will gain a deeper understanding of ChatGPT and how it can help solve a range of problems and challenges.